
Machine Learning Apps at PropertyGuru - #H2OWorld 2019 NYC

This session was recorded in NYC on October 22nd, 2019. Slides from the session can be viewed here.

PropertyGuru is the largest proptech company in Southeast Asia. We enable our customers to find their dream homes and add value for the agents who trust our platform to match them with the right property seekers. In this session, I will talk about how we are using machine learning to build products and experiences that help people make confident property decisions. I will cover how we guide property seekers and agents with innovative ways to search listings and personalised recommendations, and how we build models to maintain the quality of the listings that they interact with.


Gautam Borgohain is a data scientist and software engineer with over 7 years of experience building and leading data science products across various industries and projects, including recommendation systems, image classification and object detection services, NLP, property valuation, and credit risk evaluation. He obtained his master's degree in Analytics from Nanyang Technological University in Singapore. Before joining PropertyGuru, Gautam gained cross-industry experience with previous stints at a fintech start-up, a university, and a software company. He loves spending hours analysing data and developing smarter applications with machine learning.

Read the Full Transcript

Gautam Borgohain:


Hello everybody. My name is Gautam Borgohain. I'm a data scientist at PropertyGuru, and in this session I'm going to talk about how we use machine learning at PropertyGuru. Before I start, let me give a brief overview of PropertyGuru itself.

We are an online property marketplace, and we connect property seekers to property agents. We started a while back, in 2007, in Singapore, and now we are in five different countries, including Malaysia, Indonesia, Vietnam, and Thailand. We are actually the leading proptech company in Southeast Asia. We have over 2.4 million property listings, and we have a monthly user base of over 23 million users. As a company, our vision is to be the trusted advisor for property seekers. And in order to achieve that vision, we know that we need to help people make confident property decisions through relevant content, actionable insights, and world-class service.

Now, as you can see from our mission statement itself, with relevant content and actionable insights at its core, machine learning plays an important role in achieving our vision. The data science team at PropertyGuru has five data science and machine learning engineers, with different backgrounds spanning engineering, bioinformatics, and statistics. And we rely on tools like Driverless AI to find the best possible models for the different use cases that we work on.

In today's session, I'm going to talk about some of the areas where we're using machine learning to build products that help people make confident property decisions. More specifically, I'm going to talk about two different areas: first, how we enable better discovery of listings, and second, how we maintain the quality of those listings.

First up, let's look at PropertyGuru Lens, which is how we enable the discovery process for users. Normally, property seekers use filters on our website to search for listings, shortlist them, and so on. But if you are looking for all the available units in a building that you just saw and liked, and you want to go online and search for it, you'll have to use your detective skills. You'll have to remember where you saw that building, what color it was, et cetera. So we recently launched PropertyGuru Lens, the first app of its kind in Southeast Asia, which uses augmented reality to allow property seekers to discover properties by just pointing their phones at them. The app then presents a list of all the available units in that building, for rent and for sale. It is like having your own personal property expert on the device itself.

We are able to do this by running a couple of models that take input from the camera and geolocation data. And we do some field of view adjustments to make sure we are accurately identifying the building that you’re looking at. All the processing that we do happens on the device itself. So it’s private and it has been optimized so that it uses minimal battery power. Actually, the model itself is just two megabytes in size and the latency for scoring over a catalog of 8,000 condos is just six milliseconds.
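To make the camera-plus-geolocation idea concrete, here is a minimal sketch (not PropertyGuru's actual code) of how GPS distance combined with a field-of-view check on the compass heading could shortlist candidate buildings before any visual model runs. All building names, coordinates, and thresholds are invented for illustration:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from point 1 to point 2, in degrees [0, 360)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def candidates(user, heading, catalog, fov=60.0, max_dist=500.0):
    """Buildings within max_dist metres and inside the camera's field of view.

    user: (lat, lon); heading: compass direction the phone is pointing;
    catalog: list of (name, lat, lon) tuples.
    """
    out = []
    for name, lat, lon in catalog:
        if haversine_m(user[0], user[1], lat, lon) > max_dist:
            continue
        # Smallest signed angle between the camera heading and the building.
        diff = abs((bearing_deg(user[0], user[1], lat, lon) - heading + 180) % 360 - 180)
        if diff <= fov / 2:
            out.append(name)
    return out
```

In a real on-device pipeline this shortlist would then be refined by the vision model scoring against the condo catalog.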

Another thing we use to help users in the discovery process is personalized recommendations. Recommendation engines are used almost everywhere now, but while building the recommendation engine for PropertyGuru, we had a couple of considerations to keep in mind that the available off-the-shelf solutions could not handle.

Firstly, listings expire. That is, once an agent completes a deal, he takes the listing off our site. And even if the unit were to come back on the market, it would come back as a different listing. Now, recommendation engines generally rely on the signal of similar items occurring in different users' activities, and because of this timeframe limitation, that signal is very sparse for us. So we had to adapt our algorithm to work around that, and to handle older listings, which do not get as much user activity.

Secondly, we know from previous user research that property seekers have different preferences at different stages of their journey to finding a home. In the initial discovery phase, they expect more variety in their recommendations, and they are looking at multiple different properties, and even locations. Further down the line, after a couple of weeks, they want more granular recommendations, where they are basically looking for the best deal. Taking all these factors into account, our custom recommendation engine is now able to serve our catalog of over 2 million listings, across the regions we operate in, to our user base of over 20 million users. And we personalize in real time, with our API operating at just 95 milliseconds of latency.
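The stage-aware idea described above can be sketched as a simple re-ranker: early in the journey, favour variety across districts; later, weight relevance by how good a deal each unit is. The fields, weights, and two-week cutoff here are invented placeholders, not PropertyGuru's actual logic:

```python
def rerank(listings, weeks_active, top_k=3):
    """listings: (id, district, relevance, price_below_market_pct) tuples."""
    explore = weeks_active < 2  # discovery phase vs. deal-hunting phase
    pool = sorted(listings, key=lambda l: l[2], reverse=True)
    if explore:
        # Greedily prefer districts not yet shown, to keep recommendations varied.
        seen, ranked = set(), []
        for l in pool:
            if l[1] not in seen:
                ranked.append(l)
                seen.add(l[1])
        ranked += [l for l in pool if l not in ranked]
    else:
        # Weight pure relevance by how far below market the asking price is.
        ranked = sorted(pool, key=lambda l: l[2] * (1 + l[3] / 100), reverse=True)
    return [l[0] for l in ranked[:top_k]]
```

The same candidate pool thus produces different orderings depending on where the seeker is in their journey.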

Also, to help in the discovery process, we give the agents who post these listings valuable insights into how their listings are performing, along with tools that allow them to selectively manage and promote their high-quality listings. For example, here we are asking this agent to promote this particular listing, because we think it is going to get him more leads. How do we know this? In the background, we run a model that evaluates the listing's performance based on current demand and supply and the competition from similar listings, and calculates the propensity for the listing's performance to increase, that is, to get more leads, if it were promoted within a certain period of time. Insights like this not only help agents get more value out of our platform, but also help match the right listings to the right users.

Now let's look at how we maintain the quality of the listings. While we are doing all this work of aiding the discovery process, maintaining the quality of the things that our property seekers see is also very important. Since we are online, agents are uploading all kinds of content: they add images to listings and write detailed descriptions. Property seekers need that content to be high quality, that is, relevant and informative. When you're browsing online, you're basically looking for a virtual tour experience of the property. Now, up until April of 2019, we allowed images that did not meet all our guidelines to be uploaded regardless. So our site used to look something like this, with a lot of overlays, distracting banners, neon-colored phone numbers, and face overlays.

As a property seeker, that creates a very distracting experience, because you're not able to see what the property is, or how the room actually looks; you're being distracted by all this noise. So we started moderating the images that are uploaded. And given the volume of images uploaded across the different regions, we use multiple machine learning models to run various checks before the listing goes live. Our image moderation stack basically consists of four different tasks.

First, we detect banners, overlays, and text, like the banners here on the top and on the bottom. Second, we do object detection to localize some of these elements, along with other objects specific to properties, like sofas. We do this to know exactly where, and how much of the image area, is being covered by these faces and overlays.
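Once a detector has returned bounding boxes for overlays and banners, the "how much area is covered" step reduces to computing the covered fraction of the image. Here is a minimal sketch, with the box format, the pixel-union approach, and the 10% threshold all assumed for illustration:

```python
def coverage(img_w, img_h, boxes):
    """Fraction of image pixels covered by at least one (x1, y1, x2, y2) box.

    Boxes use exclusive x2/y2 bounds; overlapping boxes are only counted once
    via the pixel set (fine for a sketch, too slow for production sizes).
    """
    covered = set()
    for x1, y1, x2, y2 in boxes:
        for x in range(max(0, x1), min(img_w, x2)):
            for y in range(max(0, y1), min(img_h, y2)):
                covered.add((x, y))
    return len(covered) / (img_w * img_h)

def violates_guidelines(img_w, img_h, boxes, max_cover=0.10):
    """Flag the image if overlays cover more than max_cover of its area."""
    return coverage(img_w, img_h, boxes) > max_cover
```

A production version would compute the union area geometrically rather than pixel by pixel.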

Thirdly, we do scene detection: whether the image is of an indoor kitchen, or an outdoor dining area, and so on. We do this so that we know how well the images cover the unit. That is, do we have images of all the bedrooms mentioned in the listing or not?
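The coverage check at the end of that step can be sketched by comparing the rooms a listing claims against the scene labels predicted for its photos. The room labels here are invented; any real system would use the scene model's own taxonomy:

```python
from collections import Counter

def missing_rooms(listing_rooms, photo_scenes):
    """Room types the listing claims but the photos under-cover.

    Uses multiset subtraction so that a listing with two bedrooms needs
    two bedroom photos, not just one.
    """
    need = Counter(listing_rooms)
    have = Counter(photo_scenes)
    return sorted((need - have).elements())
```

The output could then drive a prompt asking the agent to upload photos of the missing rooms.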

Lastly, we evaluate the quality of the images that are uploaded. We use a model that generates an aesthetic score for each uploaded image, and we use that along with other image quality features, like sharpness, brightness, and exposure levels, to recommend the best cover image for a listing.
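A toy sketch of that cover-image selection: combine the model's aesthetic score with simple image statistics. The weights, the [0, 1] normalisation, and the mid-tone exposure penalty are all made-up illustrations, not the actual scoring formula:

```python
def cover_score(aesthetic, sharpness, brightness):
    """All inputs normalised to [0, 1]; higher is better."""
    # Penalise brightness far from a mid-tone target of 0.5 (over/under-exposed).
    exposure_penalty = abs(brightness - 0.5) * 2
    return 0.6 * aesthetic + 0.3 * sharpness + 0.1 * (1 - exposure_penalty)

def pick_cover(images):
    """images: list of (name, aesthetic, sharpness, brightness) tuples."""
    return max(images, key=lambda img: cover_score(*img[1:]))[0]
```

In practice sharpness and brightness would be derived from the pixels themselves (for example a Laplacian variance and a mean luminance).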

That's image moderation. Text is uploaded on our site as descriptions, where agents add more context, like how old the property is and any additional preferences, such as whether or not it's a co-sharing apartment. Moderating these descriptions is also very important. We do some general topic analysis on these descriptions, like checking whether the topic is relevant to property or not, and we blacklist keywords based on our prior experience of things we don't want mentioned. But bias detection is something we do that is quite important, actually. What is it? Bias detection is detecting phrases in text that hint at, or explicitly state, a preference based on race. For example, if a listing's description says that inquiries from only a particular race would be considered, that is something we do not want on our platform.
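The keyword blacklisting mentioned above can be sketched as a simple regex screen. The phrases below are invented placeholders, not PropertyGuru's actual list, and a real screen would sit in front of the learned bias model rather than replace it:

```python
import re

# Invented placeholder patterns for phrases a marketplace might disallow.
BLACKLIST = [r"\bonly\s+\w+\s+need\s+apply\b", r"\bno\s+foreigners\b"]

def flag_description(text):
    """Return the blacklisted patterns found in the description, if any."""
    hits = []
    for pattern in BLACKLIST:
        if re.search(pattern, text, flags=re.IGNORECASE):
            hits.append(pattern)
    return hits
```

A non-empty result would route the listing to the stricter model-based check or to manual review.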

So we built a model that detects these inherent and explicitly stated biases in the descriptions. We started off with a bi-directional LSTM model using word2vec embeddings. But soon we realized that a lot of the words used in these contexts, when specifying these preferences, did not exist in the general English vocabulary of the available embeddings. For example, words like "Chindian", which refers to a person of mixed Chinese and Indian descent. This led us to experiment with bigger models like BERT, and we got the performance we needed, but these are really big models, with 110 million parameters, that were very hard to deploy. So we used knowledge distillation to train a model that is 10 times smaller in size, while getting the same inference quality. And right now we do batch moderation of these listings and remove all the listings that violate these particular guidelines.
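The distillation objective can be sketched generically: train the student to match the teacher's temperature-softened output distribution. This is the standard Hinton-style formulation, not PropertyGuru's training code, and a full loss would also blend in the hard-label cross-entropy:

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax over temperature-scaled logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence from the teacher's soft targets to the student's outputs."""
    p = softmax(teacher_logits, temperature)  # teacher's soft targets
    q = softmax(student_logits, temperature)  # student's predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

Minimising this loss pushes the smaller student to reproduce the large model's behaviour, which is what allows the 10x size reduction without losing inference quality.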

Lastly, I want to end by talking a bit about model deployment. As you can imagine, deploying the models we just talked about in a scalable and performant manner is very important. Due to the small size of the team, and because we handle the whole process end to end, we prefer serverless deployment, whether the models are built with PyTorch or via Driverless AI. In the example I've shown here, this is the deployment of one of the tasks in the image moderation stack I talked about earlier, where we use models for detecting face overlays, text overlays, and NSFW content in images. We concurrently call the different Lambda functions serving these models, then collate their results, whether they are performing the same task or different tasks, and process them for downstream consumption.
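The fan-out-and-collate pattern can be sketched with plain Python stand-ins for the Lambda functions. In production each check would be a separate AWS Lambda invocation; the function names and verdict format here are invented:

```python
from concurrent.futures import ThreadPoolExecutor

def detect_face_overlay(image):
    return {"check": "face_overlay", "ok": True}   # placeholder model

def detect_text_overlay(image):
    return {"check": "text_overlay", "ok": False}  # placeholder model

def detect_nsfw(image):
    return {"check": "nsfw", "ok": True}           # placeholder model

CHECKS = [detect_face_overlay, detect_text_overlay, detect_nsfw]

def moderate(image):
    """Run all checks concurrently and collate them into a single verdict."""
    with ThreadPoolExecutor(max_workers=len(CHECKS)) as pool:
        results = list(pool.map(lambda fn: fn(image), CHECKS))
    return {"passed": all(r["ok"] for r in results), "results": results}
```

Because each check is independent, the end-to-end latency is roughly the slowest single check rather than the sum of all of them.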

By making deployments like this serverless, we saw that, especially for our image moderation use case, we could process more than 10,000 images a second, which is much more capacity than we actually require for our live API. And we've noticed that having this process of deploying to serverless, on AWS Lambda, means we always end up with that extra step of optimizing the models further, for example using knowledge distillation or quantization to reduce a model's size to fit within Lambda's size restrictions. So not only do the models become smaller, they also become more efficient and faster, and they are more cost-effective.

In fact, we found that hosting our image moderation stack on Lambda reduced our costs by almost 97% compared to our previous way of hosting it on GPU instances. So we really think that our current approach, using the best tools that help us iterate faster, and using serverless deployment and Driverless AI to deploy and maintain these models, has been really helpful for us.

And with that, that's all for me. Thank you so much for listening. If you want to reach out, feel free to email me. Thank you.