October 31st, 2013

0xdata and Yelp – Machine Learning for Relevance and Serendipity/Distributed Gradient Boosting


Join us and Yelp for a chat on Machine Learning, and make sure not to miss Sri's lightning talk on Distributed Gradient Boosting!

Main Talk: Machine Learning for Relevance and Serendipity
Speaker: Aria Haghighi (Prismatic)
Abstract: 
Careful use of well-designed machine learning systems can transform products by providing highly personalized user experiences. Unlike hand-tuned or heuristic-based personalization systems, machine learning allows millions of different potential indicators to inform each decision and is robust to many types of noise. In this talk, I will discuss our deeply integrated use of machine learning and natural language processing for content discovery at Prismatic. Our real-time personalization engine is designed to give our users not just the content they expect, but also a healthy dose of targeted serendipity, all based on relevance models learned from users’ interactions with the site. We use sophisticated machine learning techniques to classify stories by topic, determine story similarity, make topic suggestions, rate the value of different social connections, and ultimately determine the relevance of a particular story for a particular user. I will describe our personalized relevance model in detail, starting with our problem formulation, then discussing feature design, model design, evaluation metrics, and an experimental setup that allows quick offline prototyping without forcing users into the role of guinea pig. Our model’s combination of social cues, topical classification, publisher information, and analysis of the user’s prior interactions produces highly relevant and often delightfully serendipitous content for our users to consume.
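
For a concrete feel of what such a relevance model can look like, here is a purely illustrative R sketch, not Prismatic's actual system: the feature names (social_score, topic_match, publisher_score, prior_clicks) are hypothetical stand-ins for the signals the abstract mentions, and the data is simulated.

# Purely illustrative toy relevance model; all features and data are simulated.
set.seed(42)
n <- 1000
interactions <- data.frame(
  social_score    = runif(n),     # strength of social cues for the story
  topic_match     = runif(n),     # topical similarity to the user's interests
  publisher_score = runif(n),     # prior quality signal for the publisher
  prior_clicks    = rpois(n, 2)   # user's past engagement with similar stories
)
# Simulated label: did the user engage with the story?
p <- plogis(2 * interactions$topic_match + interactions$social_score +
            0.5 * interactions$publisher_score + 0.3 * interactions$prior_clicks - 2)
interactions$engaged <- rbinom(n, 1, p)

# Logistic regression as a stand-in relevance model
relevance_model <- glm(engaged ~ social_score + topic_match +
                         publisher_score + prior_clicks,
                       data = interactions, family = binomial())

# Score candidate stories and rank by predicted relevance
candidates <- interactions[1:5, ]
candidates$relevance <- predict(relevance_model, candidates, type = "response")
candidates[order(-candidates$relevance), ]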
Lightning Talk: Distributed Gradient Boosting
Speaker: SriSatish Ambati (0xdata)
Abstract: 
Boosting is a simple yet powerful technique for combining weak learners into an accurate ensemble. We present a distributed gradient boosting algorithm that is accessible from R, along with a simple API for rolling your own distributed machine learning algorithms for big data.
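
As context ahead of the talk, here is a minimal sketch of driving a distributed GBM from R via the h2o package. It assumes the h2o package's present-day interface (h2o.init, as.h2o, h2o.gbm), which may differ from the 0xdata API demonstrated at the meetup, and uses R's built-in iris data purely for illustration.

# Minimal sketch, assuming the current h2o R package interface.
library(h2o)
h2o.init()                      # start or connect to a local H2O cluster

iris_hex <- as.h2o(iris)        # push an R data frame into the cluster

# Gradient boosting: trees are fit sequentially, but each tree is
# built in a distributed fashion across the H2O cluster.
gbm_model <- h2o.gbm(
  x = c("Sepal.Length", "Sepal.Width", "Petal.Length", "Petal.Width"),
  y = "Species",
  training_frame = iris_hex,
  ntrees = 50,
  max_depth = 5,
  learn_rate = 0.1
)

h2o.performance(gbm_model)      # training metrics
predictions <- h2o.predict(gbm_model, iris_hex)
head(predictions)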
Tentative Schedule:
6:30-7:00 – socializing
7:00-7:20 – lightning talk
7:20-8:30 – main presentation
8:30-9:00 – socializing
 
Learn more and sign up at http://www.meetup.com/SF-Bayarea-Machine-Learning/events/146775042/?joinFrom=event
