November 25th, 2014

Key Takeaways from the World’s Top Kagglers


Ever wondered why data science is so competitive? After a highly successful H2O World event last week, we’re shining some light on what we’ve learned from some of the world’s best data scientists and how they go about winning data science challenges such as those hosted on Kaggle. In case you missed it, we held a Competitive Data Science Panel at H2O World for which we invited top-notch data scientists, and we are very lucky that they shared some of their priceless secrets with us!

Our panelists were (from left to right):
+ Jose Guerrero, #8 at Kaggle, formerly #1
+ Guocong Song, #12 at Kaggle, formerly #8
+ Mark Landry, #123 at Kaggle, formerly #110
+ Chris Severs, data scientist at eBay
+ Arno Candel, H2O.ai (moderator)
Disclaimer: The views and opinions expressed herein are those of the author and the panelists and do not reflect the views and opinions of anyone else. Your chances of winning a Data Science competition will remain ~~infinitesimally~~ small. The information set forth herein has been obtained or derived from sources believed by the author to be reliable. However, the author does not make any representation or warranty, express or implied, as to the information’s accuracy or completeness, nor does the author recommend that the attached information serve as the basis of any data science challenge submission.
And here’s what you’ve been waiting for! The key takeaways from the world’s top Kagglers!

Question: What’s the point of data science competitions?

Jose recommended that we watch the following video:


Fairly convincing, eh? Alright. Let’s get back to data science, and see what the experts had to say!

Question: What’s more important? Data exploration? Feature engineering/mining? Model tuning? Better algorithms?

  1. Mark: Don’t forget Exploratory Data Analysis to better understand your data (Note: EDA was first introduced by John Tukey, after whom the second stage at H2O World was named)
  2. Jose: Tree-based methods such as Random Forests or Gradient Boosting Machines are great default algorithms
  3. Jose: If there’s strong linear dependency of features with the response, linear regression or SVM models can give good results
  4. Guocong: Better algorithms can make the difference if it’s difficult to extract information from features (e.g., Higgs dataset)
  5. Guocong: Feature engineering can make a huge difference if done right
  6. Mark: Real world: You need all of the above: best-of-breed algorithms, sophisticated feature engineering, model tuning, ensembling
  7. Mark: Always understand how well your algorithm is doing, establish a baseline (e.g., compare to the mean)
  8. Chris: In industry, most of the work is in feature engineering, and there are often runtime considerations (e.g., real-time scoring must be fast, large ensembles can be too slow)
  9. Arno: Simple, fast algorithms such as stochastic gradient descent with feature hashing can outperform sophisticated models if there are many predictors (e.g., lots of categorical levels) – see the sketch below
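
Arno’s last point is easy to sketch with today’s tools. Below is a minimal, hypothetical example (not from the panel) using scikit-learn’s FeatureHasher and SGDClassifier to fit a linear model on hashed categorical features; the column names and data are made up.

```python
from sklearn.feature_extraction import FeatureHasher
from sklearn.linear_model import SGDClassifier
from sklearn.pipeline import make_pipeline

# Each row is a dict of categorical features that may have millions of levels.
rows = [{"user": "u1", "ad": "a9"}, {"user": "u2", "ad": "a9"},
        {"user": "u1", "ad": "a3"}, {"user": "u3", "ad": "a3"}]
y = [1, 0, 1, 0]

# Hash the categorical levels into a fixed-size sparse feature space, then fit a
# logistic-loss SGD model on them (use loss="log" on older scikit-learn versions).
model = make_pipeline(
    FeatureHasher(n_features=2**20, input_type="dict"),
    SGDClassifier(loss="log_loss", max_iter=20),
)
model.fit(rows, y)
print(model.predict([{"user": "u1", "ad": "a9"}]))
```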

Question: What are your favorite tools? R, Python, Matlab/Octave, SQL, Excel, H2O, …?

  1. Chris: For data munging: Prefer Scala over Pig/Hive because it has compile-time type checking
  2. Mark: Use SQL to explore your dataset
  3. Mark: Visualize with Tableau, R (ggplot2) or Excel
  4. Guocong: H2O is easy to run, anyone can use it to get a summary on big data or run sophisticated algorithms
  5. Guocong: Keep learning! Use new tools and languages, there’s lots of stuff out there: Scala, Java 8, H2O, Python
  6. Jose: Python: scikit-learn is well-structured and has a nice API (see the sketch after this list)
  7. Arno: h2o-dev will have improved API as well, similar to scikit-learn
  8. Jose: R data management is poor, but there’s Matt Dowle’s data.table
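
To illustrate Jose’s point about scikit-learn’s API design, here is a tiny, hypothetical sketch (toy data, not from the panel): every estimator exposes the same fit / predict / score interface, so swapping models is a one-line change.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# The calls are identical regardless of the model class.
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_train, y_train)
    print(type(model).__name__, model.score(X_test, y_test))
```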

Question: Feature engineering/mining – manual or automatic?

  1. Chris: Try deep learning to do automatic feature engineering – automation is good (for industry)
  2. Mark: Manual feature creation – domain experts can help a lot – understand your data, what’s the distribution of categoricals? Try interaction features, equivalence
  3. Mark: Important to keep track of decisions made: Is log transform needed? Was it done? On which features?
  4. Mark: Allocate time to run ensembles before the submission deadline
  5. Mark: Not all features work well in combination with a certain model type (interactions good for linear, not always for tree-based)
  6. Guocong: If you have lots of categoricals, tree-based models will run slower (you need to apply some tricks to engineer features that let trees be built faster)
  7. Jose: Step-wise selection of features over many columns can lead to overfitting; better to use strong regularization instead of feature selection – prefer the Lasso or a restricted tree-based method (see the sketch after this list)
  8. Mark: Sometimes useful to add new data to get new features (e.g., weather data helps for airline data) – hard to automate this
  9. Mark: The only way to stand out in data science competitions is to have better features (and the best algos)
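
Jose’s regularization advice can be sketched roughly as follows; this is a hedged, synthetic example (not his actual pipeline) using scikit-learn’s LassoCV, where the L1 penalty drives most coefficients to exactly zero instead of relying on stepwise selection.

```python
import numpy as np
from sklearn.linear_model import LassoCV

rng = np.random.RandomState(0)
X = rng.randn(200, 500)                       # many more columns than the signal needs
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.randn(200)

# LassoCV tunes the regularization strength by cross-validation; the L1 penalty
# zeroes out irrelevant features rather than selecting them step by step.
model = LassoCV(cv=5).fit(X, y)
print("non-zero coefficients:", int(np.sum(model.coef_ != 0)), "of", X.shape[1])
```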

Question: Is it OK to sample the data?

  1. Jose: Sampling is only useful for a quick first look – to win a competition, you’ll need all the data
  2. Jose: Over/Under-sampling is a fine tool, if you know what you are doing
  3. Guocong: If data distribution is stationary (or you don’t know), use all the data
  4. Guocong: If data changes over time, the latest (newest) data might lead to a better model
  5. Chris: If data distribution is well understood, can be OK to sample (some models such as streaming K-means can quickly build a sketch, good for industry)
  6. Mark: Even the 10% of data held out in each fold of 10-fold cross-validation hurts, but you have to do it (see the sketch below)
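
As a reminder of what Mark is referring to, here is a minimal 10-fold cross-validation sketch on toy data (the model and dataset are placeholders, not from the panel): each fold trains on 90% of the rows and is scored on the held-out 10%.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

# 10-fold CV: ten models, each trained on 90% of the data and scored on the rest.
scores = cross_val_score(GradientBoostingClassifier(random_state=0), X, y, cv=10)
print(scores.mean(), scores.std())
```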

Question: What are your favorite algorithms?

  1. Jose: GBM, RF, SVM, GLM, and lastly, Neural Nets (hardest to tune)
    1. Arno: Try H2O Deep Learning!
  2. Guocong: GLM (won 3 competitions with it), trees, deep learning
  3. Mark: GBM, ridge regression, deep learning, superlearner
  4. Chris: RF
  5. Arno: I use them all: First GLM, RF, GBM, then try to beat them all with Deep Learning (see the sketch below)
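
A rough sketch of that workflow using scikit-learn stand-ins (the panelists used H2O and other tools; the models and data below are placeholders): fit a GLM, a Random Forest, and a GBM as baselines, then check whether a neural net beats them.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=30, random_state=0)

models = {
    "GLM": LogisticRegression(max_iter=1000),
    "RF": RandomForestClassifier(n_estimators=200, random_state=0),
    "GBM": GradientBoostingClassifier(random_state=0),
    "NN": MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0),
}

# Compare all model families with the same 5-fold cross-validation.
for name, model in models.items():
    print(name, cross_val_score(model, X, y, cv=5).mean())
```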

Question: What about ensembles? Weak vs strong learners?

  1. Jose/Mark: Ensembles work best with independent models (such as RF or linear models)
  2. Mark: Even an ensemble of just two models can make a big difference
  3. Jose: Use out-of-bag predictions during bagging; build a strong ensemble with stacking (see the sketch after this list)
  4. Jose: If bagged predictions are not independent, use generalized additive model with a spline with few degrees of freedom
  5. Guocong: Industry is now using ensembles: Geoffrey Hinton’s talk on Dark Knowledge – Google uses lots of ensembles
  6. Mark: Data size matters: Purely random trees can make a good ensemble for big data, ensembling lots of cheap models is no problem
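
The stacking idea Jose describes can be sketched roughly as follows; this is a simplified, hypothetical example on toy data, not his actual recipe: generate out-of-fold predictions from two fairly independent base models, then fit a simple second-level model on those predictions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict, train_test_split

X, y = make_classification(n_samples=2000, n_features=25, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

base_models = [RandomForestClassifier(n_estimators=200, random_state=0),
               LogisticRegression(max_iter=1000)]

# Out-of-fold predictions keep the level-2 training data honest (no leakage).
oof = np.column_stack([
    cross_val_predict(m, X_train, y_train, cv=5, method="predict_proba")[:, 1]
    for m in base_models
])
stacker = LogisticRegression().fit(oof, y_train)

# At test time, refit the base models on all training data and stack their predictions.
test_preds = np.column_stack([
    m.fit(X_train, y_train).predict_proba(X_test)[:, 1] for m in base_models
])
print("stacked accuracy:", stacker.score(test_preds, y_test))
```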

Question: What was the simplest hack you did to win a competition?

  1. Mark: A simple rule-based model can naturally avoid overfitting and can beat fancy machine learning algorithms
  2. Guocong: Used a simple hash table to win a competition
  3. Jose: Dataset had inches and centimeters mixed up (wrong data) – converted data into both units to win the competition

Question: What was the most complex sequence of operations you needed to significantly improve your ranking?

  1. Jose: Iterate nested loops with cross-validation, feature engineering and parameter tuning (see the sketch after this list)
  2. Arno: That’s what I always do, just without the sophisticated feature engineering – on its own that’s not good enough
  3. Guocong: Reading papers – takes time, as lots of machine learning papers are junk
  4. Mark: Calculating lots of features to extract the signal from a small dataset (black hole)
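
A compact, hypothetical version of the nested loop Jose mentions (toy data and parameters, not his actual setup): an inner cross-validated grid search tunes parameters, while an outer cross-validation estimates how well the whole tuning procedure generalizes; in practice, feature engineering gets folded into the loop as well.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV, cross_val_score

X, y = make_classification(n_samples=1000, random_state=0)

# Inner loop: tune hyperparameters with 3-fold cross-validated grid search.
inner = GridSearchCV(
    GradientBoostingClassifier(random_state=0),
    param_grid={"max_depth": [2, 3, 4], "learning_rate": [0.05, 0.1]},
    cv=3,
)

# Outer loop: estimate the generalization of the whole tuning procedure.
outer_scores = cross_val_score(inner, X, y, cv=5)
print(outer_scores.mean())
```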

Question: How much time have you invested in Kaggle so far? Do you remember your first competition?

  1. Mark: Started 2 years ago; Jose won that challenge (Practice Fusion Diabetes Classification)
  2. Guocong: Netflix Prize; took Andrew Ng’s Coursera course on Machine Learning; was an electrical engineer in a former career; learned programming, which helps a lot with being time-effective
  3. Jose: Kaggle is very addictive; reached the top position in December 2013 and has since gotten involved in lots of projects
  4. Jose: Placed 3rd in my first competition (got $0, first place took $500k), will never forget

Question: What tools would it take to make you even better at Kaggle?

  1. Jose: A tool to control sampling for cross-validation, bagging, time series, geographical data and grouping. You need to keep related data in the same folds and the same bags to get fair estimates, also for ensembles (see the sketch after this list)
  2. Guocong: Writes his own tools, data work flow, open-source projects, always looking for new tools
  3. Mark: A workflow helper tool to keep track of decisions (log transforms, data changes, features no longer needed) – one that checks correlations, for example
  4. Chris: GPU support for H2O!
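
Part of the fold control Jose asks for exists in today’s tools; as a small, hypothetical illustration (the group ids below are invented), scikit-learn’s GroupKFold keeps all rows belonging to the same group in the same fold, which is what you need for fair estimates on grouped or repeated-measure data.

```python
import numpy as np
from sklearn.model_selection import GroupKFold

X = np.arange(12).reshape(-1, 1)
y = np.random.RandomState(0).randint(0, 2, size=12)
groups = [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4]   # e.g., one id per customer

# No group ever appears in both the training and the test indices of a split.
for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups):
    print("test groups:", sorted({groups[i] for i in test_idx}))
```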

Question: What kind of hardware are you using?

  1. Jose: Dual-Xeon server with 256GB
  2. Guocong: 4-core with 32GB
  3. Mark: Laptop with 8GB + EC2
  4. Chris: 4000 node Hadoop cluster, 64 GPUs
