August 8th, 2013
Random Forest Measurements for the MNIST Dataset
This post discusses the performance of H2O’s Random Forest algorithm. We compare different versions of H2O with the Random Forest implementation by wise.io. We use wall-clock time to measure workflows that match the user experience. A link to the scripts used is available here.
- Amazon EC2 in US-EAST-1
- M2 High-Memory Quadruple Extra Large EC2 (m2.4xlarge)
- 26 ECUs, 8 vCPUs, 68.4 GB RAM
- We used a 100 GB EBS mount to host the code and data.
- No additional swap volume was needed.
- Software environment:
- Ubuntu Server 12.04 LTS (64 bit)
- Oracle Java 1.7.0_25
- Bash 4
- Software under test:
- H2O-Fourier-1 (60GB Heap)
- H2O-Fourier-6 (60GB Heap)
- WiseRF 1.5.9
- 0xdata recommends using the Oracle JDK/JRE.
- No swapping occurred.
- EC2 has variable performance and results are aggregate scores over many runs.
- We used H2O’s REST API. This API is intended for customer use.
- Caches were dropped before each run: sudo bash -c "sync; echo 3 > /proc/sys/vm/drop_caches"
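The cache-drop and timing steps above can be sketched as a small bash harness. This is a minimal sketch, not the actual benchmark scripts (which are linked above); the benchmarked command is a placeholder supplied by the caller.

```shell
#!/usr/bin/env bash
# Sketch of the per-run measurement loop used for wall-clock timings.

drop_caches() {
  # Flush dirty pages and evict the page cache so every run starts cold.
  # Requires root.
  sudo bash -c "sync; echo 3 > /proc/sys/vm/drop_caches"
}

time_run() {
  # Wall-clock seconds for a single command invocation.
  local start end
  start=$(date +%s)
  "$@" > /dev/null 2>&1
  end=$(date +%s)
  echo $((end - start))
}
```

In a real run, drop_caches would be called before each timed invocation so no configuration benefits from a warm page cache.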
The original MNIST data (60,000 28×28 images of hand-written digits) was expanded into 8.1 million instances by thickening, dilating, skewing, and contracting the original images as described here.
This expanded MNIST dataset is available here. There are 784 features with values in the range 0–255. The data was split into training and testing sets following the methodology described here.
- Dataset Name: mnist8m
- Number of Features: 784
- Number of Training Observations: 7,000,000
- Number of Testing Observations: 1,100,000
- Number of Classes: 10
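The 0–255 value range quoted above is what makes the uchar feature type applicable; a quick sanity check for it can be sketched with awk. The function below is illustrative (the CSV filename passed in is up to the caller).

```shell
# Sketch: verify every comma-separated value in a file falls in 0-255,
# the range that allows features to be stored as a single byte (uchar).
check_uchar_range() {
  awk -F, '{
    for (i = 1; i <= NF; i++)
      if ($i < 0 || $i > 255) { print "out of range"; exit 1 }
  } END { print "ok" }' "$1"
}
```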
Parameters & Methodology
The following parameters were used for comparing the H2O RF and WiseRF algorithms. These parameters were chosen using the methodology described here.
This methodology was previously shown to produce very low error rates when predicting on the test data (less than one-tenth of one percent in all cases).
Tests were performed on an Amazon EC2 instance. Since individual runs in EC2 can experience variability, each configuration was run 10 times. The graphs in the “Speed” section below show box plots for each configuration (each box represents 10 runs).
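The median annotations over the 10 runs per configuration can be computed with a small filter like the one below. This is a sketch of the aggregation step, not the actual plotting code.

```shell
# Sketch: median of newline-separated numbers on stdin, as annotated on
# the box plots (10 samples per configuration).
median() {
  sort -n | awk '{ a[NR] = $1 } END {
    if (NR % 2) print a[(NR + 1) / 2]
    else print (a[NR / 2] + a[NR / 2 + 1]) / 2
  }'
}
```

Usage: pipe the per-run wall-clock times for one configuration into `median`.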
- depth: 2147483647 (no limit)
- bin limit: 1024
- max depth: 0 (no limit)
- min node size: 1
- feature type: uchar, float, double
The following graphs show the measurements for H2O’s RF and WiseRF. All measurements are wall clock times. The graphs are broken down into overall run time, the time to parse the training file, the time to train the model, and the time to parse and score a test file. Note: Graphs are annotated with median times.
The graphs show that H2O’s RF performance improved significantly from Fourier-1 to Fourier-6. Overall, the WiseRF algorithm performs roughly equivalently to H2O when the dataset values do not fit perfectly within a single byte. Also note that, once a model is built, both H2O and WiseRF-uchar parse and score the test data in nearly the same amount of time.
The graph shows memory utilization (RSS) for each algorithm over time. These values were collected over one additional independent run for each algorithm. Note that none of the algorithms required swap to run.
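RSS over time, as shown in the memory chart, can be sampled from /proc on Linux. The sketch below is an assumption about how such sampling might be done (one sample per second; the PID is supplied by the caller), not the exact collection script.

```shell
# Sketch: report the resident set size (RSS, in kB) of a process by PID,
# read from the VmRSS line of /proc/<pid>/status (Linux only).
rss_kb() {
  awk '/^VmRSS:/ { print $2 }' "/proc/$1/status"
}

sample_rss() {
  # Print one RSS sample per second until the process exits.
  local pid=$1
  while kill -0 "$pid" 2> /dev/null; do
    rss_kb "$pid"
    sleep 1
  done
}
```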
Reproducing These Results
Scripts and steps to reproduce these results are available here; directions are in the README.
The data used to generate the charts above is available here and also below in the tables (memory tables excluded).
WiseRF-double Speed Summary
WiseRF-float Speed Summary
WiseRF-uchar Speed Summary
H2O-Fourier-1-aec3679e7c Speed Summary
H2O-Fourier-6-137 Speed Summary
Speed Table (All Data Points)