
Spam Detection with Sparkling Water and Spark Machine Learning Pipelines


By H2O.ai Team | June 15, 2016

This short post revisits the “ham or spam” demo, posted earlier by Michal Malohlava, using our new API in the latest Sparkling Water for Spark 1.6 and earlier versions, which unifies Spark and H2O Machine Learning pipelines. It shows how to create a simple Spark Machine Learning pipeline and fit a model that can later be used to predict whether a particular message is spam or not.
Before diving into the demo steps, we would like to provide some details about the new features in the upcoming Sparkling Water 2.0:

  • Support for Apache Spark 2.0 and backwards compatibility with all previous versions.
  • The ability to run Apache Spark and Scala through H2O’s Flow UI.
  • H2O feature improvements and visualizations for MLlib algorithms, including the ability to score feature importance.
  • Visual intelligence for Apache Spark.
  • The ability to build Ensembles using H2O plus MLlib algorithms.
  • The power to export MLlib models as POJOs (Plain Old Java Objects), which can be easily run on commodity hardware.
  • A toolchain for ML pipelines.
  • Debugging support for Spark pipelines.
  • Model and data governance through Steam.
  • Bringing H2O’s powerful data munging capabilities to Apache Spark.

To run the code below, start your Spark shell with the Sparkling Water JAR attached, or use the sparkling-shell script, which does this for you.
You can start the Spark shell with Sparkling Water as follows:

$SPARK_HOME/bin/spark-submit \
--class water.SparklingWaterDriver \
--packages ai.h2o:sparkling-water-examples_2.10:1.6.5 \
--executor-memory=6g \
--driver-memory=6g /dev/null

The preferred versions are Spark 1.6 and Sparkling Water 1.6.x.

Prepare the coding environment

Here we just import all required libraries.

import org.apache.spark.SparkFiles
import org.apache.spark.ml.PipelineModel
import org.apache.spark.ml.feature._
import org.apache.spark.ml.h2o.H2OPipeline
import org.apache.spark.ml.h2o.features.{ColRemover, DatasetSplitter}
import org.apache.spark.ml.h2o.models.H2ODeepLearning
import org.apache.spark.sql.types.{StringType, StructField, StructType}
import org.apache.spark.sql.{DataFrame, Row, SQLContext}
import water.support.SparkContextSupport
import water.fvec.H2OFrame

Add our dataset to the Spark environment. The dataset consists of two columns: the first is the label (ham or spam) and the second is the message itself. We don’t have to explicitly create a Spark context since it’s already available via the sc variable.

val smsDataFileName = "smsData.txt"
val smsDataFilePath = "examples/smalldata/" + smsDataFileName
SparkContextSupport.addFiles(sc, smsDataFilePath)

Create SQL support.

implicit val sqlContext = SQLContext.getOrCreate(sc)

Start H2O services.

import org.apache.spark.h2o._
implicit val h2oContext = H2OContext.getOrCreate(sc)

Create a helper method which loads the dataset, performs some basic filtering, and finally creates a Spark DataFrame with two columns: label and text.

def load(dataFile: String)(implicit sqlContext: SQLContext): DataFrame = {
 val smsSchema = StructType(Array(
   StructField("label", StringType, nullable = false),
   StructField("text", StringType, nullable = false)))
 val rowRDD = sc.textFile(SparkFiles.get(dataFile)).map(_.split("\t")).filter(r => !r(0).isEmpty).map(p => Row(p(0), p(1)))
 sqlContext.createDataFrame(rowRDD, smsSchema)
}

Define the pipeline stages

In Spark, a pipeline is formed from two basic elements: transformers and estimators. Estimators usually encapsulate an algorithm for model generation, and their output are transformers. While the pipeline is being fitted, all transformers and estimators are executed, and each estimator is converted into a transformer. The model generated by the pipeline contains only transformers. More about Spark pipelines can be found in Spark’s pipeline overview.
In H2O we created a new type of pipeline stage called OneTimeTransformer. It works similarly to a Spark estimator in that it is only executed while the pipeline is being fitted. However, it does not produce a transformer during fitting, and the model generated by the pipeline does not contain the OneTimeTransformer.
An example of a one-time transformer is splitting the input data into training and validation datasets using H2O Frames. We don’t need this transformer to be executed every time we make a prediction with the model; we only need it to run when we are fitting the pipeline to the data.
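For illustration only (this snippet is not part of the demo pipeline), here is a minimal sketch of how a standard Spark estimator behaves, assuming the DataFrame returned by the load helper defined above: fitting the StringIndexer estimator produces a StringIndexerModel, which is the transformer that ends up inside a fitted pipeline, whereas an H2O one-time transformer runs only during fitting and leaves nothing behind in the model.

import org.apache.spark.ml.feature.{StringIndexer, StringIndexerModel}

// Hypothetical example, not part of the demo pipeline
val df = load("smsData.txt")        // DataFrame with "label" and "text" columns
val indexer = new StringIndexer().  // estimator: exposes fit()
 setInputCol("label").
 setOutputCol("labelIndex")
val indexerModel: StringIndexerModel = indexer.fit(df)  // fitting produces a transformer (the model)
val indexed = indexerModel.transform(df)                // transformer: DataFrame in, DataFrame out
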
The first pipeline stage uses Spark’s RegexTokenizer to tokenize the messages. We just specify the input column and the output column for the tokenized messages.

val tokenizer = new RegexTokenizer().
 setInputCol("text").
 setOutputCol("words").
 setMinTokenLength(3).
 setGaps(false).
 setPattern("[a-zA-Z]+")

Remove unnecessary words using Spark’s StopWordsRemover.

val stopWordsRemover = new StopWordsRemover().
 setInputCol(tokenizer.getOutputCol).
 setOutputCol("filtered").
 setStopWords(Array("the", "a", "", "in", "on", "at", "as", "not", "for")).
 setCaseSensitive(false)

Vectorize the words using Spark’s HashingTF.

val hashingTF = new HashingTF().
 setNumFeatures(1 << 10).
 setInputCol(tokenizer.getOutputCol).
 setOutputCol("wordToIndex")

Create inverse document frequencies based on the hashed words. This creates a numerical representation of how much information a given word provides in the whole message.
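For reference, Spark’s IDF uses the smoothed formula idf(t) = log((m + 1) / (df(t) + 1)), where m is the total number of messages and df(t) is the number of messages containing the term t; setMinDocFreq(4) below zeroes out terms that appear in fewer than 4 messages.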

val idf = new IDF().
 setMinDocFreq(4).
 setInputCol(hashingTF.getOutputCol).
 setOutputCol("tf_idf")

This pipeline stage is a one-time transformer. When setKeep(true) is called, it preserves the specified columns instead of deleting them.

val colRemover = new ColRemover().
 setKeep(true).
 setColumns(Array[String]("label", "tf_idf"))

Split the dataset and store the splits under the specified keys in H2O’s distributed key-value store, called DKV. This is a one-time transformer which is executed only during the fitting stage. The frame passed to the output is determined in the following order:

  1. If the train key is specified using the setTrainKey method and that key is also present in the list of keys, the frame with this key is passed to the output.
  2. Otherwise, if the default key “train.hex” is present in the list of keys, the frame with this key is passed to the output.
  3. Otherwise, the first frame in the list of keys is passed to the output.

val splitter = new DatasetSplitter().
 setKeys(Array[String]("train.hex", "valid.hex")).
 setRatios(Array[Double](0.8)).
 setTrainKey("train.hex")
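
With setRatios(Array(0.8)), roughly 80% of the rows should end up in train.hex and the remaining 20% in valid.hex.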

Create H2O’s deep learning model.
If the key specifying the training set is set using setTrainKey, the frame with this key is used as the training frame; otherwise, the frame from the previous stage is used as the training frame.

val dl = new H2ODeepLearning().
 setEpochs(10).
 setL1(0.001).
 setL2(0.0).
 setHidden(Array[Int](200, 200)).
 setValidKey(splitter.getKeys(1)).
 setResponseColumn("label")

Create and fit the pipeline

Create the pipeline using the stages we defined earlier. Like a normal Spark pipeline, it can be formed from Spark transformers and estimators, but it may also contain H2O’s one-time transformers.

val pipeline = new H2OPipeline().
 setStages(Array(tokenizer, stopWordsRemover, hashingTF, idf, colRemover, splitter, dl))

Train the pipeline model by fitting it to a Spark DataFrame.

val data = load("smsData.txt")
val model = pipeline.fit(data)

Now we can optionally save the model to disk and load it again.

model.write.overwrite().save("/tmp/hamOrSpamPipeline")
val loadedModel = PipelineModel.load("/tmp/hamOrSpamPipeline")

We can also save this unfitted pipeline to disk and load it again.

pipeline.write.overwrite().save("/tmp/unfit-hamOrSpamPipeline")
val loadedPipeline = H2OPipeline.load("/tmp/unfit-hamOrSpamPipeline")

Train the pipeline model again on the loaded pipeline, just to show that the deserialized pipeline works as it should.

val modelOfLoadedPipeline = loadedPipeline.fit(data)

Create a helper function for predictions on unlabeled data. This method uses the model generated by the pipeline. To make a prediction, we call the transform method on the generated model with a Spark DataFrame as the argument. This call executes each transformer in the pipeline one after another, producing a DataFrame with predictions. The import of h2oContext.implicits._ provides the implicit conversion that turns the resulting DataFrame into an H2OFrame, so we can read the prediction values directly.

def isSpam(smsText: String,
 model: PipelineModel,
 h2oContext: H2OContext,
 hamThreshold: Double = 0.5): Boolean = {
 import h2oContext.implicits._
 val smsTextDF = sc.parallelize(Seq(smsText)).toDF("text") // convert the message to a DataFrame with a single "text" column
 val prediction: H2OFrame = model.transform(smsTextDF)     // run the pipeline; the implicit conversion yields an H2OFrame
 prediction.vecs()(1).at(0) < hamThreshold                 // second column holds the "ham" probability; below the threshold means spam
}

Try it!

println(isSpam("Michal, h2oworld party tonight in MV?", modelOfLoadedPipeline, h2oContext))
println(isSpam("We tried to contact you re your reply to our offer of a Video Handset? 750 anytime any networks mins? UNLIMITED TEXT?", loadedModel, h2oContext))

In this article we showed how Spark pipelines and H2O algorithms work together seamlessly in the Spark environment. At H2O.ai we strive to be consistent with the Spark API and to make the life of a developer or data scientist easier by hiding H2O internals and exposing APIs that are natural for Spark users.
