Deploying an NLP Model to Flask




There are many ways to deploy a machine learning model, from converting it to a different language to embedding it in an API built with a framework like Flask.

In this continuation of “NLP: Fake Jobs Text Classifier using Naive Bayes”, we are going to create a Flask API that a Chrome extension can call.

Creating a Flask API and deploying it to a virtual server like one on DigitalOcean can get a little pricey (like $5 USD per month!), but it beats converting our model into JavaScript, which is both inconvenient and unconventional.

So, in this blog post, I’ll go through:

  1. Saving a Python model using Pickle
  2. Deploying the model to Flask
  3. Calling your Flask API using Postman

Saving a Python model using Pickle

While there are many ways to save our model, we’ll be using the pickle module which allows us to easily serialise and de-serialise a Python object structure.

If you remember from our previous blog post (or if you’ve even read it), we managed to create a Naive Bayes text classification model that we can now save and deploy.

We can do so like below:

# Save model and tokenizer
import pickle

# Serialise the trained model
with open('api/model_NB.pkl', 'wb') as f:
    pickle.dump(model_GNB, f)

# Serialise the fitted tokenizer (it holds the vocabulary)
with open('api/tokenizer.pkl', 'wb') as f:
    pickle.dump(tokenizer, f)

You might have noticed that we have also saved our tokenizer. This is important as the tokenizer contains our vocabulary which is needed by our model.
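As a quick sanity check, it’s worth confirming that a pickled file loads back to an identical object. Here’s a minimal round-trip sketch using a stand-in dictionary (pickle itself is what’s being demonstrated here, not our actual model):

```python
import os
import pickle
import tempfile

# A stand-in for the model/tokenizer pair we just saved
obj = {"vocab": {"engineer": 1, "remote": 2}, "classes": [0, 1]}

# Serialise to a temporary .pkl file
with tempfile.NamedTemporaryFile(suffix=".pkl", delete=False) as f:
    pickle.dump(obj, f)
    path = f.name

# De-serialise it back
with open(path, "rb") as f:
    restored = pickle.load(f)

os.remove(path)
print(restored == obj)  # True
```

If the round trip fails for your real model, it usually means a custom class wasn’t importable at load time, so it’s worth catching early.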


Creating a simple Flask application

After saving our model, we just have to serve it with Flask, a lightweight Python framework that makes it easy to create APIs.

First, as usual, we have to import our necessary libraries and initialise our Flask API.

from flask import Flask, jsonify, request
from flask_cors import CORS
from tensorflow.keras.preprocessing.sequence import pad_sequences
import pickle
app = Flask(__name__)
cors = CORS(app)

@app.route('/')
def hello_world():
    return "Hello World"
  
if __name__ == '__main__':
    app.run(port=8080)

Once you’ve copied, pasted, and saved this in whatever code editor you’re using, you can run the following in your command line to run your Flask application.

python yourFlaskScript.py

Once you run your script, you will see Flask’s startup output in your command line. You might panic a little because there are some big red warnings, but don’t worry! As a wise programmer once said:

“Ever worked with someone who kept working until all warnings were eliminated? Yeah we don’t do that here, pal.”

– PressAnyKeyToExit, reddit

So now that we’ve gotten our API running, we can head over to http://localhost:8080/ to see a fancy “Hello World” thrown at our face. Fantastic.


Loading the model in Flask

Next up, we have to load our model and tokenizer into Flask before we can start using them (obviously). Under our if __name__ == '__main__': statement, open the files and use pickle to load them in.

if __name__ == '__main__':
    with open('model_NB.pkl', 'rb') as model_file:
        model = pickle.load(model_file)
    with open('tokenizer.pkl', 'rb') as tokenizer_file:
        tokenizer = pickle.load(tokenizer_file)
    app.run(port=8080)

Next, we create a new route, /predict, in our Flask application that accepts POST requests only.

@app.route('/predict', methods=['POST'])
def predict():
    json_ = request.json
    # Tokenize the incoming job description and pad it to the
    # same length the model was trained on
    tokenized_input = tokenizer.texts_to_sequences([json_['jobDescription']])
    padded_input = pad_sequences(tokenized_input, maxlen=100, padding='post')
    prediction = model.predict(padded_input)
    # model.predict returns an array, so grab its single element
    return jsonify({'prediction': int(prediction[0])})

And just like in our previous blog post, we tokenize the input of our API (which should be a job description), pad the tokenized job description, and finally shove it into the model.

At the end of it all, the model puts out a prediction and we return the results in a JSON format using jsonify.
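If you’re curious what the padding step actually does, here’s a minimal pure-Python sketch of the pad_sequences(..., padding='post') behaviour for a single sequence (pad_post is my own illustrative helper, not a Keras function):

```python
def pad_post(seq, maxlen, value=0):
    # Truncate overly long sequences from the front
    # (the Keras default, truncating='pre')...
    seq = list(seq)[-maxlen:]
    # ...then pad short ones with zeros at the end (padding='post')
    return seq + [value] * (maxlen - len(seq))

print(pad_post([5, 9, 2], 6))       # [5, 9, 2, 0, 0, 0]
print(pad_post(list(range(8)), 4))  # [4, 5, 6, 7]
```

Either way, the model always receives a fixed-length input of 100 tokens.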


Calling the Flask API using Postman

To test out our API, we can use Postman to make calls to our server. If you don’t have it, go get it now…

But when you do have it, start it and create a new request.

Set the request URL to http://127.0.0.1:8080/predict and set the body to a job description of some kind.

As an example, I’ll be using a job advertisement from my company. And because I know the job position is real, I would expect the API to return a value of 0 (meaning it has predicted the job ad is real).

The data should be in the following format:

{
	"jobDescription": "<Your job description here>"
}

Create a key-value pair under the headers section and set the key, ‘Content-Type’ to the value, ‘application/json’. This way, our API knows what format the data is coming in.
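If you’d rather skip Postman, the same request can be built with Python’s standard library. This is just a sketch: the job description string is a made-up placeholder, and the commented-out call assumes the Flask app from earlier is running on port 8080.

```python
import json
from urllib import request as urlrequest

# Build the same request Postman would send
payload = json.dumps(
    {"jobDescription": "We are hiring a data analyst to join our team..."}
).encode("utf-8")

req = urlrequest.Request(
    "http://127.0.0.1:8080/predict",
    data=payload,
    headers={"Content-Type": "application/json"},  # so Flask parses the body as JSON
    method="POST",
)

# With the Flask app running, this would print the prediction:
# with urlrequest.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```

Same idea as Postman: a POST body in JSON, plus the Content-Type header so the API knows what format the data is coming in.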

Once you’re done fiddling around in Postman, click “Send” and you’ll send the data flying to the API. Wait a second or two and you’ll receive your prediction.

And it seems like our API has returned a prediction of 0! Meaning, in my case, our job advertisement is legit! (And also that our model is working to a certain extent!)


Summary

In this blog post, we managed to save our text classifier model and tokenizer using pickle, create a Flask application and import our model, and finally call our API using Postman to ensure it works.

In our next post, we’ll go through how to create a Chrome extension that utilises our Flask API to determine whether a job advertisement is real or not.


Further Reading

Books

  • Flask Web Development (A book on learning basic to intermediate Flask web development if you’re interested in learning about the framework in more detail)
