Deploy Machine Learning Models Using Flask

Deploying Machine Learning Models as a REST API

Machine Learning models can be deployed within a microservice and served as an endpoint. For this, I am using an existing model that I built with Scikit-learn to classify product categories based on their descriptions, serialised with Pickle.
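
For context, a model like this might be trained and serialised as follows. This is only a minimal sketch, not the exact model used in this article; the training_data DataFrame and its description and category columns are assumptions.

import pickle

import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Hypothetical training data: product descriptions and their categories.
training_data = pd.DataFrame({
    'description': ['steel claw hammer', 'cotton crew-neck t-shirt'],
    'category': ['tools', 'apparel'],
})

# A Pipeline keeps vectorisation and classification in a single object,
# so the whole thing can be pickled and reloaded in one step.
model = Pipeline([
    ('tfidf', TfidfVectorizer()),
    ('clf', LogisticRegression()),
])
model.fit(training_data.description, training_data.category)

# Serialise the fitted pipeline for the Flask service to load later.
with open('model.pkl', 'wb') as f:
    pickle.dump(model, f)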

Required Python libraries

  • Flask
  • Flask-HTTPAuth
  • Flask-SQLAlchemy
  • Scikit-Learn
  • PyJWT

Flask is a lightweight web application framework. It is developer-friendly and easy to start, with the ability to scale up to complex applications. It began as a simple wrapper around Werkzeug and Jinja and has become one of the most popular Python web application frameworks.
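
To show how little boilerplate is needed, here is a minimal application skeleton. The grepy app name and the db handle match the snippets below (which use the Flask-SQLAlchemy db.Model pattern), but the config values are placeholders, not the article's actual settings.

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

grepy = Flask(__name__)
grepy.config['SECRET_KEY'] = 'change-me'  # placeholder; signs the JWTs below
grepy.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///clients.db'  # placeholder
db = SQLAlchemy(grepy)

if __name__ == '__main__':
    grepy.run(debug=True)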

Authentication and Prediction

I am using HTTP basic authentication (via Flask-HTTPAuth) and JWT to secure the endpoints, so we need a database connection to validate the username and password. The passwords are stored as salted hashes (via Werkzeug's password helpers), never as plain text.

import time

import jwt
from werkzeug.security import check_password_hash, generate_password_hash


class Access(db.Model):
    __tablename__ = 'clients'
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(32), index=True)
    password_hash = db.Column(db.String(64))

    def hash_password(self, password):
        # Store only a salted hash, never the plain-text password.
        self.password_hash = generate_password_hash(password)

    def verify_password(self, password):
        return check_password_hash(self.password_hash, password)

    def generate_auth_token(self, expires_in=600):
        # Issue a signed JWT that expires after `expires_in` seconds.
        return jwt.encode(
            {'id': self.id, 'exp': time.time() + expires_in},
            grepy.config['SECRET_KEY'], algorithm='HS256')

    @staticmethod
    def verify_auth_token(token):
        try:
            data = jwt.decode(token, grepy.config['SECRET_KEY'],
                              algorithms=['HS256'])
        except jwt.InvalidTokenError:
            # Invalid or expired token; authentication fails.
            return None
        return Access.query.get(data['id'])
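
The @auth.login_required decorator used below depends on a Flask-HTTPAuth verification callback, which the snippets here do not show. The following is a minimal sketch of what it might look like, using the classic pattern of accepting either a token or a username/password pair and stashing the authenticated user on g; note that newer Flask-HTTPAuth releases prefer the callback to return the user object rather than a boolean.

from flask import g
from flask_httpauth import HTTPBasicAuth

auth = HTTPBasicAuth()

@auth.verify_password
def verify_password(username_or_token, password):
    # First try the credential as a token, then fall back to username/password.
    user = Access.verify_auth_token(username_or_token)
    if not user:
        user = Access.query.filter_by(username=username_or_token).first()
        if not user or not user.verify_password(password):
            return False
    g.user = user
    return True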

Once the user is authenticated with a username and password, they can request a token to use in subsequent requests.

@grepy.route('/api/token')
@auth.login_required
def get_auth_token():
    # g.user is set by the verify_password callback during authentication.
    token = g.user.generate_auth_token(600)
    # PyJWT 1.x returns bytes, so decode to a string for JSON; in PyJWT 2.x,
    # jwt.encode() already returns a str and the decode step can be dropped.
    return jsonify({'token': token.decode('ascii'), 'duration': 600})
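
As a quick check, the token endpoint can be exercised with the requests library; the host, port, and demo credentials here are assumptions.

import requests

# Request a token using HTTP basic auth (hypothetical credentials and host).
resp = requests.get('http://localhost:5000/api/token',
                    auth=('demo_user', 'demo_password'))
token = resp.json()['token']
print(token)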

Finally, to use the prediction endpoint, the user must send the token with every request. The snippet below runs the incoming data through the existing serialised Machine Learning model.

@grepy.route("/api/predict", methods=['POST'])
@auth.login_required
def predict_servived_api():
    try:
        req_json = request.get_json()
        request_data = json_normalize(req_json)

    except Exception as e:
        raise e
    finally:

        if request_data.empty:
            return bad_request()
        else:
            start_time = time.time()
            predict_set = request_data.dropna()
            X_test = predict_set.description.values[:]
            y_test = predict_set.id.values[:]

            trained_model = pickle.load(open(b"model.pkl", "rb"))

            predicted = trained_model.predict(X_test)

            predict_set['Prediction'] = predicted
            result = predict_set[['id','Prediction']]

            response = jsonify(predictions=result.to_json(orient="records"))
            response.status_code = 200
            return (response)
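
Putting it together, a client might call the prediction endpoint like this. The host, the payload fields, and the convention of passing the token as the basic-auth username with a blank password (per the verify_password sketch above) are assumptions.

import requests

token = 'token-from-/api/token'  # hypothetical value, obtained as shown earlier

payload = [
    {'id': 1, 'description': 'steel claw hammer'},
    {'id': 2, 'description': 'cotton crew-neck t-shirt'},
]

# Send the token as the basic-auth username with a blank password.
resp = requests.post('http://localhost:5000/api/predict',
                     json=payload, auth=(token, ''))
print(resp.json())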

Conclusion

Model inference is the process of running live data through a trained Machine Learning model. In this article, we have seen how such a model can be deployed as a microservice using Python Flask.
