DevLog: Prototype Training and Deployment

ML people don’t understand deployment and software engineers don’t understand ML.

Luckily, there are trivial ways to quickly prototype and deploy ML models. In this post, I’ll go over how to train and deploy a model in a few minutes using Colab for training, Hugging Face Spaces for hosting, and Gradio for the interface.

You can get the code here and see the demo here.

Training the Model

To train the model, see this Colab.

This is the training code:

from fastai.vision.all import *

path = untar_data(URLs.PETS)/'images'

# In the Oxford-IIIT Pet dataset, cat filenames are capitalized and dog
# filenames are lowercase, so the first character is the label.
def is_cat(x): return x[0].isupper()

dls = ImageDataLoaders.from_name_func('.',
    get_image_files(path), valid_pct=0.2, seed=42,
    label_func=is_cat,
    item_tfms=Resize(192))
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
# export() pickles the learner, and pickle stores only a *reference* to
# is_cat, not its implementation, so the deployment code must define
# is_cat again before unpickling. Down the road, a dependency-free
# format like ONNX is preferable.
learn.export()
# Download the resulting *.pkl file for use in your deployment repo
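On that ONNX comment: here is a rough sketch of what the export could look like, assuming the 192-pixel input size from the Resize transform. This exports only the network itself, so fastai’s preprocessing (resizing, normalization) would have to be reproduced on the inference side.

import torch

# Sketch only: export the underlying PyTorch module so deployment needs
# neither fastai nor any pickled Python functions.
model = learn.model.cpu().eval()
dummy = torch.randn(1, 3, 192, 192)  # one RGB image at the training size
torch.onnx.export(
    model, dummy, 'dog_or_cat.onnx',
    input_names=['image'], output_names=['logits'],
    dynamic_axes={'image': {0: 'batch'}},  # allow variable batch size
)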

That’s it! Here we are using fastai, a library built on top of PyTorch. We create a vision_learner initialized with resnet18, a pre-trained image classification model. This way we can fine-tune an existing model to take advantage of what it already knows rather than training from scratch.

For engineers, this is like reusing pre-built libraries and extending them in your own code rather than doing a full rewrite. We then grab an existing pet image dataset for fine-tuning. You can follow the same process for any classification task, e.g. telling whether a plant is dehydrated or not. See this Colab.
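For example, here is a hedged sketch of what that plant classifier might look like, assuming a hypothetical plants/ folder with one subfolder per class (the folder names and plant_health.pkl are illustrative, not from the Colab):

from fastai.vision.all import *

# Hypothetical layout: plants/dehydrated/*.jpg and plants/healthy/*.jpg,
# one subfolder per class. from_folder labels each image by its parent
# directory name, so no custom label function needs to survive pickling.
dls = ImageDataLoaders.from_folder(
    Path('plants'), valid_pct=0.2, seed=42, item_tfms=Resize(192))
learn = vision_learner(dls, resnet18, metrics=error_rate)
learn.fine_tune(3)
learn.export('plant_health.pkl')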

Deployment

Now, for deployment, follow the instructions here to make an HF Space.

Pull down the “app” implementation by cloning the repo here.
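For reference, a minimal Gradio Space needs only a handful of files. A sketch of the layout, using the file names from this post:

app.py            # the Gradio app shown below
dog_or_cat.pkl    # the model exported from the training Colab
cat_ex.jpg        # example images referenced by the interface
dog_ex.jpg
requirements.txt  # extra dependencies to install; here just fastai

Gradio itself typically comes from the Space’s SDK setting, so requirements.txt only needs the extras.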

Note that if you edit the file, you may have issues authenticating with HF on macOS. To get around this, use SSH rather than HTTPS when cloning, and set up your SSH key. Instructions are here.

from fastai.vision.all import *
import gradio as gr

# load_learner unpickles the model, and the pickle only references is_cat
# by name, so we must define it here exactly as in training.
def is_cat(x): return x[0].isupper()

learn = load_learner('dog_or_cat.pkl')
labels = learn.dls.vocab
def predict(img):
    img = PILImage.create(img)
    pred,pred_idx,probs = learn.predict(img)
    return {labels[i]: float(probs[i]) for i in range(len(labels))}

title = "Is it a Cat?"
examples = ['cat_ex.jpg', 'dog_ex.jpg']

gr.Interface(
    fn=predict,
    inputs=gr.components.Image(height=512, width=512),
    outputs=gr.components.Label(),
    title=title,
    examples=examples,
).launch()

Here we unpickle the fastai model and define a predict function that takes a user-uploaded image from the Gradio interface, runs inference, and returns a mapping from each label to its probability. Note the input and output components wired into gr.Interface.
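Before pushing to the Space, it’s worth a quick local sanity check. A minimal sketch, assuming cat_ex.jpg is in the working directory (the printed probabilities are illustrative):

# Run locally with `python app.py` for the full UI, or call predict directly:
print(predict('cat_ex.jpg'))
# e.g. {'False': 0.02, 'True': 0.98} -- the vocab is [False, True] from is_cat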

This is a simple illustration of how trivially an ML model trained in a notebook can be deployed.

Where to go from here: