
NLP pipeline: A Primer

A short write-up that gives a high-level explanation of a generic Natural Language Processing pipeline by breaking the entire process down into stages.

Natural Language Processing (NLP) is definitely not a new term for most of us. It was primarily a topic for research and development in the fields of computer science and linguistics. But with recent advancements in AI, specifically in NLP, and the benefits it brings to businesses, it has become a mainstream topic. Whether we realize it or not, we have been using NLP for much of our routine work: intelligent search suggestions, chatbots on websites, voice commands for Alexa, product recommendations and many more.

For humans, language is a natural medium of communication, whether in the form of speech or text. But (as we already know) for computers this isn't the case. Natural Language Processing, or NLP, is a set of techniques that enables computers to understand and process information in this natural form, replacing the need for humans to pre-program them with explicit instructions.

Though powerful, NLP is quite challenging: human language is diverse, and it is tough to make a machine understand the context of speech or text.

Let's take a look at the different stages of an NLP system development pipeline.

NLP pipeline

Data Acquisition

The first step in an NLP pipeline is to collect or generate data. This data will usually be in the form of HTML content, tweets, documents, logs, etc. There are a few important methods that can be used to collect, instrument or generate data.
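As a minimal sketch of one common acquisition method, the snippet below collects raw text documents from a directory of .txt files; the directory layout and function name are assumptions for illustration, not a prescribed API.

```python
from pathlib import Path

def load_corpus(data_dir):
    """Collect raw text documents from a directory of .txt files."""
    docs = []
    for path in sorted(Path(data_dir).glob("*.txt")):
        docs.append(path.read_text(encoding="utf-8"))
    return docs
```

In practice this step could just as well be a web scraper, an API client pulling tweets, or a log exporter; the output in every case is a collection of raw text.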

Text Cleaning

The second step is to extract the text and remove symbols, HTML markup, junk characters, etc., from the data; this typically involves several cleaning actions.
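A rough sketch of such a cleaning pass, using only regular expressions and standard-library helpers (the exact set of characters to keep is an assumption that would vary by task):

```python
import html
import re
import unicodedata

def clean_text(raw):
    """Strip HTML markup, unescape entities, normalize unicode and whitespace."""
    text = re.sub(r"<[^>]+>", " ", raw)         # drop HTML tags
    text = html.unescape(text)                  # &amp; -> &
    text = unicodedata.normalize("NFKC", text)  # normalize unicode forms
    text = re.sub(r"[^\w\s.,!?'-]", " ", text)  # drop junk symbols
    text = re.sub(r"\s+", " ", text).strip()    # collapse whitespace
    return text
```

For example, `clean_text("<p>Hello &amp; welcome!</p>")` yields `"Hello welcome!"`.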

Pre-processing

Once all the text is extracted and cleaned from the raw data, we can perform additional processing on it. In this step we convert the text into sentences or words that can then be used for feature engineering. Important processes such as stemming and lemmatization are part of this step.
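A toy illustration of these three operations: naive sentence splitting, word tokenization, and a deliberately simplified suffix-stripping stemmer (a stand-in for a real algorithm such as Porter stemming, which handles many more cases):

```python
import re

def sentences(text):
    """Naive sentence splitter on ., ! or ? followed by whitespace."""
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def tokenize(sentence):
    """Lowercased word tokenizer."""
    return re.findall(r"[a-z']+", sentence.lower())

def stem(token):
    """Toy suffix-stripping stemmer; real stemmers are far more careful."""
    for suffix in ("ing", "ly", "ed", "es", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token
```

So `stem("dogs")` gives `"dog"`; libraries like NLTK or spaCy provide production-grade versions of all three functions.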

Feature Engineering

In this step, we extract features from the input text and feed them to the ML algorithm. Feature extraction can be done manually or with the help of a neural network (in the case of deep learning). Both approaches have their own pros and cons. If feature engineering is done manually, we can easily determine how each feature impacts model performance; with DL-driven feature engineering, this information is not available, since the neural network does not expose on what basis it selected a feature or what impact it has on the model's performance. On the other hand, manual feature engineering requires domain knowledge about the task at hand, which a DL-driven approach does not.
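One classic manually engineered text feature is the bag-of-words term-frequency vector; a minimal sketch (the function names are illustrative):

```python
from collections import Counter

def build_vocab(docs):
    """Map each word seen in the tokenized corpus to a feature index."""
    vocab = {}
    for doc in docs:
        for token in doc:
            vocab.setdefault(token, len(vocab))
    return vocab

def bow_vector(doc, vocab):
    """Term-frequency feature vector for one tokenized document."""
    counts = Counter(doc)
    return [counts.get(token, 0) for token in vocab]
```

Each document becomes a fixed-length vector of word counts, which is exactly the kind of interpretable feature the paragraph above contrasts with learned neural representations.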

Modelling

In this step we select an appropriate ML model for the task and use the features from the previous step as input to the model. The performance of this step also depends on the previous steps: a good model can yield bad results if the text fed into feature engineering was not processed properly, while even a simple model can give excellent results if the text was processed properly and the features were extracted carefully. The amount of data available to train the model is also a major factor. While there is plenty of documentation on how modelling can be done, a few general points can be considered.
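To make the step concrete, here is a toy implementation (not from the book) of one classic simple model for text, a multinomial Naive Bayes classifier over token lists with Laplace smoothing:

```python
import math
from collections import Counter

class NaiveBayes:
    """Minimal multinomial Naive Bayes over token lists, for illustration only."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.priors = {c: math.log(labels.count(c) / len(labels)) for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        self.vocab = set()
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc)
            self.vocab.update(doc)
        return self

    def predict(self, doc):
        best, best_score = None, float("-inf")
        for c in self.classes:
            total = sum(self.word_counts[c].values())
            score = self.priors[c]
            for token in doc:
                # Laplace (+1) smoothing keeps unseen tokens from zeroing the score.
                score += math.log((self.word_counts[c][token] + 1) / (total + len(self.vocab)))
            if score > best_score:
                best, best_score = c, score
        return best
```

A production pipeline would more likely reach for scikit-learn or a neural model, but the shape is the same: fit on featurized training data, then predict on new documents.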

Evaluation

In this step we determine the model's performance on unseen data. This depends a lot on the metric selected for evaluation and on the evaluation process itself. It also depends on the phase in which the evaluation is done: the modelling phase, the deployment phase or the production phase. The evaluation done in the first two phases is necessary and is called intrinsic evaluation. Along with intrinsic evaluation, additional metrics that measure business impact can be computed in the third phase; this is called extrinsic evaluation. A few intrinsic evaluations are commonly performed for NLP models.
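For a classification task, typical intrinsic metrics are accuracy, precision, recall and F1; a self-contained sketch for a single positive class:

```python
def evaluate(y_true, y_pred, positive):
    """Accuracy, precision, recall and F1 for one class on held-out data."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}
```

Other NLP tasks use task-specific intrinsic metrics instead, such as perplexity for language models or BLEU for translation.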

Deployment and Monitoring

NLP software is usually deployed as a web or REST service that can be consumed by users or other services.
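A minimal sketch of such a service using only the standard library's WSGI support; the `classify` function here is a hypothetical placeholder standing in for a real trained model's predict call:

```python
import json
from wsgiref.simple_server import make_server

def classify(text):
    """Placeholder for a trained model's predict call (assumption)."""
    return "positive" if "good" in text.lower() else "negative"

def app(environ, start_response):
    """Tiny WSGI app exposing the model as a REST-style JSON endpoint."""
    size = int(environ.get("CONTENT_LENGTH") or 0)
    body = environ["wsgi.input"].read(size).decode("utf-8")
    text = json.loads(body).get("text", "") if body else ""
    payload = json.dumps({"label": classify(text)}).encode("utf-8")
    start_response("200 OK", [("Content-Type", "application/json")])
    return [payload]

# To serve it locally: make_server("", 8000, app).serve_forever()
```

Real deployments would typically use a framework such as Flask or FastAPI behind a proper application server, but the request/response contract is the same idea.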

The performance of a deployed model is monitored continuously. This kind of monitoring is different from what traditional software monitoring tools provide; one dedicated tool that I came across is Evidently. Once we start collecting more data, we have to update the deployed model so that the newer data is also considered when making predictions.
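One simple idea behind such monitoring is data drift detection: comparing the word distribution of incoming production text against the training data. A sketch using total variation distance (tools like Evidently compute much richer versions of this idea; this toy function is not their API):

```python
from collections import Counter

def drift_score(train_tokens, live_tokens):
    """Total variation distance between training and production word distributions.

    0.0 means identical distributions; values near 1.0 signal heavy drift.
    """
    p, q = Counter(train_tokens), Counter(live_tokens)
    n_p, n_q = sum(p.values()), sum(q.values())
    vocab = set(p) | set(q)
    return 0.5 * sum(abs(p[w] / n_p - q[w] / n_q) for w in vocab)
```

When the score crosses a chosen threshold, that is a cue to retrain the deployed model on the newer data.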

References

This post was meant as a quick introduction to the several stages a generic NLP pipeline consists of, and is an extract of the topics covered in chapter 2 of the book Practical Natural Language Processing. Each stage is covered in detail in individual chapters of the same book.

Thank you for taking time to read this post!

I work on cloud and containerization technologies and am interested in coding, problem solving, and writing philosophical and motivational quotes.