Build, Train and Deploy ML Pipelines using BERT

BERT is a state-of-the-art model based on the Transformer architecture. BERT training is a two-step process: the first step is called pre-training, and the second is fine-tuning for a specific language task. The original BERT model was pre-trained on a gigantic amount of data at Google Research. Several pre-trained BERT models are available to us, which we can fine-tune for our specific task. This section also covers monitoring and profiling the model during the fine-tuning step.
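As a minimal sketch of this idea, the code below loads a publicly available pre-trained BERT checkpoint and prepares it for fine-tuning on a text classification task using the Hugging Face transformers library; the checkpoint name "bert-base-uncased", the sample sentence, and the class count of 3 are illustrative assumptions, not fixed by this section.

    from transformers import AutoTokenizer, AutoModelForSequenceClassification

    # "bert-base-uncased" is one of the publicly released pre-trained
    # checkpoints; num_labels=3 is a hypothetical class count for the
    # downstream task we fine-tune on.
    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    model = AutoModelForSequenceClassification.from_pretrained(
        "bert-base-uncased",
        num_labels=3,
    )

    # Tokenize a sample sentence the way BERT expects (WordPiece plus the
    # special [CLS] and [SEP] tokens) and run a forward pass.
    inputs = tokenizer("This product exceeded my expectations.", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.logits.shape)  # torch.Size([1, 3]) -- one logit per class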

Once pre-training is done, an ML pipeline also needs to be in place to complete the deployment demonstration; it orchestrates the various steps of feature engineering, model training, model evaluation, and model deployment, as in the sketch below. The last step focuses on automating the end-to-end machine learning workflow.
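One way to realize this orchestration is with SageMaker Pipelines. The sketch below wires a feature engineering step and a BERT fine-tuning step together; the script names, instance types, framework versions, and pipeline name are illustrative assumptions, and the evaluation and deployment steps would be chained on in the same fashion.

    import sagemaker
    from sagemaker.huggingface import HuggingFace
    from sagemaker.inputs import TrainingInput
    from sagemaker.processing import ProcessingOutput
    from sagemaker.sklearn.processing import SKLearnProcessor
    from sagemaker.workflow.pipeline import Pipeline
    from sagemaker.workflow.steps import ProcessingStep, TrainingStep

    session = sagemaker.Session()
    role = sagemaker.get_execution_role()

    # Step 1: feature engineering (raw text -> BERT-ready features).
    processor = SKLearnProcessor(
        framework_version="1.2-1",
        role=role,
        instance_type="ml.m5.xlarge",
        instance_count=1,
    )
    process_step = ProcessingStep(
        name="FeatureEngineering",
        processor=processor,
        code="prepare_data.py",  # hypothetical preprocessing script
        outputs=[ProcessingOutput(output_name="train",
                                  source="/opt/ml/processing/train")],
    )

    # Step 2: fine-tune BERT on the engineered features.
    estimator = HuggingFace(
        entry_point="train.py",  # hypothetical training script
        transformers_version="4.26",  # assumed container version combination
        pytorch_version="1.13",
        py_version="py39",
        instance_type="ml.p3.2xlarge",
        instance_count=1,
        role=role,
    )
    train_step = TrainingStep(
        name="FineTuneBERT",
        estimator=estimator,
        inputs={"train": TrainingInput(
            s3_data=process_step.properties.ProcessingOutputConfig
                                .Outputs["train"].S3Output.S3Uri)},
    )

    pipeline = Pipeline(name="bert-review-pipeline",
                        steps=[process_step, train_step],
                        sagemaker_session=session)
    pipeline.upsert(role_arn=role)  # create or update the pipeline definition
    execution = pipeline.start()    # run feature engineering, then training

Because the training step consumes the processing step's output location, SageMaker infers the step ordering automatically; no explicit dependency declaration is needed.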

BERT Model Feature Engineering

In this phase, raw data is transformed into features by applying business domain knowledge, statistical transformations, and exploratory data analysis, so that we derive meaningful features for training the model. Amazon SageMaker Feature Store is the tool used to store and reuse the derived features (see the sketch after this paragraph). This process involves preparing the data so that it fits the nominated model and yields good performance from it. The components of feature engineering are feature selection, feature creation, and feature transformation.
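The following sketch shows one way engineered features could be registered and ingested with Amazon SageMaker Feature Store; the feature group name, record schema, and sample values are hypothetical assumptions for illustration.

    import time

    import pandas as pd
    import sagemaker
    from sagemaker.feature_store.feature_group import FeatureGroup

    session = sagemaker.Session()
    role = sagemaker.get_execution_role()

    # Engineered features for two reviews (hypothetical schema).
    df = pd.DataFrame({
        "review_id": ["r1", "r2"],
        "input_ids": ["101 2023 2003 102", "101 2307 2326 102"],  # serialized token ids
        "label": [1, 0],
        "event_time": [time.time()] * 2,
    })
    for col in ("review_id", "input_ids"):
        df[col] = df[col].astype("string")  # Feature Store needs an explicit string dtype

    feature_group = FeatureGroup(name="reviews-bert-features",
                                 sagemaker_session=session)
    feature_group.load_feature_definitions(data_frame=df)  # infer feature types
    feature_group.create(
        s3_uri=f"s3://{session.default_bucket()}/feature-store",  # offline store
        record_identifier_name="review_id",
        event_time_feature_name="event_time",
        role_arn=role,
        enable_online_store=True,
    )
    # create() is asynchronous; in practice, wait until the feature group
    # status is "Created" before ingesting records.
    feature_group.ingest(data_frame=df, max_workers=1, wait=True)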

In feature selection, the dimensionality of the feature set is reduced in order to speed up the training process; a feature importance chart is a good indicator of the relevant features required for training the model. Feature creation is the process of deriving new features from existing ones. The feature transformation step uses imputation to fill in missing feature values, scales numerical features using standardization or normalization, and converts non-numerical features to numerical values, as in the sketch below.
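As a concrete illustration of the transformation step, the following sketch imputes missing values, standardizes the numerical features, and one-hot encodes a non-numerical feature using scikit-learn; the column names and sample data are hypothetical.

    import numpy as np
    import pandas as pd
    from sklearn.compose import ColumnTransformer
    from sklearn.impute import SimpleImputer
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import OneHotEncoder, StandardScaler

    # Hypothetical raw features with missing values.
    df = pd.DataFrame({
        "review_length": [120.0, np.nan, 45.0],        # numerical
        "star_rating": [5.0, 3.0, np.nan],             # numerical
        "product_category": ["books", np.nan, "toys"], # non-numerical
    })

    numeric = Pipeline([
        ("impute", SimpleImputer(strategy="median")),  # fill missing values
        ("scale", StandardScaler()),                   # zero mean, unit variance
    ])
    categorical = Pipeline([
        ("impute", SimpleImputer(strategy="most_frequent")),
        ("encode", OneHotEncoder(handle_unknown="ignore")),  # to numerical columns
    ])

    transform = ColumnTransformer([
        ("num", numeric, ["review_length", "star_rating"]),
        ("cat", categorical, ["product_category"]),
    ])
    features = transform.fit_transform(df)
    print(features.shape)  # (3, 4): two scaled numeric + two one-hot columns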