
Build, Train, and Deploy ML Pipelines using BERT

Description

In the second course of the Practical Data Science Specialization, you will learn to automate a natural language processing task by building an end-to-end machine learning pipeline with Amazon SageMaker Pipelines, using Hugging Face's highly optimized implementation of the state-of-the-art BERT algorithm.

Your pipeline will first transform the dataset into BERT-readable features and store them in the Amazon SageMaker Feature Store. It will then fine-tune a text classification model on the dataset, starting from a Hugging Face pre-trained model that has learned to understand human language from millions of Wikipedia documents. Finally, your pipeline will evaluate the model's accuracy and deploy the model only if the accuracy exceeds a given threshold.
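That workflow maps directly onto SageMaker Pipelines constructs: a processing step for feature engineering, a training step for fine-tuning, an evaluation step, and a condition step that gates deployment on accuracy. The outline below is a minimal sketch, not the course's reference implementation; the script names (prepare_data.py, train.py, evaluate.py), S3 paths, instance types, and the 0.90 accuracy threshold are illustrative assumptions.

import sagemaker
from sagemaker.huggingface import HuggingFace
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.steps import ProcessingStep, TrainingStep
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.functions import JsonGet
from sagemaker.workflow.conditions import ConditionGreaterThanOrEqualTo
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.pipeline import Pipeline

session = sagemaker.Session()
role = sagemaker.get_execution_role()  # assumes this runs inside a SageMaker environment
bucket = session.default_bucket()

# 1. Feature engineering: convert raw text into BERT-ready features.
processor = SKLearnProcessor(
    framework_version="1.2-1",
    role=role,
    instance_type="ml.m5.xlarge",
    instance_count=1,
)
process_step = ProcessingStep(
    name="PrepareBertFeatures",
    processor=processor,
    code="prepare_data.py",  # hypothetical feature-engineering script
    inputs=[ProcessingInput(source=f"s3://{bucket}/raw",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
)

# 2. Fine-tune a pre-trained Hugging Face text-classification model.
estimator = HuggingFace(
    entry_point="train.py",  # hypothetical training script
    role=role,
    instance_type="ml.p3.2xlarge",
    instance_count=1,
    transformers_version="4.26",
    pytorch_version="1.13",
    py_version="py39",
    hyperparameters={"epochs": 3, "model_name": "distilbert-base-uncased"},
)
train_step = TrainingStep(
    name="FineTuneBert",
    estimator=estimator,
    inputs={"train": TrainingInput(
        process_step.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri)},
)

# 3. Evaluate the trained model and write metrics to a JSON report.
evaluation_report = PropertyFile(
    name="EvaluationReport", output_name="metrics", path="evaluation.json"
)
eval_step = ProcessingStep(
    name="EvaluateModel",
    processor=processor,
    code="evaluate.py",  # hypothetical evaluation script
    inputs=[ProcessingInput(
        source=train_step.properties.ModelArtifacts.S3ModelArtifacts,
        destination="/opt/ml/processing/model")],
    outputs=[ProcessingOutput(output_name="metrics", source="/opt/ml/processing/metrics")],
    property_files=[evaluation_report],
)

# 4. Gate deployment: only continue if accuracy clears the threshold.
accuracy_condition = ConditionGreaterThanOrEqualTo(
    left=JsonGet(step_name=eval_step.name, property_file=evaluation_report,
                 json_path="metrics.accuracy.value"),
    right=0.90,  # illustrative threshold
)
condition_step = ConditionStep(
    name="CheckAccuracyThreshold",
    conditions=[accuracy_condition],
    if_steps=[],   # model registration / deployment steps would go here
    else_steps=[],
)

pipeline = Pipeline(
    name="bert-text-classification-pipeline",
    steps=[process_step, train_step, eval_step, condition_step],
)
# pipeline.upsert(role_arn=role); pipeline.start()

In practice, a model-registration or deployment step would be placed in if_steps, so the model reaches production only when the accuracy condition passes.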

This resource is offered by an affiliate partner. If you pay for the training, we may earn a commission that supports this site.

Career relevance by data role

The techniques and tools covered in Build, Train, and Deploy ML Pipelines using BERT closely match the requirements found in Data Scientist job postings.

Similarity scores (out of 100)

Learning sequence

Build, Train, and Deploy ML Pipelines using BERT is part of a structured learning path.

Coursera
DeepLearning.AI

3 courses, 3 months

Practical Data Science