GLUE Benchmark
The GLUE benchmark is a collection of resources for training, evaluating, and analyzing natural language understanding models such as BERT; it comprises nine English sentence- and sentence-pair understanding tasks, most of which are text classification problems.
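For quick reference, the nine GLUE tasks can be written out as a small Python mapping; the groupings below follow the benchmark's own task list, and the descriptions are informal labels rather than official names:

```python
# The nine tasks in the main GLUE suite, tagged with an informal task type.
# Note: STS-B is a regression task; the rest are classification.
GLUE_TASKS = {
    "CoLA": "single-sentence acceptability",
    "SST-2": "single-sentence sentiment",
    "MRPC": "sentence-pair paraphrase",
    "QQP": "sentence-pair paraphrase",
    "STS-B": "sentence-pair similarity (regression)",
    "MNLI": "sentence-pair inference",
    "QNLI": "sentence-pair inference",
    "RTE": "sentence-pair inference",
    "WNLI": "sentence-pair inference",
}

single_sentence = [t for t, kind in GLUE_TASKS.items() if kind.startswith("single")]
print(len(GLUE_TASKS))   # 9
print(single_sentence)   # ['CoLA', 'SST-2']
```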
Hugging Face hosts pre-trained BERT checkpoints and text classification datasets, including the IMDB and 20 Newsgroups datasets.
This research article provides an overview of BERT-based text classification, with discussion of popular datasets such as MNLI, SST-2, and QQP.
Stanford University's Natural Language Processing group provides resources and tutorials on using BERT for text classification tasks, including access to the Stanford Sentiment Treebank dataset.
This tutorial provides a step-by-step guide to building a BERT-based text classification model using the Kaggle IMDB dataset.
Kaggle provides a list of popular text classification datasets, including the IMDB, 20 Newsgroups, and Reuters datasets, which can be used to train and evaluate BERT models.
This article provides a comprehensive review of BERT-based text classification, including discussion of the strengths and limitations of popular benchmarks like GLUE and SuperGLUE.
The official PyTorch website provides tutorials and examples on using BERT for text classification tasks, with dataset loaders for AG News and Yahoo Answers available through the torchtext library.
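Whichever of the datasets above is used, evaluating a fine-tuned classifier ultimately reduces to comparing predicted labels against gold labels. A minimal accuracy helper in pure Python (the toy label lists are invented for illustration, using binary sentiment labels as in SST-2 or IMDB):

```python
def accuracy(predicted, gold):
    """Fraction of positions where the predicted label matches the gold label."""
    if len(predicted) != len(gold):
        raise ValueError("prediction and gold label lists must be the same length")
    correct = sum(p == g for p, g in zip(predicted, gold))
    return correct / len(gold)

# Toy example: 0 = negative, 1 = positive.
preds = [1, 0, 1, 1]
gold = [1, 0, 0, 1]
print(accuracy(preds, gold))  # 0.75
```

In practice a library metric (e.g. the accuracy implementations shipped with scikit-learn or Hugging Face's evaluation tooling) would be used instead, but the underlying computation is the same.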