Semi-supervised Learning

The terms below cover the core methods and concepts of semi-supervised learning, which bridges the gap between supervised learning (with ample labeled data) and unsupervised learning (with no labeled data) by exploiting abundant unlabeled data to improve model performance.

Co-training - A semi-supervised learning technique in which two classifiers are trained separately on different views (feature subsets) of the data, and each classifier's most confident predictions on unlabeled examples are added to the other's training set.
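A minimal sketch of this idea, using scikit-learn on synthetic data (the two "views" are simply the first and last two feature columns, and the confidence threshold and round count are illustrative choices, not part of the original algorithm specification):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 2] > 0).astype(int)   # label depends on both views

view_a, view_b = X[:, :2], X[:, 2:]       # two feature views of the same data
labeled = np.arange(20)                   # small labeled pool
unlabeled = np.arange(20, 200)            # large unlabeled pool

clf_a = LogisticRegression().fit(view_a[labeled], y[labeled])
clf_b = LogisticRegression().fit(view_b[labeled], y[labeled])

for _ in range(5):  # a few co-training rounds (simplified: pools are not shrunk)
    # Each classifier finds the unlabeled points it is most confident about...
    conf_a = clf_a.predict_proba(view_a[unlabeled]).max(axis=1)
    conf_b = clf_b.predict_proba(view_b[unlabeled]).max(axis=1)
    pick_a = unlabeled[np.argsort(conf_a)[-10:]]
    pick_b = unlabeled[np.argsort(conf_b)[-10:]]
    # ...and those predictions are used as labels to retrain the *other* one.
    clf_b.fit(np.r_[view_b[labeled], view_b[pick_a]],
              np.r_[y[labeled], clf_a.predict(view_a[pick_a])])
    clf_a.fit(np.r_[view_a[labeled], view_a[pick_b]],
              np.r_[y[labeled], clf_b.predict(view_b[pick_b])])
```

The classic formulation assumes the two views are each sufficient for classification and conditionally independent given the label; in practice the method is often applied even when these assumptions only hold approximately.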

Label Propagation - A technique in semi-supervised learning where labels from a small subset of labeled data are propagated to a larger set of unlabeled data based on similarity or proximity, helping to classify or categorize unlabeled instances.
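A small demonstration using scikit-learn's `LabelPropagation`, which spreads labels over a similarity graph built with an RBF kernel (the two-cluster data and the `gamma` value here are illustrative):

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Two well-separated clusters of 50 points each.
X = np.r_[np.random.default_rng(1).normal(0, 0.3, (50, 2)),
          np.random.default_rng(2).normal(3, 0.3, (50, 2))]

# -1 marks unlabeled points; only one labeled example per cluster.
y = np.full(100, -1)
y[0], y[50] = 0, 1

model = LabelPropagation(kernel="rbf", gamma=5).fit(X, y)

# transduction_ holds the inferred labels for every point, labeled or not.
inferred = model.transduction_
```

With just two labeled points, proximity alone is enough to recover the correct label for every point in each cluster.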

Pseudo-labelling - A method in semi-supervised learning where a model trained on a small amount of labeled data predicts labels for unlabeled data, and these predicted labels are then treated as if they were true labels when retraining the model, so it learns from the combination of labeled and pseudo-labeled data.
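The train → predict → retrain loop can be sketched as follows (synthetic data; the 0.9 confidence threshold is a common but illustrative choice for filtering out uncertain pseudo-labels):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(int)

X_lab, y_lab = X[:20], y[:20]   # small labeled set
X_unlab = X[20:]                # treated as unlabeled

# Step 1: train on the labeled data only.
model = LogisticRegression().fit(X_lab, y_lab)

# Step 2: predict on the unlabeled pool, keeping only confident predictions.
proba = model.predict_proba(X_unlab)
confident = proba.max(axis=1) > 0.9
pseudo_y = proba.argmax(axis=1)[confident]

# Step 3: retrain on labeled + pseudo-labeled data together.
model = LogisticRegression().fit(np.r_[X_lab, X_unlab[confident]],
                                 np.r_[y_lab, pseudo_y])
```

Filtering by confidence matters: retraining on low-confidence pseudo-labels risks reinforcing the model's own early mistakes (confirmation bias).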

Semi-Supervised Learning - A learning paradigm that involves training models on a combination of a small amount of labeled data and a large amount of unlabeled data, leveraging the structure and distribution of the unlabeled data to improve learning accuracy and efficiency.

Tri-training - Similar to co-training but with three classifiers: an unlabeled example is added to one classifier's training set when the other two agree on its label, so all three improve as the pool of newly labeled data grows.
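A single round of this agreement rule might look like the following sketch (bootstrap-sampled decision trees on synthetic data; the depth, sample sizes, and one-round structure are simplifications for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_lab, y_lab, X_unlab = X[:30], y[:30], X[30:]

# Three classifiers trained on different bootstrap samples of the labeled set.
clfs = []
for seed in range(3):
    idx = np.random.default_rng(seed).integers(0, 30, 30)
    clfs.append(DecisionTreeClassifier(max_depth=3, random_state=seed)
                .fit(X_lab[idx], y_lab[idx]))

# Predictions from all three on the unlabeled pool (fixed for this round).
preds = np.array([c.predict(X_unlab) for c in clfs])
for i in range(3):
    j, k = (i + 1) % 3, (i + 2) % 3
    agree = preds[j] == preds[k]                 # the other two agree here
    clfs[i].fit(np.r_[X_lab, X_unlab[agree]],    # retrain i on labeled + agreed
                np.r_[y_lab, preds[j][agree]])
```

Bootstrap sampling gives the three classifiers the diversity the agreement rule relies on; if all three were identical, mutual agreement would carry no extra information.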

Unlabeled Data - Data that has input features but no corresponding target values, crucial in semi-supervised learning for leveraging large amounts of available data.