Self-supervised Learning
The following terms encapsulate key methodologies and concepts within self-supervised learning. Together they highlight how AI models can leverage the inherent structure or patterns in data to learn meaningful representations, or to perform tasks, without explicit labels, bridging the gap between the supervised and unsupervised learning paradigms.
Autoencoder - A type of neural network that learns efficient representations (encodings) of unlabeled data, typically for dimensionality reduction or feature learning, by being trained to reconstruct its input from a compressed intermediate representation.
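A minimal sketch of the idea in PyTorch, assuming flattened 784-dimensional inputs (e.g., 28x28 images); the layer sizes and hyperparameters are illustrative:

```python
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    """Compresses the input to a low-dimensional code, then reconstructs it."""
    def __init__(self, input_dim: int = 784, code_dim: int = 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, code_dim),          # the learned representation
        )
        self.decoder = nn.Sequential(
            nn.Linear(code_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

# Training minimizes reconstruction error -- no labels required.
model = Autoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(64, 784)                        # a batch of unlabeled inputs
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
optimizer.step()
```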
Contrastive Learning - A technique used in self-supervised learning that teaches the model to understand which data points are similar or different, enhancing the model's ability to learn meaningful representations of data without explicit labels.
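A minimal sketch of a contrastive objective, loosely following the InfoNCE/NT-Xent style used by methods such as SimCLR (simplified to one direction); the encoder is omitted and the "augmented views" are synthetic stand-ins:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Pull embeddings of two views of the same example together,
    push embeddings of different examples apart."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature         # cosine similarities between all pairs
    targets = torch.arange(z1.size(0))         # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Two noisy "views" of the same batch stand in for real data augmentations.
batch = torch.randn(32, 128)
view1 = batch + 0.1 * torch.randn_like(batch)
view2 = batch + 0.1 * torch.randn_like(batch)
loss = info_nce_loss(view1, view2)
```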
Pretext Task - A task whose labels are generated automatically from unlabeled data; self-supervised learning uses such tasks to train models so that they learn useful features or representations that can be leveraged for downstream tasks.
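One commonly cited pretext task is rotation prediction: artificial labels are created by rotating each unlabeled image, and the model is trained to recover the rotation. A minimal sketch (the backbone classifier is omitted and left as an assumption):

```python
import torch

def make_rotation_task(images: torch.Tensor):
    """Generate (input, artificial label) pairs from unlabeled images.
    Label 0/1/2/3 corresponds to a rotation of 0/90/180/270 degrees."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, k=int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

images = torch.rand(16, 1, 28, 28)             # unlabeled images (N, C, H, W)
inputs, pretext_labels = make_rotation_task(images)
# A classifier trained to predict `pretext_labels` from `inputs` learns
# visual features that can be reused for downstream tasks.
```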
Pseudo-labelling - A technique in which a model's own predictions on unlabeled data are used as labels for further training. It is most common in semi-supervised learning, which, like self-supervised learning, exploits unlabeled data to train models.
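A minimal sketch, assuming some already-trained classifier and a confidence threshold (both illustrative): predictions above the threshold are kept as training labels.

```python
import torch
import torch.nn.functional as F

def pseudo_label(model: torch.nn.Module, unlabeled: torch.Tensor, threshold: float = 0.95):
    """Keep only the unlabeled examples the model predicts with high confidence,
    and treat those predictions as training labels."""
    with torch.no_grad():
        probs = F.softmax(model(unlabeled), dim=1)
    confidence, predicted = probs.max(dim=1)
    mask = confidence >= threshold
    return unlabeled[mask], predicted[mask]    # (inputs, pseudo-labels) for further training

# Illustrative usage with a toy 10-class classifier.
model = torch.nn.Linear(64, 10)
unlabeled = torch.randn(256, 64)
inputs, labels = pseudo_label(model, unlabeled, threshold=0.5)
```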
Self-Supervised Learning - A learning paradigm where the model learns to predict any part of its input from any other part of its input, using the input data as its own supervision.
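A minimal sketch of this idea as masked reconstruction: part of each input is hidden, and the model is trained to predict the hidden part from the visible part, so the data supervises itself (the network and masking scheme are illustrative):

```python
import torch
import torch.nn as nn

# Hide a random half of each input's features; the rest of the input supervises them.
x = torch.rand(64, 100)                        # unlabeled data
mask = torch.rand_like(x) < 0.5                # True where the input is hidden
visible = x.masked_fill(mask, 0.0)

model = nn.Sequential(nn.Linear(100, 128), nn.ReLU(), nn.Linear(128, 100))
prediction = model(visible)

# The loss is computed only on the masked positions: the input is its own label.
loss = ((prediction - x)[mask] ** 2).mean()
loss.backward()
```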