Different A.I. Technologies
This revision is from 2024/02/08 08:22.
Machine Learning:
Supervised Learning:
- Regression
- Classification
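Supervised regression can be sketched in a few lines. The example below fits a line to made-up data by ordinary least squares; the data points are purely illustrative.

```python
import numpy as np

# Toy supervised regression: fit y = w*x + b by ordinary least squares.
# The data below is made up purely for illustration.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0  # a noiseless line, so the fit is exact

# Design matrix with a bias column, solved via least squares.
X = np.column_stack([x, np.ones_like(x)])
w, b = np.linalg.lstsq(X, y, rcond=None)[0]

print(round(w, 3), round(b, 3))  # recovers slope 2.0 and intercept 1.0
```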
Unsupervised Learning:
- Clustering
- Dimensionality Reduction
- Association Rule Learning
Semi-supervised Learning
Reinforcement Learning
Deep Learning:
- Convolutional Neural Networks (CNNs)
- Recurrent Neural Networks (RNNs)
- Generative Adversarial Networks (GANs)
- Transformer Networks
Transfer Learning
Ensemble Learning:
- Bagging (Bootstrap Aggregating)
- Boosting
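Bagging can be illustrated with a deliberately trivial base model. In this sketch each "model" is just the mean of a bootstrap resample of the targets, and the ensemble averages their outputs; the dataset and model choice are assumptions for illustration only.

```python
import random
import statistics

# Minimal bagging sketch: train many weak models on bootstrap resamples
# (sampling with replacement), then aggregate their predictions.
random.seed(0)
targets = [1.0, 2.0, 3.0, 4.0, 5.0]

def bootstrap_model(data):
    sample = [random.choice(data) for _ in data]  # bootstrap resample
    return statistics.mean(sample)                # a trivially simple "model"

predictions = [bootstrap_model(targets) for _ in range(200)]
ensemble_prediction = statistics.mean(predictions)
print(round(ensemble_prediction, 2))  # close to the true mean, 3.0
```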
Self-supervised Learning
Active Learning
Instance-based Learning:
- k-Nearest Neighbors (k-NN)
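A minimal k-NN classifier makes the "instance-based" idea concrete: no model is fitted, the training instances themselves are the model. The one-dimensional points and labels below are made up for illustration.

```python
from collections import Counter

# Stored training instances: (feature value, class label).
train = [(1.0, "a"), (1.5, "a"), (3.0, "b"), (3.5, "b"), (4.0, "b")]

def knn_predict(x, k=3):
    # Rank stored instances by distance to the query and take a
    # majority vote among the k nearest; nothing is learned up front,
    # which is why k-NN is called instance-based ("lazy") learning.
    nearest = sorted(train, key=lambda p: abs(p[0] - x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

print(knn_predict(1.2))  # nearest neighbours are mostly "a"
print(knn_predict(3.8))  # nearest neighbours are mostly "b"
```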
Decision Tree Learning
Bayesian Methods
Evolutionary Algorithms:
- Genetic Algorithms
- Genetic Programming
- Evolution Strategies
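A toy genetic algorithm shows the select-and-mutate loop shared by these methods. Here it maximizes f(x) = -(x - 3)^2; the population size, mutation scale, and generation count are arbitrary illustrative choices.

```python
import random

random.seed(42)

def fitness(x):
    return -(x - 3.0) ** 2  # maximized at x = 3

population = [random.uniform(-10, 10) for _ in range(30)]
for _ in range(100):
    # Selection: keep the fitter half of the population.
    population.sort(key=fitness, reverse=True)
    parents = population[:15]
    # Variation: each child is a mutated copy of a random parent.
    children = [p + random.gauss(0, 0.3) for p in random.choices(parents, k=15)]
    population = parents + children

best = max(population, key=fitness)
print(round(best, 2))  # converges near the optimum x = 3
```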
Fuzzy Logic
Neuroevolution
Neural Networks:
Feedforward Neural Networks (FNN):
- The basic form of neural network, in which information flows in one direction, from the input layer to the output layer, without cycles.
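A single forward pass through a tiny 2-2-1 network illustrates that one-directional flow. The weights here are fixed by hand purely to show the computation; in practice they would be learned by backpropagation.

```python
import numpy as np

x = np.array([1.0, 2.0])  # input layer

W1 = np.array([[0.5, -0.5],
               [0.5,  0.5]])
b1 = np.array([0.0, 0.0])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])

h = np.maximum(0.0, W1 @ x + b1)  # hidden layer with ReLU activation
y = W2 @ h + b2                   # linear output layer

print(float(y[0]))
```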
Convolutional Neural Networks (CNN):
- Specialized for processing structured grid data such as images. They consist of convolutional layers that automatically learn hierarchical patterns.
Recurrent Neural Networks (RNN):
- Designed to work with sequence data, such as time series or natural language. They have connections that form loops, allowing information to persist.
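The loop in an RNN can be seen by unrolling a single recurrent cell over a short sequence: the hidden state h is fed back at every step, so an early input keeps influencing later outputs. The weights are hand-picked for illustration, not learned.

```python
import numpy as np

W_x, W_h = 0.5, 0.8   # input-to-hidden and hidden-to-hidden weights
h = 0.0               # initial hidden state

for x in [1.0, 0.0, 0.0]:
    # The previous hidden state re-enters the computation each step.
    h = np.tanh(W_x * x + W_h * h)
    print(round(float(h), 3))
```

Note how the first input's effect persists but shrinks at each step; that gradual decay is the vanishing-gradient behaviour LSTMs and GRUs were designed to mitigate.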
Long Short-Term Memory Networks (LSTM):
- A type of RNN designed to overcome the vanishing gradient problem. They are capable of learning long-term dependencies in data.
Gated Recurrent Unit (GRU):
- A gated RNN variant, similar to but simpler than the LSTM, that also addresses the vanishing gradient problem and can match or exceed traditional RNNs on many tasks.
Autoencoder:
- Neural networks designed for unsupervised learning by attempting to learn compressed representations of input data. They consist of an encoder and a decoder.
Generative Adversarial Networks (GAN):
- Comprising two neural networks, a generator and a discriminator, GANs are used for generating new data samples that resemble a given dataset.
Variational Autoencoder (VAE):
- An extension of the autoencoder with a probabilistic interpretation. VAEs generate new data samples while allowing control over the generation process.
Self-Organizing Maps (SOM):
- Neural networks used for clustering and visualization of high-dimensional data.
Radial Basis Function Networks (RBFN):
- A type of neural network with radial basis functions as activation functions, often used for function approximation and classification tasks.
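The forward pass of an RBFN is short enough to write out directly. The sketch below uses two Gaussian units; the centres, width, and output weights are illustrative assumptions, not fitted values.

```python
import numpy as np

centers = np.array([0.0, 2.0])   # each hidden unit has a centre
width = 1.0                      # shared Gaussian width
weights = np.array([1.0, -1.0])  # linear output weights

def rbfn(x):
    # Each hidden unit responds most strongly near its own centre.
    activations = np.exp(-((x - centers) ** 2) / (2 * width ** 2))
    return float(weights @ activations)

print(round(rbfn(0.0), 3))  # dominated by the first (positive) unit
print(round(rbfn(2.0), 3))  # dominated by the second (negative) unit
```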
Echo State Networks (ESN):
- A type of recurrent neural network with a fixed, sparsely connected hidden layer, often used for time-series prediction tasks.
Deep Belief Networks (DBN):
- A type of generative neural network composed of multiple layers of stochastic, latent variables.