Different A.I. Technologies

This revision is from 2024/02/08 08:22.

Machine Learning:

  1. Supervised Learning:

    • Regression
    • Classification
  2. Unsupervised Learning:

    • Clustering
    • Dimensionality Reduction
    • Association Rule Learning
  3. Semi-supervised Learning

  4. Reinforcement Learning

  5. Deep Learning:

    • Convolutional Neural Networks (CNNs)
    • Recurrent Neural Networks (RNNs)
    • Generative Adversarial Networks (GANs)
    • Transformer Networks
  6. Transfer Learning

  7. Ensemble Learning:

    • Bagging (Bootstrap Aggregating)
    • Boosting
  8. Self-supervised Learning

  9. Active Learning

  10. Instance-based Learning:

    • k-Nearest Neighbors (k-NN)
  11. Decision Tree Learning

  12. Bayesian Methods

  13. Evolutionary Algorithms:

    • Genetic Algorithms
    • Genetic Programming
    • Evolutionary Strategies
  14. Fuzzy Logic

  15. Neuroevolution
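As a concrete illustration of instance-based learning (item 10 above), a k-Nearest Neighbors classifier can be sketched in plain Python; the toy dataset and function name below are illustrative, not taken from any particular library.

```python
import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    # Euclidean distance from the query to every stored training point.
    distances = [
        (math.dist(p, query), label)
        for p, label in zip(train_points, train_labels)
    ]
    # Keep the k closest points and vote on their labels.
    k_nearest = sorted(distances)[:k]
    votes = Counter(label for _, label in k_nearest)
    return votes.most_common(1)[0][0]

# Toy 2-D dataset: two well-separated clusters labeled "a" and "b".
points = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["a", "a", "a", "b", "b", "b"]

print(knn_predict(points, labels, (0.5, 0.5)))  # → a
print(knn_predict(points, labels, (5.5, 5.5)))  # → b
```

Note that the model here is simply the stored training data — no parameters are fitted, which is the defining trait of instance-based methods.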

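The evolutionary-algorithm family (item 13) can likewise be sketched with a toy genetic algorithm. The fitness function ("one-max", counting 1-bits) and all hyperparameters below are arbitrary choices for illustration.

```python
import random

def genetic_maximize(fitness, length=10, pop_size=30, generations=60,
                     mutation_rate=0.05, seed=0):
    """Maximize `fitness` over fixed-length bit strings with a simple GA."""
    rng = random.Random(seed)
    # Random initial population of bit strings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half as parents (elitism).
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]
        # Crossover and mutation refill the population.
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)          # single-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit independently with probability `mutation_rate`.
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = genetic_maximize(fitness=sum)
print(best, sum(best))
```

The same select/crossover/mutate loop underlies genetic programming and evolution strategies; only the representation and variation operators change.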
Neural Networks:

  1. Feedforward Neural Networks (FNN):

    • The basic form of neural network, in which information flows in one direction, from the input layer to the output layer, without cycles.
  2. Convolutional Neural Networks (CNN):

    • Specialized for processing structured grid data such as images. They consist of convolutional layers that automatically learn hierarchical patterns.
  3. Recurrent Neural Networks (RNN):

    • Designed to work with sequence data, such as time series or natural language. They have connections that form loops, allowing information to persist.
  4. Long Short-Term Memory Networks (LSTM):

    • A type of RNN designed to overcome the vanishing gradient problem. They are capable of learning long-term dependencies in data.
  5. Gated Recurrent Unit (GRU):

    • Another RNN variant designed to address the vanishing gradient problem; GRUs use a simpler gating mechanism than LSTMs and often perform comparably with fewer parameters.
  6. Autoencoder:

    • Neural networks designed for unsupervised learning by attempting to learn compressed representations of input data. They consist of an encoder and a decoder.
  7. Generative Adversarial Networks (GAN):

    • Comprising two neural networks, a generator and a discriminator, GANs are used for generating new data samples that resemble a given dataset.
  8. Variational Autoencoder (VAE):

    • An extension of autoencoders with probabilistic interpretations. VAEs are used for generating new data samples while allowing control over the generation process.
  9. Self-Organizing Maps (SOM):

    • Neural networks used for clustering and visualization of high-dimensional data.
  10. Radial Basis Function Networks (RBFN):

    • A type of neural network with radial basis functions as activation functions, often used for function approximation and classification tasks.
  11. Echo State Networks (ESN):

    • A type of recurrent neural network with a fixed, sparsely connected hidden layer, often used for time-series prediction tasks.
  12. Deep Belief Networks (DBN):

    • A type of generative neural network composed of multiple layers of stochastic, latent variables.
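A minimal sketch of the first architecture above — the forward pass of a feedforward network, where information moves strictly from input to output with no cycles. The layer sizes and hand-picked weights are illustrative only; they encode logical XOR, a classic task a single neuron cannot solve.

```python
import math

def forward(x, layers):
    """Run input vector x through a list of (weights, biases) layers.

    Each layer computes sigmoid(W @ x + b); activations flow strictly
    forward, layer by layer -- the defining property of a feedforward net.
    """
    for weights, biases in layers:
        x = [
            1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(weights, biases)
        ]
    return x

# A 2-input, 2-hidden, 1-output network whose hand-picked weights
# implement XOR: one hidden unit acts like OR, the other like NAND,
# and the output unit ANDs them together.
layers = [
    ([[20.0, 20.0], [-20.0, -20.0]], [-10.0, 30.0]),  # hidden layer
    ([[20.0, 20.0]], [-30.0]),                        # output layer
]

for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print((a, b), round(forward([a, b], layers)[0]))  # → XOR truth table
```

In practice such weights are learned by backpropagation rather than set by hand, but the forward pass itself is exactly this composition of layers.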
