AI Models
| Acronym | Full Name | Category |
|---|---|---|
| GPT | Generative Pre-trained Transformer | Language Model Architecture |
| SLM | Small Language Model | Language Model (size class) |
| HLM | Hybrid Language Model | Language Model Architecture |
| LAM | Large Action Model | Functional Model built on LLMs |
| LRM | Large Reasoning Model | Functional Model built on LLMs |
| VLM | Vision-Language Model | Multimodal Model (vision + text) |
| MoE | Mixture of Experts | Model Architecture Pattern |
| LCM | Latent Consistency Model | Generative Image Model (diffusion-family) |
GPT - Generative Pre-trained Transformer
- Developed by OpenAI to power ChatGPT and other generative AI apps
- Large Language Model (LLM)
- Based on the Transformer deep learning architecture (self-attention; see the sketch below)
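Self-attention is the core operation of the Transformer. The NumPy sketch below is a minimal illustration with toy sizes and random weights (placeholders, not GPT's actual configuration); it omits the causal mask, multi-head split, and feed-forward layers a real GPT adds around this step.

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over a sequence of token vectors."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v              # project tokens to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])          # how strongly each token attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over the sequence
    # GPT additionally applies a causal mask so tokens only attend to earlier positions (omitted here).
    return weights @ v                               # each output mixes value vectors from all tokens

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                              # toy sizes, not GPT's real dimensions
x = rng.normal(size=(seq_len, d_model))              # stand-in for embedded tokens
w_q, w_k, w_v = (rng.normal(size=(d_model, d_model)) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape)        # (4, 8)
```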
SLM - Small Language Model
- Smaller version of an LLM
- fewer parameters
HLM - Hybrid Language Model
?
LAM - Large Action Model
- Agentic AI: designed to plan and carry out tasks autonomously
- Built on top of LLMs
- Uses the underlying LLM to understand requests and decide on actions (tool/API calls) rather than only generating text (see the loop sketch below)
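A minimal sketch of the action loop a LAM-style agent runs on top of an LLM. Everything here is hypothetical: `call_llm` is a hard-coded stub standing in for a real model call, and `search` is a toy tool, not a real API.

```python
def call_llm(prompt: str) -> str:
    """Stub standing in for a real LLM call; it asks for one tool action, then finishes."""
    if "Observation:" in prompt:
        return "FINISH: Paris"
    return "ACTION: search(capital of France)"

def search(query: str) -> str:
    """Hypothetical tool; a real agent would call an API, browser, or OS action here."""
    return "Paris is the capital of France."

TOOLS = {"search": search}

def run_agent(task: str, max_steps: int = 5) -> str:
    prompt = f"Task: {task}"
    for _ in range(max_steps):
        reply = call_llm(prompt)                          # the LLM decides the next action
        if reply.startswith("FINISH:"):
            return reply.removeprefix("FINISH:").strip()
        name, arg = reply.removeprefix("ACTION: ").rstrip(")").split("(", 1)
        observation = TOOLS[name](arg)                    # execute the chosen tool
        prompt += f"\nObservation: {observation}"         # feed the result back to the LLM
    return "gave up"

print(run_agent("What is the capital of France?"))        # -> Paris
```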
LRM - Large Reasoning Model
- AI model designed to perform complex reasoning tasks
- Breaks problems down into smaller steps, producing explicit reasoning traces (see the example below)
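An illustrative example of the reasoning-trace format; the trace below is hand-written for illustration, not the output of any particular model.

```python
# Hand-written trace, used only to show the step-by-step structure an LRM produces.
trace = """Step 1: A train covers 120 km in 2 hours, so its speed is 120 / 2 = 60 km/h.
Step 2: At 60 km/h, covering 180 km takes 180 / 60 = 3 hours.
Answer: 3 hours"""

steps = [line for line in trace.splitlines() if line.startswith("Step")]
answer = trace.splitlines()[-1].removeprefix("Answer: ")
print(f"{len(steps)} reasoning steps, final answer: {answer}")
```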
VLM - Vision-Language Model
- Capable of jointly understanding and processing images (and, in some models, video) together with text (see the scoring sketch below)
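A CLIP-style sketch of how a VLM can relate images and text by embedding both into a shared space and comparing the vectors. The encoders below are fixed random projections used as placeholders (assumptions for illustration), not trained vision or text towers.

```python
import numpy as np

rng = np.random.default_rng(1)
D = 64                                                    # shared embedding size (toy value)
W_IMG = rng.normal(size=(8 * 8 * 3, D))                   # placeholder "vision tower": fixed random projection
W_TXT = rng.normal(size=(32, D))                          # placeholder "text tower": fixed random projection

def encode_image(pixels):
    v = pixels.ravel() @ W_IMG
    return v / np.linalg.norm(v)

def encode_text(text):
    codes = np.array([ord(c) for c in text[:32].ljust(32)], dtype=float)
    v = codes @ W_TXT
    return v / np.linalg.norm(v)

image = rng.normal(size=(8, 8, 3))                        # stand-in for real pixels
captions = ["a photo of a cat", "a photo of a car"]
scores = {c: float(encode_image(image) @ encode_text(c)) for c in captions}
print(scores)                                             # a trained VLM would score the matching caption highest
```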
MoE - Mixture of Experts
- Divides the model into smaller sub-networks ("experts"); a gating/router network activates only a few experts per input (see the routing sketch below)
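A minimal NumPy sketch of MoE routing: a gate scores the experts and only the top-k run for a given input, so most parameters stay idle per token. The sizes and top-k value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, n_experts, top_k = 16, 8, 4, 2               # toy sizes; real MoE layers are far larger

experts = [rng.normal(size=(d_in, d_out)) for _ in range(n_experts)]  # each expert is a small sub-network
gate = rng.normal(size=(d_in, n_experts))                 # router that scores experts for each input

def moe_layer(x):
    logits = x @ gate
    chosen = np.argsort(logits)[-top_k:]                  # activate only the top-k experts
    weights = np.exp(logits[chosen] - logits[chosen].max())
    weights /= weights.sum()                              # softmax over the chosen experts only
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

x = rng.normal(size=d_in)
print(moe_layer(x).shape)                                 # (8,) -- computed with 2 of the 4 experts
```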
LCM - Latent Consistency Model
- Generative AI model for creating images
- Distills a latent diffusion model with a consistency objective, so images can be generated in only a few sampling steps (see the usage sketch below)
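A hedged usage sketch assuming the Hugging Face diffusers library; the checkpoint id and parameter values are example choices (assumptions), and the key point is the small number of inference steps.

```python
# Assumes `diffusers` (and a compatible torch) is installed and the example checkpoint can be downloaded.
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")  # example LCM checkpoint
pipe = pipe.to("cpu")                                     # or "cuda" if a GPU is available

image = pipe(
    prompt="a watercolor painting of a lighthouse at dawn",
    num_inference_steps=4,                                # LCMs need only a handful of steps vs. dozens for standard diffusion
    guidance_scale=8.0,
).images[0]
image.save("lighthouse.png")
```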