>_TheQuery

Recurrent Neural Network

Deep Learning

A neural network architecture with loops that allow information to persist across time steps, designed for processing sequential data.

A recurrent neural network (RNN) is a class of neural networks where connections between neurons form directed cycles, creating an internal state (memory) that allows the network to process sequences of inputs. At each time step, the RNN takes the current input and its previous hidden state to produce an output and a new hidden state, enabling it to model temporal dependencies.
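The per-step update described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a production implementation; the weight names (`W_xh`, `W_hh`, `b_h`) and the use of `tanh` as the activation are conventional choices, not fixed requirements.

```python
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b_h):
    # New hidden state combines the current input with the previous state.
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b_h)

rng = np.random.default_rng(0)
input_dim, hidden_dim, seq_len = 3, 4, 5
W_xh = rng.normal(scale=0.1, size=(input_dim, hidden_dim))
W_hh = rng.normal(scale=0.1, size=(hidden_dim, hidden_dim))
b_h = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)           # initial hidden state (the "memory")
xs = rng.normal(size=(seq_len, input_dim))
for x_t in xs:                     # unroll across time steps
    h = rnn_step(x_t, h, W_xh, W_hh, b_h)
```

Note that the same weights are reused at every time step; only the hidden state `h` changes as the sequence is consumed.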

Vanilla RNNs suffer from the vanishing and exploding gradient problems, which make it difficult to learn long-range dependencies: gradients propagated back through many time steps shrink or grow exponentially. This led to the development of gated architectures like Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), which use gating mechanisms to control the flow of information and better preserve gradients over long sequences.
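The gating idea can be sketched with a GRU step, which uses an update gate and a reset gate to decide how much of the previous state to keep. Weight names are illustrative and biases are omitted for brevity; this is a sketch of the mechanism, not a complete cell.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gru_step(x_t, h_prev, W_z, U_z, W_r, U_r, W_h, U_h):
    z = sigmoid(x_t @ W_z + h_prev @ U_z)               # update gate: how much to refresh
    r = sigmoid(x_t @ W_r + h_prev @ U_r)               # reset gate: how much history to use
    h_tilde = np.tanh(x_t @ W_h + (r * h_prev) @ U_h)   # candidate new state
    # Interpolate between the old state and the candidate; when z is near 0,
    # the old state passes through almost unchanged, which helps gradients survive.
    return (1.0 - z) * h_prev + z * h_tilde
```

The final interpolation is the key to gradient preservation: the additive path through `h_prev` avoids repeated squashing through a nonlinearity at every step.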

While RNNs were once the dominant architecture for sequential tasks like language modeling, machine translation, and speech recognition, they have been largely superseded by transformers for most NLP applications. However, RNNs and their variants still find use in time-series forecasting, real-time audio processing, and other domains where sequential processing is natural or where computational constraints favor recurrent over attention-based architectures.

Last updated: February 20, 2026