The article examines the limits of autoregressive language models, which struggle with complex reasoning because they spend a fixed amount of computation per generated token. It proposes augmenting them with associative memory structures such as graphs, which let models represent real-world relationships explicitly, perform multi-hop reasoning, and retrieve relevant information efficiently. The piece also highlights the potential of combining graph-based structures with language models for applications such as question answering and problem-solving.
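As a minimal sketch of the graph-based retrieval idea, the snippet below (an illustrative example, not code from the article; the triples, function names, and hop limit are all assumptions) stores knowledge as (subject, relation, object) triples and uses breadth-first traversal to follow chains of facts, the kind of multi-hop lookup a language model alone handles poorly:

```python
from collections import deque

def build_graph(triples):
    """Index (subject, relation, object) triples as an adjacency map."""
    graph = {}
    for subj, rel, obj in triples:
        graph.setdefault(subj, []).append((rel, obj))
    return graph

def multi_hop(graph, start, max_hops=2):
    """Collect every relation path reachable from start within max_hops edges."""
    results = []
    queue = deque([(start, [])])
    while queue:
        node, path = queue.popleft()
        if len(path) >= max_hops:
            continue
        for rel, obj in graph.get(node, []):
            new_path = path + [(node, rel, obj)]
            results.append(new_path)
            queue.append((obj, new_path))
    return results

# Hypothetical two-fact example: chaining "born_in" and "capital_of"
# answers "Which country was Marie Curie born in?" in two hops.
triples = [
    ("Marie Curie", "born_in", "Warsaw"),
    ("Warsaw", "capital_of", "Poland"),
]
graph = build_graph(triples)
paths = multi_hop(graph, "Marie Curie", max_hops=2)
```

Each returned path is an explicit chain of edges, so the retrieval step is inspectable in a way that a model's internal activations are not; a language model could then be prompted with the retrieved chain rather than asked to recall both facts implicitly.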