Vision-language-action (VLA) policies are robotic control policies that integrate visual perception, language understanding, and action execution in a unified model. They allow robots to interpret high-level instructions expressed in natural language and ground them in visual observations. VLA approaches are widely used in embodied AI and general-purpose robotics because they enable flexible task execution without task-specific reprogramming. By leveraging shared multimodal representations, VLA policies generalize better across environments and tasks. Although they often build on large neural architectures, their core contribution lies in robotic decision-making. VLA policies exemplify the trend toward more autonomous, instruction-following robots.
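To make the interface concrete, the core mapping a VLA policy implements — from an image observation and a natural-language instruction to a low-level action — can be sketched as below. This is a toy illustration only: the class, encoders, and dimensions are hypothetical, and real VLA systems replace the hand-rolled encoders here with large pretrained vision and language models.

```python
import numpy as np

class ToyVLAPolicy:
    """Minimal sketch of a vision-language-action policy interface.

    Hypothetical stand-in: real policies use large pretrained
    multimodal encoders. This only illustrates the
    (image, instruction) -> action mapping.
    """

    def __init__(self, action_dim: int = 7, feat_dim: int = 32, seed: int = 0):
        rng = np.random.default_rng(seed)
        # Linear head mapping fused features to an action
        # (e.g., a 7-DoF arm command).
        self.head = rng.normal(size=(2 * feat_dim, action_dim))
        self.feat_dim = feat_dim

    def encode_image(self, image: np.ndarray) -> np.ndarray:
        # Stand-in vision encoder: global average pooling,
        # tiled out to a fixed feature size.
        return np.full(self.feat_dim, image.mean())

    def encode_text(self, instruction: str) -> np.ndarray:
        # Stand-in language encoder: hash tokens into a
        # fixed-size bag-of-words vector.
        vec = np.zeros(self.feat_dim)
        for token in instruction.lower().split():
            vec[hash(token) % self.feat_dim] += 1.0
        return vec

    def act(self, image: np.ndarray, instruction: str) -> np.ndarray:
        # Fuse the two modalities and decode an action vector.
        fused = np.concatenate(
            [self.encode_image(image), self.encode_text(instruction)]
        )
        return fused @ self.head

policy = ToyVLAPolicy()
obs = np.zeros((64, 64, 3))  # dummy camera frame
action = policy.act(obs, "pick up the red block")
print(action.shape)  # (7,)
```

The key design point the sketch captures is that grounding happens at the fusion step: the same action head sees both the visual features and the instruction features, so changing the instruction changes the action without reprogramming the policy.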