The advancement of autonomous robots still lacks a comprehensive framework for task execution awareness, enabling the generation of autonomous behaviors and responses in unknown dynamic environments. Situation awareness aims to enhance the robot's understanding of its surroundings while it performs long-horizon tasks. In this work, we propose CRESTA, a novel bio-inspired cognitivist framework for semantic-driven task awareness that addresses the intricate challenges of perceiving, navigating, and manipulating in dynamic environments. CRESTA's objective of effective robot awareness relies on the perceived environment semantics and on the combined use of online planning, reasoning, and monitoring, while also enabling recovery from task-level failures. It is designed as a set of online synchronous processes for (a) collecting and analyzing sensor data and updating the world model description, (b) real-time decision-making and task-state monitoring, and (c) executing each action. Being highly modular and configurable for assorted robotic systems, the proposed framework aims for adaptability across diverse robotic platforms and tasks. In this work, a detailed description of the CRESTA framework is accompanied by demonstrative tasks that showcase its capabilities on the CENTAURO robot, offering insights into its potential efficacy in real-world scenarios. In the discussed experimental results, CRESTA leads the robot to navigate towards and manipulate a lever, recovering from failures by adapting the parameters of its actions, e.g., increasing the applied force to successfully push the object.
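The three synchronous processes in (a)–(c) can be pictured as concurrent loops sharing a world model and an action queue. The following is a minimal, purely illustrative sketch: all class and function names, the force-escalation recovery rule, and the threshold values are assumptions made for illustration, not the authors' implementation.

```python
import threading
import queue

class WorldModel:
    """Thread-safe store for the perceived environment semantics (illustrative)."""
    def __init__(self):
        self._lock = threading.Lock()
        self._facts = {}

    def update(self, key, value):
        with self._lock:
            self._facts[key] = value

    def query(self, key):
        with self._lock:
            return self._facts.get(key)

def perception_loop(world, sensor_readings):
    # (a) collect and analyze sensor data, updating the world model description
    for reading in sensor_readings:
        world.update("lever_pose", reading)

def decision_loop(actions, n_attempts):
    # (b) real-time decision-making and task-state monitoring; here the
    # recovery policy simply escalates the push force after each attempt
    force = 10.0
    for _ in range(n_attempts):
        actions.put({"name": "push_lever", "force": force})
        force += 5.0

def execution_loop(actions, results, required_force):
    # (c) execute each queued action and record success/failure
    # (a None sentinel stops the loop)
    while True:
        action = actions.get()
        if action is None:
            break
        results.append(action["force"] >= required_force)

world = WorldModel()
actions = queue.Queue()
results = []
threads = [
    threading.Thread(target=perception_loop, args=(world, [(0.5, 0.2, 0.1)])),
    threading.Thread(target=decision_loop, args=(actions, 3)),
    threading.Thread(target=execution_loop, args=(actions, results, 18.0)),
]
for t in threads:
    t.start()
threads[0].join()
threads[1].join()
actions.put(None)  # stop the executor once all actions are queued
threads[2].join()
# forces tried: 10, 15, 20 -> only the third attempt clears the 18.0 threshold
print(results)
```

In a real system the loops would run continuously and the monitor would trigger replanning on failure; the sketch only captures the shared-state, concurrent-process shape that the framework's synchronous design implies.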
Fig. 1: CRESTA framework overview. The schema depicts the hybrid nature of the framework, distinguishing the high semantic level from the low control level. Perception is illustrated as a separate component because it serves both levels: the first for semantic handling and the second for reactive behaviors. At each level the framework offers several cutting-edge functions for autonomous robot behaviors in long-horizon tasks. The key ones are Task Awareness, which implements the planning; Action Awareness, which monitors the task states during execution; and the World Model, which can also answer reasoning queries.
Fig. 2: Execution overview. Interaction between the CRESTA Task Manager and Action Awareness, with their main mechanisms, during the execution of an example lever-arm pushing scenario.
Fig. 3: Experimental task setup. Two examples of the robot's view during the experiments are shown here, while the Action Timeline box of Fig. 2 contains frames extracted from an external camera.
Fig. 4: CRESTA architecture. A summary of each component is provided, along with the possible interactions between them. Components in light blue are implemented as node entities, while those in yellow are modules within a node.
All the data corresponding to the experiments shown in the Video are available HERE.