Developing self-programmable AI devices
Modern technology increasingly depends on machine intelligence. Manufacturing processes, smart objects and advanced robots all rely on mechanisms that offer some degree of programmability. Yet these programs are limited, as their logic is based on hardwired rules designed by humans. Problems can arise when these devices face new or unknown situations beyond their original design.

“Programs often execute in unexpected circumstances,” explains Giuseppe De Giacomo, professor of Computer Science at the University of Oxford. “In many application areas, it is simply too costly and error-prone to delegate to software engineers to list and handle all possible adaptation tasks that may arise in the mechanism execution.” When applications must handle wholly unexpected circumstances – whether from interactions with the real world or with humans making decisions based on unmodelled circumstances – it is infeasible to determine a priori all the adaptations that may be needed, he explains.

Generative artificial intelligence (GenAI) is seen as a powerful tool for replacing preprogrammed solutions with learned ones and has been widely adopted in fields including medicine. Yet we still don’t know exactly how and why GenAI makes the decisions it does, or whether they are correct for the task at hand. “These solutions have a black-box nature, which restrains their adoption in so-called safety-critical systems, where decisions may have serious consequences, such as for safety or security,” says De Giacomo.

In the WhiteMech project, which was funded by the European Research Council, De Giacomo and his team began developing tools to create white-box self-programming mechanisms able to self-generate behaviour to achieve certain goals and – importantly – explain how they did so.
Creating transparent white-box mechanisms
The WhiteMech project proposed self-programming solutions based on a mathematical model of the environment in which the system operates, and of how the system’s actions affect that environment. In AI, ‘planning’ usually means that, given a model of the world and a desired goal, the system computes a sequence of actions to achieve that goal. In WhiteMech, the researchers took a more nuanced route, creating programs that can perform complex tasks extending over time, such as going through certain steps – possibly depending on the observations collected while executing – while always remaining within a safe region.

And while many IT companies routinely check their systems for correctness after the fact, WhiteMech aimed to generate programs that provably perform their tasks automatically – a problem known as ‘reactive synthesis’. “In WhiteMech we want to use reactive synthesis while the system is in operation: when an exceptional circumstance manifests itself, the system autonomously computes and enacts an appropriate reaction with formal guarantees of correctness,” adds De Giacomo. Through WhiteMech, the team successfully developed the science, methodologies, algorithms and tools to address this basic problem: building white-box self-programming mechanisms.
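To make the idea of planning over an environment model concrete, the sketch below shows a minimal, illustrative planner in Python: given an explicit model of states and actions and a goal, it searches for a sequence of actions that reaches the goal while only ever visiting states deemed safe – a toy stand-in for the kind of temporally extended requirement described above. The names (plan, transition, is_safe) and the grid example are assumptions for illustration, not WhiteMech’s actual tools or algorithms.

```python
from collections import deque

def plan(initial, goal, actions, transition, is_safe):
    """Breadth-first search for a sequence of actions reaching `goal`
    while visiting only states for which `is_safe(state)` holds."""
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for action in actions:
            nxt = transition(state, action)
            if nxt is None or nxt in visited or not is_safe(nxt):
                continue  # skip invalid, already-explored or unsafe states
            visited.add(nxt)
            frontier.append((nxt, path + [action]))
    return None  # no plan achieves the goal within the safe region

# Toy model: a robot on a 3x3 grid, moving from (0, 0) to (2, 2)
# while never entering the unsafe cell (1, 1).
MOVES = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def transition(state, action):
    x, y = state[0] + MOVES[action][0], state[1] + MOVES[action][1]
    return (x, y) if 0 <= x <= 2 and 0 <= y <= 2 else None

print(plan((0, 0), (2, 2), list(MOVES), transition, is_safe=lambda s: s != (1, 1)))
# prints a shortest safe action sequence, e.g. ['down', 'down', 'right', 'right']
```

Reactive synthesis goes further than this one-shot search: rather than a fixed sequence of actions, it produces a strategy that reacts to whatever the environment does at each step, with formal guarantees that the goal is met in every case the model allows.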
Attracting interest among smart industries
While WhiteMech was a basic science project, the results have already attracted interest from several communities, including smart factories, robotics, digital twins and business process management. “A clear witness of this interest is the number of citations that the scientific papers related to WhiteMech are attracting,” notes De Giacomo. The team will continue their research, revisiting the tools and techniques and transitioning them from the lab into real-world applications. WhiteMech has also opened up new avenues for further research. These include using GenAI to mathematically model a space – a room, say – and then using WhiteMech techniques to automatically carry out tasks within it.