Can the usage of simulators and digital twins enhance trustworthiness in AI?
AI (and more specifically ML) can produce hard-to-explain outputs (e.g., non-linear prediction functions), reinforcing the "black-box" perception that humans have of these systems. To address this, there is an ongoing discussion on integrating simulators and digital twins into the operation of AI systems. The use of simulators is expected to improve the trustworthiness and reliability of AI, but many questions remain open and considerable standardization work is still needed.
A simulator or a digital twin can serve as a safe environment in which ML models are trained, tested, evaluated, and validated. However, the insights obtained in the simulation domain are only as valuable as the simulation is realistic, i.e., as close as its characterization comes to the real phenomenon.
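To make this concrete, here is a minimal sketch of the idea: a model is trained entirely inside a simulated environment and then compared against real observations, so the error gap quantifies how realistic the simulation is. The `simulate` and `observe_real` functions below are hypothetical stand-ins, not taken from any specific system; in practice a domain-specific simulator or digital twin would play their role.

```python
# Sketch: train an ML model in a simulated "safe environment", then check
# how well it transfers to (here, synthetic) real-world observations.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

def simulate(x, noise=0.05):
    """Hypothetical simulator: an idealized model of the real phenomenon."""
    return np.sin(2 * np.pi * x) + noise * rng.standard_normal(x.shape)

def observe_real(x, noise=0.1, bias=0.3):
    """Stand-in for real measurements, which deviate from the simulation."""
    return np.sin(2 * np.pi * x) + bias * x + noise * rng.standard_normal(x.shape)

# Train entirely inside the simulation domain.
x_train = rng.uniform(0.0, 1.0, size=(500, 1))
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(x_train, simulate(x_train.ravel()))

# Evaluate against both domains: the difference between the two errors
# reflects how closely the simulation characterizes the real phenomenon.
x_test = rng.uniform(0.0, 1.0, size=(200, 1))
pred = model.predict(x_test)
sim_mse = mean_squared_error(simulate(x_test.ravel()), pred)
real_mse = mean_squared_error(observe_real(x_test.ravel()), pred)
print(f"MSE vs. simulation: {sim_mse:.4f}")
print(f"MSE vs. real data:  {real_mse:.4f}  (gap = sim-to-real mismatch)")
```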
Closely related to this, at the last ITU-T Study Group 13 meeting (4-15 July 2022, Geneva), a recommendation on ML sandboxes for future networks, entitled "Architectural framework for Machine Learning Sandbox in future networks including IMT-2020", was consented for approval. This is one of the first standards of its kind and opens the door to an exciting new field.
Mathematical simulation (model-based, as opposed to statistical ML techniques) has the great advantage that its equations and mathematics can be explained and documented explicitly. Such models do not rely on big data or statistical identification, but on technical or scientific knowledge.
Integrated into a digital twin, this offers the possibility of building the knowledge base on clear and safe formulations, as the sketch below illustrates. I think this kind of AI should be promoted and reinforced.
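As a minimal sketch of this model-based approach: a digital twin component built from an explicit, documented physical equation (here Newton's law of cooling, dT/dt = -k(T - T_ambient)) rather than from data fitting. The parameter values are illustrative assumptions, not measured constants, and the cooling example is chosen for simplicity rather than drawn from the discussion above.

```python
# Model-based digital twin component: the governing equation is written
# down and documented explicitly; no big data or statistical fitting.
import numpy as np
from scipy.integrate import solve_ivp

K = 0.07          # cooling coefficient [1/min]; assumed from domain knowledge
T_AMBIENT = 21.0  # ambient temperature [deg C]; assumed

def cooling(t, T):
    """Newton's law of cooling: dT/dt = -K * (T - T_AMBIENT)."""
    return -K * (T - T_AMBIENT)

# Simulate 60 minutes of cooling from an initial temperature of 90 deg C.
sol = solve_ivp(cooling, t_span=(0.0, 60.0), y0=[90.0],
                t_eval=np.linspace(0.0, 60.0, 7))

for t, T in zip(sol.t, sol.y[0]):
    print(f"t = {t:4.0f} min  ->  T = {T:5.1f} C")
```

Because every term in the model is traceable to a documented equation and a named parameter, its behavior can be audited and explained in a way that purely data-driven predictors cannot.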