Space Mapping - A Gateway to Explainable AI
AI's effectiveness is undeniable. However, the lack of explainability of AI models poses a significant challenge to application safety and future development. Space mapping techniques, rooted in human cognition and intuition, have been explained both mathematically and intuitively. They exploit intuition and knowledge, two facets of human intelligence, by connecting unknown complex entities (referred to as "fine models") to learned knowledge (referred to as "coarse models") through simple formulations. Knowledge is updated with intuition through classic mathematical formulas such as the Broyden update. Parameter extraction and prediction are the two key components of space mapping: parameter extraction leverages human intuition to compare the unknown with the known and locate it in the knowledge space, while prediction identifies new solutions within that space. Optimization techniques serve as the fundamental engine of both components. AI techniques, which typically involve learning and prediction processes, likewise depend on optimization for their success. By comparing space mapping with AI techniques, the space mapping concept can potentially be used to explain AI models. Furthermore, space mapping has been proven convergent and may be implemented as a safeguard for AI models to ensure convergence.
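To make the parameter-extraction and Broyden-update loop concrete, the sketch below implements one well-known space mapping variant (aggressive space mapping) in Python. It is a minimal illustration, not the paper's own implementation: the toy `fine_response` and `coarse_response` models, the sample grid, and the shift/distortion constants are all invented here for demonstration.

```python
# Minimal sketch of aggressive space mapping with a Broyden update.
# The "fine" and "coarse" models below are toy stand-ins for illustration.
import numpy as np
from scipy.optimize import minimize

t = np.linspace(-3.0, 5.0, 41)  # sample points of the model responses

def coarse_response(z):
    """Cheap 'coarse model': the learned knowledge, easy to optimize."""
    return 1.0 / (1.0 + (t - z) ** 2)

def fine_response(x):
    """Expensive 'fine model': same family, but with an unknown shift/distortion."""
    return 1.0 / (1.0 + (t - (x + 0.3 + 0.05 * x ** 2)) ** 2)

# Design target: a response the coarse model meets exactly at z = 1.
target = coarse_response(1.0)

# Step 1: optimize the coarse model once (find the coarse optimum z*).
z_star = minimize(lambda z: np.sum((coarse_response(z[0]) - target) ** 2),
                  x0=[0.0]).x

def parameter_extraction(x):
    """Compare the unknown to the known: find the coarse parameters whose
    response best matches the fine response at x."""
    return minimize(lambda z: np.sum((coarse_response(z[0]) - fine_response(x[0])) ** 2),
                    x0=z_star).x

# Step 2: drive the residual p(x) = PE(x) - z* to zero with a quasi-Newton
# loop, refining the mapping Jacobian B by Broyden rank-one updates.
x = z_star.copy()            # initial fine-model guess: the coarse optimum
B = np.eye(1)                # Broyden estimate of the mapping Jacobian
p = parameter_extraction(x) - z_star
for k in range(25):
    h = np.linalg.solve(B, -p)   # quasi-Newton step in fine-model space
    x = x + h
    p = parameter_extraction(x) - z_star
    if np.linalg.norm(p) < 1e-8:
        break
    # Broyden secant update; since B h = -p_old by construction, the
    # correction term reduces to the new residual.
    B = B + np.outer(p, h) / (h @ h)

print(f"fine-model solution x* ≈ {x[0]:.4f} after {k + 1} iterations")
```

Note how the Broyden update refines the mapping between the two parameter spaces from residuals alone: only optimization of the cheap coarse model and a handful of fine-model evaluations are needed, and no fine-model derivatives are ever computed.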