Nov 3, 2020

Explainability of AI systems

According to McKinsey, one-third of AI products that go live need monthly updates to keep pace with changing conditions, such as model drift or a shift in the use case. The same is true in many other areas of computer science, where sophisticated algorithms must be fine-tuned, more or less thoroughly, over extended periods. Users increasingly accept this, having experienced similar update cycles with the firmware in their cars, video games, and phone apps.

A far more significant issue, one that affects many industries, is system explainability: unfortunately, some of the most robust algorithms, deep learning models in particular, are difficult to analyze. Current research thus points to a gap between the explainability demanded by practical applications and the transparency available to internal stakeholders.
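To make the drift problem concrete, here is a minimal sketch of one common monitoring approach: comparing a live feature distribution against its training distribution with a two-sample Kolmogorov-Smirnov test. The threshold and the synthetic data below are illustrative assumptions, not something drawn from the McKinsey report.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(train_col, live_col, alpha=0.05):
    """Flag drift when the live distribution of a feature differs
    significantly from its training distribution (two-sample KS test)."""
    statistic, p_value = ks_2samp(train_col, live_col)
    return p_value < alpha, p_value

# Illustrative data: training distribution vs. a shifted live distribution.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=5000)
live = rng.normal(loc=0.4, scale=1.0, size=1000)   # the mean has drifted

drifted, p = detect_drift(train, live)
print(f"drift detected: {drifted} (p={p:.4f})")
```

When a feature trips the test, that is a signal to investigate and possibly retrain, which is exactly the kind of recurring maintenance the statistic above describes.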
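As a counterpoint to opaque deep networks, a model-agnostic technique such as permutation feature importance can give at least a coarse explanation of any fitted model. The sketch below uses scikit-learn's built-in implementation; the random-forest model and the toy dataset are illustrative choices on my part, not something prescribed by the research cited above.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative dataset and model; any fitted estimator would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how
# much the test-set score degrades. Large drops mean the model relies
# heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five most influential features.
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]:<25} {result.importances_mean[idx]:.4f}")
```

Techniques like this do not close the explainability gap for deep models, but they illustrate the level of transparency that internal stakeholders are asking for.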


