Machine Learning Helping the Transparency of Robots
- Vivan Vemula
- Dec 7, 2024
- 2 min read

Source: Freepik
Being transparent, as most of us know, means being open and honest with one another, but how can this idea apply to artificial intelligence? For starters, transparency makes AI systems more open in their design, development, and deployment, allowing them to integrate more sources of information and making their output more accessible and easier for users to understand.
Artificial intelligence can also be explainable, meaning the system can lay out the decisions it made and the reasoning behind them. This makes the system more relatable and helps build a connection between it and the user. For example, an explainable AI system might indicate the reasons why you are in debt, such as overspending because you are not budgeting your money wisely.
Interpretability, on the other hand, is about understanding the data and processes a system uses: why and how it reached a specific output from a given input. All of these qualities can be built into open-source artificial intelligence. However, in some cases AI may not reach the levels of transparency and openness that humans can.
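To make the debt example above concrete, here is a minimal, hypothetical sketch of what an explainable system could look like: a hand-written linear scoring model whose per-feature contributions can be read directly, so the user sees *why* the score came out the way it did. The feature names and weights are invented for illustration, not taken from any real system.

```python
# Hypothetical explainable "debt risk" model: a linear score whose
# per-feature contributions are reported alongside the final output.

def explain_debt_risk(features):
    """Score the model and report each feature's contribution to the score."""
    # Invented weights for illustration only.
    weights = {
        "overspending_ratio": 2.0,   # spending/income above 1.0 raises risk
        "savings_rate": -1.5,        # saving more lowers risk
        "missed_payments": 0.8,      # each missed payment raises risk
    }
    # Each contribution is weight * value, so the output is fully traceable.
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return score, contributions

score, parts = explain_debt_risk(
    {"overspending_ratio": 1.3, "savings_rate": 0.05, "missed_payments": 2}
)
# Print the biggest drivers first, as an explanation a user could read.
for name, c in sorted(parts.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {c:+.2f}")
print(f"total risk score: {score:.2f}")
```

Because every contribution is visible, the "explanation" here is simply the largest term (overspending, in this run), which mirrors how interpretable models let you trace an output back to its inputs.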
This is because humans can perceive the world through their senses, recognize their own problems, and reach out to others about their troubles. Artificial intelligence cannot do this to the same extent: it cannot understand people the way humans understand each other.
Overall, artificial intelligence can be integrated to help understand, explain, and interpret the problems people face, but it may not fully solve those problems, as it cannot be 100% accurate in most cases.
References
Cimplifi. (2024, March 27). Transparency, explainability, and interpretability of AI. https://www.cimplifi.com/resources/transparency-explainability-and-interpretability-of-ai/