IIHA Musashino 1F, 2-3-5 Nakacho, Musashino-shi, Tokyo 180-0006
Dear MEL Topic Readers,
Can we trust AI if we don’t know how it works?
Which is more reliable, reasonable, or acceptable: a decision made by an educated and qualified human, or one made by Artificial Intelligence powered by machine learning and algorithms? As AI takes on more decision-making roles in areas where human intelligence and judgment used to play a part, such as loan approval, medical diagnosis, and driving, humans are becoming less aware of, and less informed about, how those decisions are made. For example, when a person whose loan application is declined by an AI assessor wants to know the reason, will the machine, or a human, be able to provide a rational explanation in plain language?
An algorithm is a list of mathematical rules or procedures to follow in order to achieve an objective. A machine-learning algorithm adjusts itself as more relevant data is provided. Unlike conventional computer programs, whose logic is written by humans, the outcomes of AI can hardly be retraced or reverse-engineered for an explanation. In other words, it is a black box.
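The contrast can be sketched in a few lines of code. This is only an illustration, not a real loan-approval system: the rule, the tiny perceptron learner, the numbers, and the example data are all invented for the sake of the comparison. The human-written rule can state its reason; the learned model ends up as a handful of numeric weights that give no reason at all.

```python
# A minimal sketch contrasting a human-written rule with a learned one.
# All thresholds and data here are invented for illustration only.

# Conventional program: the rule is written by a human and easy to explain
# ("declined because income was too low" or "debt was too high").
def rule_based_approval(income, debt):
    """Approve a loan if income is high enough and debt is low enough."""
    return income > 50_000 and debt < 10_000

# Machine learning: a tiny perceptron that nudges numeric weights from
# labeled examples. The final weights are just numbers; they carry no
# human-readable reason for any individual decision.
def train_perceptron(examples, epochs=50, lr=0.01):
    w = [0.0, 0.0]  # one weight per input feature
    b = 0.0         # bias term
    for _ in range(epochs):
        for (income, debt), label in examples:
            x = (income / 100_000, debt / 100_000)  # crude scaling
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = label - pred  # -1, 0, or +1
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

# Invented training data: ((income, debt), approved?)
examples = [
    ((80_000, 5_000), 1),
    ((20_000, 15_000), 0),
    ((60_000, 2_000), 1),
    ((30_000, 20_000), 0),
]

print(rule_based_approval(80_000, 5_000))  # True, and we can say exactly why
w, b = train_perceptron(examples)
print(w, b)  # learned weights: usable for prediction, but not an explanation
```

Pointing at the `income > 50_000` line answers an applicant's question directly; pointing at the learned weights does not, which is the black-box problem the letter describes.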
As long as the outcome is right, or at least appropriate for the situation, it won't bother many people. But when it is questioned, the system isn't designed to provide proper or understandable explanations. Will that be problematic?
Enjoy reading and learn what AI decisions are like.