Artificial intelligence (AI) is becoming more integrated into human life through AI chat applications like OpenAI's ChatGPT. More people are relying on them to write reports, get advice, generate art, and so on. But there is a big potential problem: these AI applications are only as good as the data they are trained on. That data is selected by humans, mainly developers, so it's possible for an AI application to be filled with data that is biased in one way or another, for example if the developer only allows data that aligns with a liberal point of view or a conservative one. Not being able to see how an AI application arrives at its decisions, or what data those decisions rest on, is known as the "black box problem."
Basically, the black box problem seems to boil down to this: can we trust AI, given that the information it uses is based in part on human input (whatever knowledge base the developer puts in)?
For Debate:
1. What's your view on the black box problem of AI?
2. Even if AI could eventually "think" and make decisions without human input, should we trust it?
3. How can we resolve the black box problem?
Source: https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained

But Rawashdeh says that, just like our human intelligence, we have no idea of how a deep learning system comes to its conclusions. It "lost track" of the inputs that informed its decision making a long time ago. Or, more accurately, it was never keeping track.
This inability for us to see how deep learning systems make their decisions is known as the “black box problem,” and it’s a big deal for a couple of different reasons. First, this quality makes it difficult to fix deep learning systems when they produce unwanted outcomes.
Deep learning systems are now regularly used to make judgements about humans in contexts ranging from medical treatments, to who should get approved for a loan, to which applicants should get a job interview. In each of these areas, it’s been demonstrated that AI systems can reflect unwanted biases from our human world. (If you want to know how AI systems can become racially biased, check out our previous story on that topic.) Needless to say, a deep learning system that can deny you a loan or screen you out of the first round of job interviews but can’t explain why, is one most people would have a hard time judging as “fair.”
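To make the article's point concrete, here is a minimal sketch in Python with scikit-learn. Everything in it is invented for illustration: the loan features, the synthetic data, and the model are assumptions, not anything from the article. It shows a small neural network handing back an approve/deny decision whose only direct "explanation" is raw weight matrices, and then one common partial remedy (permutation feature importance), which estimates how much each input matters overall without truly opening the box for any single decision.

# A minimal sketch of the "black box" in practice, using a tiny neural
# network on made-up loan data. Features, data, and model are invented
# for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)

# Synthetic applicants: [income, credit_score, debt_ratio]
X = rng.normal(size=(500, 3))
# A hidden rule the model must learn (the "ground truth" we pretend not to know)
y = (X[:, 0] + 0.5 * X[:, 1] - X[:, 2] > 0).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16, 16), max_iter=2000, random_state=0)
model.fit(X, y)

# The model decides, but offers no human-readable reason for the decision.
applicant = np.array([[0.2, -1.0, 1.5]])
print("approved?", bool(model.predict(applicant)[0]))

# The "explanation" the network can give directly: raw weight matrices.
# Nothing here maps to a reason a loan officer could state.
for i, w in enumerate(model.coefs_):
    print(f"layer {i} weights shape: {w.shape}")

# One common (partial) remedy: post-hoc tools such as permutation
# importance, which estimate how much each feature matters on average,
# not why this particular applicant was denied.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "credit_score", "debt_ratio"], result.importances_mean):
    print(f"{name}: {score:.3f}")

The point of the sketch is that even on a toy problem, the trained network's internals are just arrays of numbers; explainability tools bolted on afterward only approximate what the model is doing, which is exactly the gap the article describes.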