Artificial intelligence (AI) is becoming more integrated into human life through AI chat applications like OpenAI's ChatGPT. More people are relying on it to write reports, get advice, generate art, etc. But there is a big potential problem. These AI applications are only as good as the data they are trained on. That data is selected by humans, mainly developers, so it's possible for an AI application to be filled with data that is biased in one way or another, for example if the developer only allows data that aligns more with a liberal point of view or a conservative one. Not being able to know what data an AI application is running on, or how it uses that data to reach its conclusions, is known as the 'black box problem'.

But Rawashdeh says that, just like our human intelligence, we have no idea of how a deep learning system comes to its conclusions. It “lost track” of the inputs that informed its decision making a long time ago. Or, more accurately, it was never keeping track.

This inability for us to see how deep learning systems make their decisions is known as the “black box problem,” and it’s a big deal for a couple of different reasons. First, this quality makes it difficult to fix deep learning systems when they produce unwanted outcomes.

Deep learning systems are now regularly used to make judgements about humans in contexts ranging from medical treatments, to who should get approved for a loan, to which applicants should get a job interview. In each of these areas, it’s been demonstrated that AI systems can reflect unwanted biases from our human world. (If you want to know how AI systems can become racially biased, check out our previous story on that topic.) Needless to say, a deep learning system that can deny you a loan or screen you out of the first round of job interviews but can’t explain why, is one most people would have a hard time judging as “fair.”
Source: https://umdearborn.edu/news/ais-mysterious-black-box-problem-explained
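
To make the problem a bit more concrete (my own sketch, not from the article above): even with full access to a trained model, its individual decisions aren't directly readable, and about the best we can usually do is probe it from the outside. The snippet below trains an opaque classifier on made-up, loan-style data and uses permutation importance to estimate which inputs drive its decisions. All feature names and data here are hypothetical, purely for illustration.

# A minimal sketch of probing a "black box" model: train an opaque classifier
# on synthetic loan-style data, then estimate how much each input feature
# drives its decisions via permutation importance. (Illustrative only.)
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["income", "debt_ratio", "age", "zip_code_bucket"]  # hypothetical
X = rng.normal(size=(1000, 4))
# Synthetic "approval" label driven mostly by the first two features.
y = (X[:, 0] - 0.8 * X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
baseline = model.score(X, y)

# Permutation importance: shuffle one feature at a time and measure the drop
# in accuracy; a large drop means the opaque model leans on that feature.
for i, name in enumerate(feature_names):
    X_perm = X.copy()
    X_perm[:, i] = rng.permutation(X_perm[:, i])
    drop = baseline - model.score(X_perm, y)
    print(f"{name}: accuracy drop {drop:.3f}")

Note this only tells us which inputs the model relies on, not why it combines them the way it does, which is exactly the gap the article is describing.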

Basically, the black box problem seems to boil down to whether we can trust AI, given that the information it uses is based in part on human input (whatever knowledge base the developer puts in).

For Debate:
1. What's your view on the black box problem of AI?
2. Even if AI could eventually "think" and make decisions without human input, should we trust it?
3. How can we resolve the black box problem?
 
3. How can we resolve the black box problem?
This article might provide some ideas when it comes to decentralizing AI...

Much of today’s AI exists in centralized black boxes owned by a few influential organizations. This concentration of control counters the otherwise democratizing potential of AI and hands over outsized influence on society, finance and creativity to a handful of unchecked entities.

As AI systems advance, decentralizing its development and its applications becomes even more critical. Trustless, permissionless AI can power innovation across industries.
...
Decentralized AI distributes the control over humanity’s most capable technology ever, rather than concentrating its power, which mitigates the potential for overarching influence by any one entity.

With development and governance distributed across entities with diverse incentives and priorities, AI can progress in better alignment with individual needs rather than imposing homogeneous outcomes. This nurtures diverse applications rather than having a handful of prevailing models dominate the culture.

Decentralized AI also provides checks against mass surveillance and manipulation by governments or corporations. Centralized control enables advanced AI utilization against citizen interests on massive scales. But decentralized AI limits this avenue of oppression.

Overall, decentralized AI limits any one entity from imposing a single set of incentives, constraints or goals, which is necessary for such a critical tool.

To decentralize AI, we must rethink the fundamental layers that comprise the AI stack. This includes components like computing power, data, model training, fine-tuning and inference. Merely using open source models is not enough if other parts of the stack, such as the entities providing compute for training or inference, remain centralized.

True decentralization requires active coordination across all layers of the AI stack. After all, networks are only as decentralized as their least decentralized component.

This is where markets can provide the necessary boost. Markets are the best coordination mechanisms we have access to for organizing people. As such, decentralized AI networks can compete with their centralized counterparts by deconstructing the AI stack into basic modular functions and creating markets around them.
Source: BuiltIn
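
To make the "deconstruct the AI stack into modular functions and create markets around them" idea a bit more concrete, here is a minimal sketch of my own (not from the BuiltIn piece): independent providers post offers for one layer of the stack (training compute), and a simple market greedily matches a job to the cheapest open capacity instead of routing everything through a single operator. All provider names and prices are hypothetical.

# A toy "market" for one modular layer of the AI stack: compute for training.
# Offers come from independent providers; a job is filled cheapest-first.
from dataclasses import dataclass

@dataclass
class Offer:
    provider: str         # independent compute operator (hypothetical)
    gpu_hours: int        # capacity this provider can sell
    price_per_hour: float

def match_job(gpu_hours_needed: int, offers: list[Offer]) -> list[tuple[str, int, float]]:
    """Greedily fill a training job from the cheapest offers first."""
    plan = []
    remaining = gpu_hours_needed
    for offer in sorted(offers, key=lambda o: o.price_per_hour):
        if remaining <= 0:
            break
        take = min(remaining, offer.gpu_hours)
        plan.append((offer.provider, take, take * offer.price_per_hour))
        remaining -= take
    if remaining > 0:
        raise ValueError("not enough open capacity on the market for this job")
    return plan

offers = [
    Offer("provider_a", 40, 2.50),
    Offer("provider_b", 100, 1.75),
    Offer("provider_c", 30, 3.00),
]
for provider, hours, cost in match_job(120, offers):
    print(f"{provider}: {hours} GPU-hours for ${cost:.2f}")

The same pattern could in principle be repeated for data, fine-tuning and inference, which is the article's point that every layer, not just the model weights, has to be open for the stack to count as decentralized.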
 
AI may be trusted only if all of its initial input has our species' survival as its focus, biasing it away from dominating and ultimately destroying us.
 