computer_science · 2 papers · avg year 2026 · quality 5/5 · strong evidence

Black-box AI Models

Research gap analysis derived from 2 computer_science papers in our local library.

The gap

There is a need for explainable artificial intelligence (XAI) techniques across applications such as medical imaging, cybersecurity, and education to improve the transparency and trustworthiness of black-box models.
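As a concrete illustration of the kind of technique this gap calls for, below is a minimal sketch of permutation feature importance, one common model-agnostic XAI method. The toy model, data, and function names are illustrative assumptions for this sketch, not drawn from the cited papers.

```python
# Minimal sketch of permutation feature importance, a model-agnostic
# XAI technique: shuffle one feature column at a time and measure how
# much the model's accuracy drops. A large drop means the black-box
# model relies heavily on that feature.
import random

def accuracy(model, X, y):
    """Fraction of rows the model classifies correctly."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Mean drop in accuracy when each feature column is shuffled."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            column = [row[j] for row in X]
            rng.shuffle(column)
            # Rebuild the dataset with column j permuted, all else intact.
            X_perm = [row[:j] + [v] + row[j + 1:] for row, v in zip(X, column)]
            drops.append(baseline - accuracy(model, X_perm, y))
        importances.append(sum(drops) / n_repeats)
    return importances

# Toy "black-box": predicts 1 when the first feature exceeds 0.5.
# The second feature is pure noise, so its importance should be ~0.
X = [[random.random(), random.random()] for _ in range(200)]
y = [int(row[0] > 0.5) for row in X]
model = lambda row: int(row[0] > 0.5)

imp = permutation_importance(model, X, y)
```

In practice the same idea is available off the shelf (for example, scikit-learn ships a `permutation_importance` helper); the point of the sketch is that the method needs only query access to the model, which is exactly what black-box settings provide.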

Consensus across the literature

The papers collectively establish that black-box AI models limit interpretability and trust but leave open the specific methods and frameworks needed to address this issue.

Research trend

Emerging — attention growing, methods still coalescing.

Supporting evidence — 2 representative gaps

  • Artificial Intelligence-Driven Environmental Toxicology: Predictive Toxicity Modelling, Forensic Pollution Analysis, and AI-Enabled Public Health Surveillance (2026) · doi

    Black-box AI models are difficult to interpret because the reasoning or process behind conclusions is unknown, and absence of explainable AI (XAI) limits defensibility.

  • The Role of Machine Learning in Cyber Security (2026) · doi

    Many ML algorithms, particularly deep learning models, are considered 'black boxes' due to their complex architectures, making it difficult to understand the reasoning behind ML model decisions and hindering the ability to trust and explain their predictions.



