FINE-GRAINED SOURCE ATTRIBUTION IN RAG-POWERED AGI RESPONSES

Authors

  • Ankur Binwal, Indiana University, USA
  • Puneet Chopra, Panjab University, USA

Keywords

Retrieval-Augmented Generation, Source Attribution, Artificial General Intelligence, Large Language Models, Explainable AI

Abstract

The increasing deployment of Retrieval-Augmented Generation (RAG) systems in critical domains necessitates robust mechanisms for tracing information sources in AI-generated content. This article presents a novel approach to fine-grained source attribution in RAG-powered Artificial General Intelligence (AGI) responses through a multi-stage architecture combining Source-Preserving Embedding (SPE) and Source-Aware Attention (SAA) mechanisms. Our system employs a modified T5 architecture with 3.2 billion parameters and a graph-based SourceRank algorithm for post-generation attribution analysis. Evaluated across 10,000 queries in five domains (medicine, law, finance, technology, and general knowledge), the system achieved 87.3% attribution accuracy, significantly outperforming baseline methods. The system demonstrated particular strength in handling domain-specific content, maintaining a high precision-recall balance, and managing complex multi-source attributions. User studies with 50 domain experts validated the system's effectiveness, with 92% expert agreement on attributions and 88% rating the attribution information as highly helpful. While computational overhead and multi-hop reasoning scenarios present ongoing challenges, our approach substantially advances the transparency and trustworthiness of RAG-powered AGI systems.
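To make the two mechanisms named in the abstract concrete, the Python sketch below shows one plausible reading of them: cross-attention mass aggregated over each retrieved source's token span (a Source-Aware Attention-style signal), followed by a PageRank-style pass over an inter-source similarity graph (in the spirit of the graph-based SourceRank algorithm; the paper cites the original PageRank work below). Since the full text is not reproduced on this page, all function names, the personalisation scheme, and the toy numbers are illustrative assumptions, not the authors' implementation.

  import numpy as np

  def attention_attribution(attn, spans):
      """Aggregate cross-attention mass over each source's token span.

      attn  : (num_output_tokens, num_input_tokens) cross-attention weights
      spans : list of (start, end) input-token spans, one per retrieved source
      Returns a (num_output_tokens, num_sources) score matrix whose rows sum to 1.
      """
      scores = np.stack([attn[:, s:e].sum(axis=1) for s, e in spans], axis=1)
      return scores / scores.sum(axis=1, keepdims=True)

  def source_rank(similarity, relevance, damping=0.85, iters=50):
      """PageRank-style ranking over a source-similarity graph, personalised
      by an attention-derived relevance prior (an assumed form of the
      post-generation 'SourceRank' pass, not the paper's exact algorithm).

      similarity : (S, S) non-negative inter-source similarity matrix
      relevance  : (S,) prior relevance per source (e.g. total attention mass)
      """
      S = similarity.shape[0]
      # Column-normalise the similarity matrix into a stochastic transition matrix.
      col_sums = similarity.sum(axis=0, keepdims=True)
      T = similarity / np.where(col_sums == 0, 1, col_sums)
      p = relevance / relevance.sum()          # personalisation vector
      r = np.full(S, 1.0 / S)                  # uniform initial ranks
      for _ in range(iters):
          r = damping * T @ r + (1 - damping) * p
      return r

  # Toy example: 3 output tokens attending over 2 sources of 4 tokens each.
  attn = np.array([[0.10, 0.20, 0.10, 0.10, 0.20, 0.10, 0.10, 0.10],
                   [0.05, 0.05, 0.10, 0.10, 0.30, 0.20, 0.10, 0.10],
                   [0.20, 0.20, 0.20, 0.20, 0.05, 0.05, 0.05, 0.05]])
  spans = [(0, 4), (4, 8)]
  per_token = attention_attribution(attn, spans)
  relevance = per_token.sum(axis=0)            # total attention mass per source
  sim = np.array([[1.0, 0.3], [0.3, 1.0]])     # toy inter-source similarity
  print(source_rank(sim, relevance))           # final per-source ranking

The damping factor of 0.85 follows the convention of the PageRank paper cited below; the parameters and attribution pipeline actually used by the system are described in the full article.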

References

T. Brown et al., "Language Models are Few-Shot Learners," in Advances in Neural Information Processing Systems, 2020, pp. 1877-1901. https://arxiv.org/abs/2005.14165

P. Lewis et al., "Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks," in Advances in Neural Information Processing Systems, 2020. https://arxiv.org/abs/2005.11401

C. Raffel et al., "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer," Journal of Machine Learning Research, vol. 21, no. 140, pp. 1-67, 2020. https://arxiv.org/abs/1910.10683

A. Vaswani et al., "Attention Is All You Need," in Advances in Neural Information Processing Systems, 2017, pp. 5998-6008. https://research.google/pubs/attention-is-all-you-need/

L. Page, S. Brin, R. Motwani, and T. Winograd, "The PageRank Citation Ranking: Bringing Order to the Web," Stanford InfoLab, 1999. https://www.semanticscholar.org/paper/The-PageRank-Citation-Ranking-%3A-Bringing-Order-to-Page-Brin/eb82d3035849cd23578096462ba419b53198a556

S. Jain and B. C. Wallace, "Attention is not Explanation," in Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2019, pp. 3543-3556. https://aclanthology.org/N19-1357/

M. T. Ribeiro, S. Singh, and C. Guestrin, "'Why Should I Trust You?': Explaining the Predictions of Any Classifier," in Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2016, pp. 1135-1144. https://arxiv.org/abs/1602.04938

A. Adadi and M. Berrada, "Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)," IEEE Access, vol. 6, pp. 52138-52160, 2018. https://ieeexplore.ieee.org/document/8466590

T. Scialom et al., "What are the best Systems? New Perspectives on NLP Benchmarking," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics, 2021, pp. 4205-4215. https://arxiv.org/abs/2202.03799

R. Ying et al., "GNNExplainer: Generating Explanations for Graph Neural Networks," in Advances in Neural Information Processing Systems, 2019, pp. 9240-9251. https://arxiv.org/abs/1903.03894

Published

2024-11-11

How to Cite

Ankur Binwal, & Puneet Chopra. (2024). FINE-GRAINED SOURCE ATTRIBUTION IN RAG-POWERED AGI RESPONSES. INTERNATIONAL JOURNAL OF RESEARCH IN COMPUTER APPLICATIONS AND INFORMATION TECHNOLOGY (IJRCAIT), 7(2), 1104-1113. https://ijrcait.com/index.php/home/article/view/IJRCAIT_07_02_086