Responsible AI for Genomics Research - A Review
Abstract:
Artificial intelligence (AI) has become a crucial tool in genomics research, driven by advances in high-throughput sequencing technologies that generate vast amounts of data. While AI enables the integration and interpretation of diverse omics data, it also raises concerns about trust and responsibility. As AI systems grow more complex, the demand for Responsible and Explainable AI (XAI), which enhances the transparency of AI outputs and decision-making, has emerged as a critical priority for ensuring ethical use. We conducted a comprehensive literature review to select 23 AI models applicable to various fields within genomics, including synthetic biology and cancer research. These models were evaluated against the FAIR principles: Findable, Accessible, Interoperable, and Reusable. Our findings reveal that while the models scored between 60% and 90% across all metrics, they performed particularly well on the Findable and Accessible principles. However, their Interoperable scores were lower, indicating that the models were not designed for broad application across diverse workflows, which limits their potential impact. This study highlights the need for responsible and FAIR practices in AI model development for genomics. We recommend focusing on Interoperability to enhance the usability of models across different settings, ensuring reproducibility through containerization, and facilitating future enhancements by maintaining accessible code repositories. By adopting these practices, AI model authors will enable others to replicate and extend the impact of their models.
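As a minimal sketch of how a FAIR evaluation like the one described above could be scored, the snippet below aggregates per-principle checklist results into percentages. The checklist items, their counts, and the example values are illustrative assumptions, not the actual rubric used in this review.

```python
# Hypothetical FAIR scoring sketch: the checklist items below are
# illustrative assumptions, not the rubric used by the review's authors.

def fair_score(checklist: dict) -> dict:
    """Return the percentage of satisfied checklist items per FAIR principle."""
    return {
        principle: 100 * sum(items) / len(items)
        for principle, items in checklist.items()
    }

# Example evaluation of one hypothetical AI model.
model_checklist = {
    "Findable": [True, True, True, True],    # e.g. DOI, indexed repository, rich metadata, versioning
    "Accessible": [True, True, True],        # e.g. open license, public download, documentation
    "Interoperable": [True, False, False],   # e.g. standard formats, documented API, containerized
    "Reusable": [True, True, False, True],   # e.g. usage license, provenance, tests, examples
}

for principle, pct in fair_score(model_checklist).items():
    print(f"{principle}: {pct:.0f}%")
```

Scoring the example checklist this way reproduces the pattern reported in the review: high Findable and Accessible percentages, with Interoperable lagging behind.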