Contributors: Lewis, Peter; Ahmadi, Zahra
Dates: 2024-08-27; 2024-08-27; 2024-08-01
URI: https://hdl.handle.net/10155/1814
Abstract: Artificial Intelligence (AI) is used to improve assistive technologies, but these systems can fail in various ways. Despite being designed for individuals, many technologies lack user engagement from design to evaluation. This raises the question for users: “Can I trust this technology to perform its intended task?” We conducted a Systematic Literature Review (SLR) of AI-based assistive technologies for persons with visual impairments, focusing on how studies report potential risks and failures. Our findings reveal that many systems lack evaluation by the sight-loss community, and many studies do not adequately report failure cases and associated risks. This oversight can lead to serious safety concerns. To address these issues, we propose TACTIC: a process for Transparent, Accessible Co-design Through Inclusive, Iterative Cycles, emphasizing iterative end-user engagement. TACTIC includes four co-design loops: problem identification, methodology design, solution evaluation, and knowledge sharing. This process aims to improve system design, community engagement, and the standardized reporting of risks, thereby enhancing the safety and effectiveness of AI-based assistive technologies.
Language: en
Subjects: Assistive technology; Artificial Intelligence; Systematic literature review; Failure; Co-design
Title: Building reliable AI assistive technologies: a comprehensive process for inclusive and transparent co-design
Type: Thesis