Author:
Anjimoon Shaik, B Swathi, Sobti Rajeev, Kumar Ashwani, Chauhan Shilpi, Ali Abdul-jabbar A., Bandhu Din
Language:
English, French
Year of Publication:
2024
Subject:
Source:
E3S Web of Conferences, Vol 505, p 03007 (2024)
Document Type:
article
ISSN:
2267-1242
DOI:
10.1051/e3sconf/202450503007
Description:
This paper presents methodologies in image and video processing aimed at improving accessibility for differently abled individuals. Central to this research is the development of algorithms that enable enhanced interpretation of, and interaction with, multimedia content, thereby empowering users with sensory impairments. The study introduces a multi-layered framework that integrates adaptive filtering, object recognition, and augmented reality, tailored to the needs of users with visual and auditory challenges. Semantic scene analysis is leveraged to provide descriptive audio annotations for the visually impaired, facilitating a comprehensive understanding of visual data. For individuals with hearing impairments, the system incorporates real-time sign language interpretation within videos, using deep learning techniques. The efficacy of these solutions is measured against conventional accessibility tools, demonstrating significant improvements in user engagement and comprehension. A novel contribution of this research is the application of machine learning to calibrate the system to individual user profiles, ensuring a personalized and intuitive user experience. The scalability of the proposed system is validated through its implementation across various platforms and content formats. The findings suggest that such technological advances can significantly reduce the barriers differently abled individuals face in accessing multimedia information.
Database:
Directory of Open Access Journals
External Link: