List of accepted submissions

Easy-to-read communication for cancer screening in people with intellectual disabilities: The Slovak perspective

Background: People with intellectual disabilities (PwID) face an increased risk of developing cancer and encounter multiple barriers to participating in screening programs, including limited awareness, communication challenges, and a lack of social support. Cancer remains a major public health issue in Slovakia, yet participation in national screening programs remains below the European average. Strengthening social support from caregivers and healthcare professionals is essential to promote participation in and understanding of preventive health measures. Methods: Within the European COST Action CUPID project CA21123 (Cancer Understanding, Prevention and Improved Detection for People with Intellectual Disabilities), a Summer School in Prague brought together participants from across Europe to co-create inclusive health communication tools. Results: As part of this collaboration, easy-to-read information material was developed to promote understanding of cancer prevention and screening. The Slovak version was adapted to the national screening programs (breast, cervical, and colorectal cancer) and designed to meet the communication needs of PwID. It can also serve as a supportive educational activity in health education courses for nursing students and as an awareness-raising resource for caregivers. Conclusion: This initiative demonstrates how accessible communication and international collaboration can improve cancer prevention and promote equity in health.

Acknowledgment: This work is based upon work from COST Action CUPID CA21123, supported by COST (European Cooperation in Science and Technology). The authors also gratefully acknowledge all participants of the CUPID Summer School in Prague for their collaboration.

Multilingual and Region-Specific Image Captioning and Contextual Scene Recognition using Transformer-Based Architecture for Inclusive Technology Design for the Visually Impaired

Inclusivity has become a central theme in the development of digital technologies. Designing with inclusivity in mind requires that systems remain accessible and valuable to users irrespective of their age, gender, or socio-economic background. However, most applications remain restricted to a single language, primarily English, thereby marginalizing large groups of non-English-speaking individuals. This work explores the integration of Computer Vision (CV) and Natural Language Processing (NLP) to enhance accessibility for visually impaired users, with a particular emphasis on multilingual support in assistive technologies. Through a review of the existing literature and user experiences, the study identifies language barriers as a major obstacle to accessing essential services. To address this, we employ a multi-stage methodology for multilingual image captioning. Image–caption pairs were extracted from the MS COCO dataset, reformatted into JSON, and translated from English to the local language (currently Hindi) to generate a bilingual corpus. The model combines a Convolutional Neural Network (CNN) for image feature extraction with Long Short-Term Memory (LSTM) units for sequence generation, enabling the system to capture the temporal dependencies inherent in natural language. Experimental results indicate that the model can generate Hindi captions with about 80% accuracy, effectively describing visual scenes despite some grammatical limitations. Real-time camera integration and a text-to-speech module further enhance usability by delivering immediate audio captions to visually impaired users. Future work will focus on transformer-based multilingual architectures and larger datasets to improve accuracy, contextual richness, and language coverage, moving towards a robust, speech-enabled assistive platform.
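
The pipeline described above (a bilingual MS COCO-derived corpus feeding a CNN image encoder and an LSTM caption decoder) can be sketched as follows. This is a minimal illustration only: the JSON layout, file names, the TensorFlow/Keras framework, and the choice of ResNet50 as the encoder are assumptions for the sketch, not details taken from the abstract.

```python
# Minimal sketch of the described captioning pipeline (assumptions: TensorFlow/Keras,
# a pre-built bilingual JSON corpus, and a pretrained ResNet50 encoder).
import json
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet50 import preprocess_input

# 1. Load the bilingual (English/Hindi) image-caption corpus.
#    Hypothetical format: [{"image": "path.jpg", "caption_en": "...", "caption_hi": "..."}]
with open("coco_bilingual.json", encoding="utf-8") as f:
    corpus = json.load(f)

hindi_captions = ["<start> " + item["caption_hi"] + " <end>" for item in corpus]

# 2. Tokenize the Hindi captions and pad them to a common length.
tokenizer = tf.keras.preprocessing.text.Tokenizer(oov_token="<unk>", filters="")
tokenizer.fit_on_texts(hindi_captions)
seqs = tf.keras.preprocessing.sequence.pad_sequences(
    tokenizer.texts_to_sequences(hindi_captions), padding="post")
vocab_size = len(tokenizer.word_index) + 1
max_len = seqs.shape[1]

# 3. Extract a fixed-length image feature vector with a pretrained CNN encoder.
cnn = ResNet50(weights="imagenet", include_top=False, pooling="avg")  # 2048-dim output

def image_features(path):
    img = tf.keras.preprocessing.image.load_img(path, target_size=(224, 224))
    x = tf.keras.preprocessing.image.img_to_array(img)
    return cnn.predict(preprocess_input(x[np.newaxis]), verbose=0)[0]

# 4. LSTM decoder conditioned on the image feature via its initial state.
feat_in = layers.Input(shape=(2048,))
seq_in = layers.Input(shape=(max_len,))
feat = layers.Dense(256, activation="relu")(feat_in)
emb = layers.Embedding(vocab_size, 256, mask_zero=True)(seq_in)
hidden = layers.LSTM(256)(emb, initial_state=[feat, feat])
out = layers.Dense(vocab_size, activation="softmax")(hidden)

model = Model([feat_in, seq_in], out)
model.compile(loss="sparse_categorical_crossentropy", optimizer="adam")
# Training would pair each partial caption prefix with its next Hindi word.
```

At inference time the decoder would be run autoregressively from the "<start>" token, and the resulting Hindi caption passed to a text-to-speech engine to deliver the audio output described in the abstract.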
