The challenges of inclusion in an ocular-centric environment: How visually impaired people use AI mobile object recognition when shopping
Research output: Contribution to conference › Conference abstract for conference › Research › peer-review
Standard
The challenges of inclusion in an ocular-centric environment: How visually impaired people use AI mobile object recognition when shopping. / Nielsen, Ann Merrit Rikke; Due, Brian Lystgaard; Lüchow, Louise.
2021. Abstract from IMC17 - International Mobility Conference, Gothenburg, Sweden.
RIS
TY - ABST
T1 - The challenges of inclusion in an ocular-centric environment
AU - Nielsen, Ann Merrit Rikke
AU - Due, Brian Lystgaard
AU - Lüchow, Louise
PY - 2021
Y1 - 2021
N2 - Background: Self-sufficiency when shopping is a known challenge for visually impaired people in an ocular-centric society. Smartphones with AI computer vision and NLP technology offer great potential: they can be used to scan objects and provide their user with verbal information about them, thus promoting inclusion in an important aspect of everyday life without the stigma of assistive technology. However, understanding the relation between the human being, the object and the environment in which it exists is complex when sight is impaired. Method: The paper is based on video ethnography (Heath et al., 2010) of visually impaired people’s daily lives and their use of mobile scanning of products from grocery stores in Denmark. The paper presents video clips analyzed using ethnomethodological multimodal conversation analysis (Streeck et al., 2011). Results: The identified challenges for visually impaired people when scanning objects are to a) hold the phone correctly, b) find the correct angle of the camera, c) find the correct distance to the object, d) hold the object at the correct angle and position, and e) understand the intrinsic nature of the object. We show how visually impaired people accomplish this in action despite their impaired sight. Discussion: Although AI technologies hold promise, this research shows how the technology needs to be adjusted and social practices learned in order for visually impaired people to achieve social inclusion. Heath, C., Hindmarsh, J., & Luff, P. (2010). Video in Qualitative Research. SAGE Publications Ltd. Streeck, J., Goodwin, C., & LeBaron, C. (2011). Embodied Interaction: Language and Body in the Material World. Cambridge University Press.
M3 - Conference abstract for conference
Y2 - 23 April 2021 through 25 April 2021
ER -