The praxeology of bypassing ocular-centric spatial relations: How visually impaired people use AI mobile object recognition when shopping

Research output: Contribution to conference › Conference abstract for conference › Research › peer-review

Standard

The praxeology of bypassing ocular-centric spatial relations : How visually impaired people use AI mobile object recognition when shopping. / Due, Brian Lystgaard; Nielsen, Ann Merrit Rikke; Lüchow, Louise.

2021. 55. Abstract from Digitalizing Social Practices: Changes and Consequences, Odense / Online, Denmark.

Research output: Contribution to conference › Conference abstract for conference › Research › peer-review

Harvard

Due, BL, Nielsen, AMR & Lüchow, L 2021, 'The praxeology of bypassing ocular-centric spatial relations: How visually impaired people use AI mobile object recognition when shopping', Digitalizing Social Practices: Changes and Consequences, Odense / Online, Denmark, 23/02/2021 - 24/02/2021, pp. 55.

APA

Due, B. L., Nielsen, A. M. R., & Lüchow, L. (2021). The praxeology of bypassing ocular-centric spatial relations: How visually impaired people use AI mobile object recognition when shopping. 55. Abstract from Digitalizing Social Practices: Changes and Consequences, Odense / Online, Denmark.

Vancouver

Due BL, Nielsen AMR, Lüchow L. The praxeology of bypassing ocular-centric spatial relations: How visually impaired people use AI mobile object recognition when shopping. 2021. Abstract from Digitalizing Social Practices: Changes and Consequences, Odense / Online, Denmark.

Author

Due, Brian Lystgaard ; Nielsen, Ann Merrit Rikke ; Lüchow, Louise. / The praxeology of bypassing ocular-centric spatial relations : How visually impaired people use AI mobile object recognition when shopping. Abstract from Digitalizing Social Practices: Changes and Consequences, Odense / Online, Denmark.

Bibtex

@conference{3f052b6638f141b39668eaf8199d9e5b,
title = "The praxeology of bypassing ocular-centric spatial relations: How visually impaired people use AI mobile object recognition when shopping",
abstract = "This paper explores physical shopping conducted by visually impaired people using AI-technology in their smartphones. Based on computer vision and object recognition, smartphones can be used to scan objects and provide their user with verbal information about them. However, mobile scanning involves highly complex embodied actions. A perspicuous setting for studying the ordered complexity of object-scanning are when visually impaired people goes shopping, using their phones with object-recognition functionalities to scan grocery products. Whereas sighted people can adjust a handheld camera to an object using their vision, thus unnoticeably accomplishing a scanning, visually impaired people are observably orienting to the required actions for the accomplishment of a successful scan. Thus, studying visually impaired people enables us to establish new understandings about the spatial relation between the body, the object and the environment thus contributing to new insights into the use of new AI technology and the interactional and situational practices which are involved. The paper is based on data from the BlindTech project, an ongoing video ethnographic study of visually impaired people{\textquoteright}s daily lives and usage of new technologies in Denmark. Data is analyzed using ethnomethodological multimodal conversation analysis (Streeck et al., 2011). The analysis provides evidence for what we suggest to call the praxeology of bypassing ocular-centric spatial relations, a study of how blind people navigate in a visual dominant world. Cognitive aspects of spatial relations have been described extensively in neuropsychological research (Postma & Ham, 2016). However, spatial relations, i.e. the relation between sensory systems and objects in space, are firstly direct, non-representational and action-based (Gibson, 2002; Briscoe & Grush, 2020). We show how establishing an object-space relation is an observable situated achievement. Findings in the paper relates to embodied actions: a) holding the phone correctly in the hand, b) finding the correct angle of the camera, c) finding the correct distance to the object, d) holding the object in the correct angle and position and e) understanding the intrinsic nature of the object. We show just how visually impaired people accomplish these in-action as locally produced phenomena of order.",
author = "Due, {Brian Lystgaard} and Nielsen, {Ann Merrit Rikke} and Louise L{\"u}chow",
year = "2021",
language = "English",
pages = "55",
note = "null ; Conference date: 23-02-2021 Through 24-02-2021",
url = "https://www.conferencemanager.dk/resemina/home",

}

RIS

TY - ABST

T1 - The praxeology of bypassing ocular-centric spatial relations: How visually impaired people use AI mobile object recognition when shopping

AU - Due, Brian Lystgaard

AU - Nielsen, Ann Merrit Rikke

AU - Lüchow, Louise

PY - 2021

Y1 - 2021

N2 - This paper explores physical shopping conducted by visually impaired people using AI technology in their smartphones. Based on computer vision and object recognition, smartphones can be used to scan objects and provide their users with verbal information about them. However, mobile scanning involves highly complex embodied actions. A perspicuous setting for studying the ordered complexity of object scanning is when visually impaired people go shopping, using their phones with object-recognition functionalities to scan grocery products. Whereas sighted people can adjust a handheld camera to an object using their vision, thus unnoticeably accomplishing a scan, visually impaired people observably orient to the actions required to accomplish a successful scan. Studying visually impaired people therefore enables us to establish new understandings of the spatial relation between the body, the object and the environment, contributing new insights into the use of new AI technology and the interactional and situational practices involved. The paper is based on data from the BlindTech project, an ongoing video-ethnographic study of visually impaired people’s daily lives and use of new technologies in Denmark. Data are analyzed using ethnomethodological multimodal conversation analysis (Streeck et al., 2011). The analysis provides evidence for what we propose to call the praxeology of bypassing ocular-centric spatial relations: a study of how blind people navigate a visually dominant world. Cognitive aspects of spatial relations have been described extensively in neuropsychological research (Postma & Ham, 2016). However, spatial relations, i.e. the relation between sensory systems and objects in space, are first and foremost direct, non-representational and action-based (Gibson, 2002; Briscoe & Grush, 2020). We show how establishing an object-space relation is an observable, situated achievement. Findings in the paper relate to embodied actions: a) holding the phone correctly in the hand, b) finding the correct angle of the camera, c) finding the correct distance to the object, d) holding the object at the correct angle and position, and e) understanding the intrinsic nature of the object. We show just how visually impaired people accomplish these in action, as locally produced phenomena of order.

AB - This paper explores physical shopping conducted by visually impaired people using AI technology in their smartphones. Based on computer vision and object recognition, smartphones can be used to scan objects and provide their users with verbal information about them. However, mobile scanning involves highly complex embodied actions. A perspicuous setting for studying the ordered complexity of object scanning is when visually impaired people go shopping, using their phones with object-recognition functionalities to scan grocery products. Whereas sighted people can adjust a handheld camera to an object using their vision, thus unnoticeably accomplishing a scan, visually impaired people observably orient to the actions required to accomplish a successful scan. Studying visually impaired people therefore enables us to establish new understandings of the spatial relation between the body, the object and the environment, contributing new insights into the use of new AI technology and the interactional and situational practices involved. The paper is based on data from the BlindTech project, an ongoing video-ethnographic study of visually impaired people’s daily lives and use of new technologies in Denmark. Data are analyzed using ethnomethodological multimodal conversation analysis (Streeck et al., 2011). The analysis provides evidence for what we propose to call the praxeology of bypassing ocular-centric spatial relations: a study of how blind people navigate a visually dominant world. Cognitive aspects of spatial relations have been described extensively in neuropsychological research (Postma & Ham, 2016). However, spatial relations, i.e. the relation between sensory systems and objects in space, are first and foremost direct, non-representational and action-based (Gibson, 2002; Briscoe & Grush, 2020). We show how establishing an object-space relation is an observable, situated achievement. Findings in the paper relate to embodied actions: a) holding the phone correctly in the hand, b) finding the correct angle of the camera, c) finding the correct distance to the object, d) holding the object at the correct angle and position, and e) understanding the intrinsic nature of the object. We show just how visually impaired people accomplish these in action, as locally produced phenomena of order.

M3 - Conference abstract for conference

SP - 55

Y2 - 23 February 2021 through 24 February 2021

ER -
