Smartphone tooling: Achieving perception by positioning a smartphone for object scanning

Research output: Chapter in Book/Report/Conference proceeding › Book chapter › Research › peer-review

People have been using tools for thousands of years. These practices of “tooling” have been described as having a “mechanical effect” on an object (e.g., chopping wood). In this chapter we propose that tooling may also have an “informational effect”. To make this argument we explore how visually impaired people (VIPs) carry out physical shopping in grocery stores using their smartphones and the SeeingAI application (app). Using a smartphone for scanning means using it as a tool, hence the chapter title “smartphone tooling”. The data consist of a collection of cases in which a VIP uses the smartphone and app to scan products, and the app then provides audible information. The chapter is based on video ethnographic methodology and ethnomethodological multimodal conversation analysis. It contributes to studies of tools and object-centred sequences by showing how VIPs achieve perception of relevant object information in and through a practice we suggest calling “positioning for object scanning”. This practice is configured by three distinct actions: (1) aligning, (2) adjusting and (3) inspecting. Studying the practices of VIPs enables us to establish new understandings of how spatial relations between body, object and technology are accomplished in situ, without visual perception. This research contributes to EM/CA studies of perception as practical action, visual impairment and object-centred sequences.

Original language: English
Title of host publication: People, Technology, and Social Organization: Interactionist Studies of Everyday Life
Place of publication: Abingdon, Oxon
Publisher: Routledge
Publication date: 1 Jan 2023
Pages: 250-273
ISBN (Print): 9781032230689
ISBN (Electronic): 9781000967074
DOIs
Publication status: Published - 1 Jan 2023

Bibliographical note

Publisher Copyright:
© 2024 selection and editorial matter, Dirk vom Lehn, Will Gibson and Natalia Ruiz-Junco; individual chapters, the contributors.
