Related applications have been explored in the design of games,[2] video learning,[3] and subtle interaction techniques.[4] This vision offers a potential way to reduce the conflict between competing digital and real-world activities. For instance, the "smartphone zombie" phenomenon illustrates how using a mobile phone while walking diminishes situational awareness.[5] Heads-Up Computing instead aims to make digital interactions complementary to real-world activities.
Heads-Up Computing is an evolving area of research and development, with ongoing exploration of its practical applications and implications. While the long-term vision may involve embedding computing capabilities directly into the human body, the current definition centers on wearable technology with body-compatible hardware, multimodal interaction,[6] and resource-aware interaction that adjusts dynamically to the user's context.[7]
[Image: The human's co-evolution with tools]
Characteristics
Heads-Up Computing is defined by three characteristics:
Body-compatible hardware components. This design principle aligns the device's input and output modules with human sensory channels.[8] Recognizing the head and hands as the body's key sensing and actuating hubs, the design comprises a head-piece for visual and audio output (such as smart glasses or earphones), a hand-piece (such as a ring or wristband) for manual input and haptic feedback, and potentially a body-piece (such as a robot) that can perform additional physical tasks for the user.
Multimodal voice, gaze, and gesture interaction. With the head-, hand-, and body-pieces in place, users can issue commands via voice, gaze, or subtle gestures of the head, mouth, and fingers. These modalities were chosen because they can largely be performed while the eyes and hands are busy, thereby covering a broad range of interaction needs in daily activities.
Resource-aware interaction model. The interface of Heads-Up Computing must be generated dynamically according to the resources available to the user at any given moment. The system therefore needs to monitor the user's current activity as well as the environmental constraints at that moment. An important area of development for this paradigm is a quantitative model that optimizes interactions by predicting the relationship between the constraints of human perceptual space and the primary task. This model is responsible for delivering just-in-time information to and from the head-, hand-, and body-pieces.
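The resource-aware idea above can be illustrated with a minimal sketch. This is not an implementation from the cited work; the class and function names are hypothetical, and real systems would use continuous sensing and a predictive model rather than boolean flags. The sketch simply routes output to whichever human channel the primary task leaves free.

```python
# Illustrative sketch of resource-aware output selection (hypothetical
# names; not from the cited publications). The dispatcher checks which
# human channels the primary task occupies and picks a free one.

from dataclasses import dataclass


@dataclass
class UserContext:
    """Channels currently occupied by the user's primary task."""
    eyes_busy: bool = False
    hands_busy: bool = False
    ears_busy: bool = False


def choose_output_modality(ctx: UserContext) -> str:
    """Pick an output channel that does not compete with the primary task."""
    if not ctx.eyes_busy:
        return "visual"   # e.g. smart-glasses display
    if not ctx.ears_busy:
        return "audio"    # e.g. earphone speech
    return "haptic"       # e.g. wristband vibration as a fallback


# Example: walking while in a conversation occupies eyes and ears,
# so the dispatcher falls back to haptic feedback.
walking_and_talking = UserContext(eyes_busy=True, ears_busy=True)
print(choose_output_modality(walking_and_talking))  # -> haptic
```

A full resource-aware model would replace the boolean flags with quantitative estimates of perceptual load, but the routing decision has the same shape: match the information delivery channel to whatever capacity the primary task leaves available.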
^ Zhao, Shengdong; Tan, Felicia; Fennedy, Katherine (September 2023). "Heads-Up Computing: Moving Beyond the Device-Centered Paradigm". Communications of the ACM. 66 (9): 56–63. doi:10.1145/3571722.
^ Soute, Iris; Markopoulos, Panos; Magielse, Remco (July 2010). "Head Up Games: combining the best of both worlds by merging traditional and digital play". Personal and Ubiquitous Computing. 14 (5): 435–444. doi:10.1007/s00779-009-0265-0.
^ Ram, Ashwin; Zhao, Shengdong (19 March 2021). "LSVP: Towards Effective On-the-go Video Learning Using Optical Head-Mounted Displays". Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. 5 (1): 1–27. doi:10.1145/3448118.
^ Sapkota, Shardul; Ram, Ashwin; Zhao, Shengdong (27 September 2021). "Ubiquitous Interactions for Heads-Up Computing: Understanding Users' Preferences for Subtle Interaction Techniques in Everyday Settings". Proceedings of the 23rd International Conference on Mobile Human-Computer Interaction. pp. 1–15. doi:10.1145/3447526.3472035. ISBN 978-1-4503-8328-8.
^ Appel, Markus; Krisch, Nina; Stein, Jan-Philipp; Weber, Silvana (June 2019). "Smartphone zombies! Pedestrians' distracted walking as a function of their fear of missing out". Journal of Environmental Psychology. 63: 130–133. doi:10.1016/j.jenvp.2019.04.003. S2CID 150545607.
^ Ghosh, Debjyoti; Foong, Pin Sym; Zhao, Shengdong; Liu, Can; Janaka, Nuwan; Erusu, Vinitha (21 April 2020). "EYEditor: Towards On-the-Go Heads-Up Text Editing Using Voice and Manual Input". Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems. pp. 1–13. doi:10.1145/3313831.3376173. ISBN 978-1-4503-6708-0. S2CID 218483565.