By HCDE graduate student Jihoon Suh
Thanks to a travel grant from the Department of Human Centered Design & Engineering, I attended Google I/O 2017 in Mountain View, California. The two strongest emphases of the conference were machine learning and virtual reality, alongside the existing areas in which Google excels (smart home, mobile OS, web OS, search, entertainment, and cloud services). At the event, Google announced new products such as Google Lens, an updated Google Assistant, Android O, and Android Auto, all exciting news for fortifying and expanding Google's product ecosystem. There were many design-related sessions at this year's I/O, including Progressive Web Apps (web design), Google Home (voice UI), Google Assistant (multi-modal conversational UI), Android Things (IoT), Android Auto (car UX), and Daydream/Tango (VR/AR).
I attended all of the VR-related sessions, and found "VR, AR, and paths to immersive computing" and "Designing Screen Interfaces for VR" the most insightful, as I am interested in designing interactions for VR environments. Another session I found helpful was the Google Assistant team's "Defining Multimodal Interactions: One Size Does Not Fit All," which showed the design choices and constraints the team explored to create uniform, appropriate interactions for voice-controlled AI across multiple screens.
All Google I/O sessions can be viewed at https://events.google.com/io/.