What is Spatial Computing?
Spatial computing uses cameras and sensors to create a digital model, or digital twin, of people, objects, machines, and the environments they’re in, so that users can interact with them. Interaction and control in spatial computing often rely on gestures, body movements, and voice commands. Spatial computing combines, and builds upon, elements of virtual reality (VR), augmented reality (AR), mixed reality (MR), and digital twin technologies.
Spatial computing is a broad concept covering the ways people and technology interact, but in this context it also refers to the latest interface technology being promoted as the future of the Human Machine Interface (HMI).
What Is an HMI and Why Should You Care?
An HMI is a user interface that acts as the communicator between a user and the machine, computer program, or system they are interacting with. It’s a broad term, sure, and can cover consumer devices too (think of the ill-fated Wii U that Nintendo sold in the 2010s, with its touchscreen GamePad), but we typically use the term HMI in an industrial context for larger machinery.
An HMI is essentially an advanced user interface that helps manufacturers control their machines efficiently to execute a task. Through the interface, HMIs can display data, track production times, color-code messages, and, of course, start and stop the machinery at play. If it sounds like an advanced remote control, well, that’s because it is, sort of. These days, HMIs can function like tablets, pairing software with a touch screen so you can communicate with the machinery in whatever ways the programming allows. They aren’t limited to the tablet form, however; they can also simply be applications running on traditional computers.
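To make the functions above concrete, here is a minimal, hypothetical sketch in Python of the logic behind an HMI panel: displaying machine data, color-coding messages by severity, and starting or stopping the machinery. Every name here (`Machine`, `HMI`, `color_code`) is illustrative, not a real HMI product or API.

```python
# Toy sketch of core HMI functions. All names are hypothetical,
# chosen only to illustrate the ideas described in the text.

class Machine:
    """Stand-in for a piece of industrial machinery an HMI controls."""
    def __init__(self, name: str):
        self.name = name
        self.running = False
        self.temperature_c = 20.0

    def start(self) -> None:
        self.running = True

    def stop(self) -> None:
        self.running = False


def color_code(message: str, severity: str) -> str:
    """Tag a message with the color an HMI screen might render it in."""
    colors = {"info": "green", "warning": "yellow", "alarm": "red"}
    return f"[{colors.get(severity, 'white')}] {message}"


class HMI:
    """Toy HMI panel: shows machine data and relays operator commands."""
    def __init__(self, machine: Machine):
        self.machine = machine

    def status(self) -> str:
        # Display data: current state and a sensor reading, color-coded.
        state = "RUNNING" if self.machine.running else "STOPPED"
        severity = "info" if self.machine.running else "warning"
        return color_code(
            f"{self.machine.name}: {state}, "
            f"{self.machine.temperature_c:.1f} C",
            severity,
        )

    def press_start(self) -> None:
        self.machine.start()

    def press_stop(self) -> None:
        self.machine.stop()


if __name__ == "__main__":
    press = Machine("Hydraulic Press 3")
    panel = HMI(press)
    print(panel.status())   # stopped, shown as a warning
    panel.press_start()
    print(panel.status())   # running, shown as info
```

In a real industrial HMI, the `status` display would be a graphical screen fed by live sensor data over a protocol such as OPC UA or Modbus, but the shape of the logic is the same: read state, present it clearly, and pass operator commands back to the machine.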