Wearables 1.0 are reasonably entrenched consumer goods, despite the ostensible failure of Google Glass, albeit in mostly prosaic forms such as fitness trackers. As the range of applications rises and the data generated increases, these devices are likely to become ever more interlinked and, thanks to machine learning, adaptive and personalised.
Health is one prominent area of wearables application, with bioengineers creating sweat-based sensors to monitor glucose[i]. Other wearables have been developed that monitor both biochemical and electric signals in the human body[ii] and even provide (as a skin implant) 16 years of birth control.
As the field of neuroscience develops, mental health and wellbeing could become the dominant part of the health ecosystem. Researchers have developed temporary nanotechnology ‘tattoos’ able to map emotions[iii], whilst new devices have been created that purportedly stimulate the brain to boost both academic and athletic performance[iv]. With allusions to productivity and hitherto unexplored areas of neuro-management, it is perhaps unsurprising that 81% of CIOs believe wearables will have a role in the workplace and that retailers see wearables as forming a key part of their future immersive retail vision.
As a result, products that create a tailored experience for both enterprise users and consumers are likely to emerge, channelling and compiling input and data from multiple wearable devices and generating actionable insights. The ever-widening range of wearable utility, along with the sheer growth in the number of wearable devices and the IoT traffic they generate, could easily overwhelm individual consumers, as well as the information infrastructure and architecture of many businesses and organisations looking to capitalise on such data. Developing a hub for controlling, correlating, synthesising and extracting key takeaways is therefore clearly important.
Whilst the smartphone is likely to serve as such a hub in the short term, virtual personal assistants not bound to a screen are likely to proliferate. Virtual, augmented and mixed reality, combined with haptics, hold the potential to display data in a more meaningful and personalised way than can be done with a screen. With numerous wearables already in the labs, including contact lenses able to take photos and provide social media updates, the human body, rather than the screen, is set to become the next interface. All of which raises the question: at what point do wearables in all their diversity (implantables, ingestibles and injectables) become part of us, and no longer ‘wearable’?