Robot innovation is moving forward at a fast pace, to the point where some manufacturers are now deploying sensors in their robots, giving them the ability to “see” and “feel”.
Innovation in robotics is moving ahead at a fast pace, driving the expected proliferation of robots in new and existing applications. Deploying sensors in robotics allows for the creation of robots that can “see” and “feel” in a biomimetic way, much as humans do. These sensor-enabled, advanced robots can now undertake more complicated tasks and are being deployed in industrial, commercial, domestic, logistics and other sectors where robot penetration was previously limited. The market for robotic vision and force sensing alone is expected to exceed $16.1 billion by 2027, as described in the newly launched IDTechEx report Sensors for Robotics: Technologies, Markets and Forecasts 2017-2027. The graph below plots the forecast short-term growth in the value of vision systems deployed in industrial and collaborative robots, which represent only one segment of the robotic systems affected by the development of sensing platforms with extensive capabilities.
But why are we seeing such rapid adoption now?
Revenues of vision systems in industrial and collaborative robots
Software advances hand in hand with hardware innovation
Early robots were limited to performing tasks under highly structured conditions, with extensive safety measures, because they had little perception of changes in their operating environment. Applying Artificial Intelligence (AI) concepts in robotics reduces these limitations. Robotic sensing gives new generations of robots decision-making capabilities based on the processing of sensor data (visual, tactile and so on) gathered from their operating environment. Data-driven task performance allows for higher precision even under conditions of increased randomness. In essence, robots with greater awareness are becoming better at the actions they are tasked with, and capable of additional actions previously thought too complex for robotic systems. These improvements enable robot deployment in more demanding application spaces, a key reason for the expected acceleration in the proliferation of sensor-driven robotic systems.
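To make the idea concrete, here is a minimal, hypothetical sketch in Python of the kind of sense-then-decide loop described above. The sensor values are simulated and the names and thresholds are purely illustrative, not taken from the report: the point is simply that the robot chooses its next action from current sensor data rather than following a fixed, blind routine.

from dataclasses import dataclass
import random


@dataclass
class SensorReading:
    grip_force_n: float   # simulated force-sensor value, in newtons
    object_seen: bool     # simulated vision result: is a part in view?


def read_sensors() -> SensorReading:
    """Stand-in for real vision and force-sensing hardware."""
    return SensorReading(grip_force_n=random.uniform(0.0, 12.0),
                         object_seen=random.random() > 0.2)


def decide(reading: SensorReading, max_force_n: float = 10.0) -> str:
    """Pick an action from the latest sensor data."""
    if not reading.object_seen:
        return "search"       # nothing in view: keep looking
    if reading.grip_force_n > max_force_n:
        return "release"      # force too high: avoid damaging the part
    return "grasp"            # object present, force within limits


if __name__ == "__main__":
    for step in range(5):
        reading = read_sensors()
        print(step, reading, "->", decide(reading))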
Key enablers of this robotic revolution can be found in both software and hardware development efforts, which have yielded advanced sensor platforms and processing algorithms along with intuitive, user-friendly interfaces.
Henrik Christensen, the Executive Director of the Institute for Robotics and Intelligent Machines at Georgia Institute of Technology in Atlanta, Georgia, sees tremendous potential for robots coupled with such capabilities, especially as prices are coming down. He said back in 2015: “We’re getting much cheaper sensors than we had before. It’s coming out of cheap cameras for cell phones, where today you can buy a camera for a cell phone for $8 to $10. And we have enough computer power in our cell phones to be able to process it. The same thing is happening with laser ranging sensors. Ten years ago, a modest quality laser range sensor would be $10,000 or more. Now they’re $2,000.”
All in all, machine vision and force sensing enable the design of more versatile, safer robots for a wider range of applications. Of course, different sensing systems suit different application spaces, so each of the many robots under development has its own specific requirements and needs a sensor platform with the right set of features.
End effector force sensing revenues in industrial and collaborative robots
CES: Most useless gadgets of all
Choosing the best of show is a popular pastime, but the worst gadgets of CES also deserve their moment of infamy, writes ARTHUR GOLDSTUCK.
It’s fairly easy to choose the best new gadgets launched at the Consumer Electronics Show (CES) in Las Vegas last week. Most lists – and there are many – highlight the LG roll-up TV, the Samsung modular TV, the Royole foldable phone, the Impossible Burger, and the walking car.
But what about the voice-assisted bed, the smart baby dining table, the self-driving suitcase and the robot that does nothing? In their current renditions, they sum up not only what is bad about technology, but also how technology for its own sake quickly leads us down the rabbit hole of waste and futility.
The following pick of the worst of CES may well be a thinly veiled attempt at mockery, but it is also intended as a caution against getting caught up in the hype and justification of pointless technology.
1. DUX voice-assisted bed
The single most useless product launched at CES this year must surely be a bed with Alexa voice control built in. No, not to control the bed itself, but to manage the smart home features with which Alexa and other smart speakers are associated. Or that any smartphone with Siri or Google Assistant could handle. Swedish luxury bedmaker DUX thinks it’s a good idea to manage smart lights, TV, security and air conditioning through the bed itself. Just don’t say Alexa’s “wake word” in your sleep.
2. Smart Baby Dining Table
Ironically, the runner-up comes from a brand that also makes smart beds: China’s 37 Degree Smart Home. Self-described as “the world’s first smart furniture brand that is transforming technology into furniture”, it outdid itself with a Smart Baby Dining Table. This is a baby feeding table with a removable dining chair that contains a weight detector and an adjustable camera, making children’s weight and temperature visible to parents via the brand’s app. Score one for hands-off parenting.
Click here to read about smart diapers, self-driving suitcases, laundry folders, and bad robot companions.
CES: Tech means no more “lost in translation”
Talking to strangers in foreign countries just got a lot easier thanks to recent advances in translation technology. Last week, major companies and small startups alike showed the CES technology expo in Las Vegas how well their devices handled live translation.
Most existing translation apps, like Bixby and Siri Translate, are still in their infancy when it comes to live speech translation, which creates a need for dedicated solutions such as these:
Babel’s AIcorrect pocket translator
The AIcorrect Translator, developed by Beijing-based Babel Technology, attracted attention as the linguistic king of the show. An advanced application of AI in a consumer device, the pocket translator tackles problems in cross-linguistic communication.
It supports real-time two-way translation between Chinese/English and 30 other languages, including Japanese, Korean, Thai, French, Russian and Spanish, across a range of situations. A significant differentiator is that major languages such as English are further divided into accents. The company puts translation accuracy as high as 96%.
It has a touch screen on which the transcription and audio translation are shown at the same time. Lei Guan, CEO of Babel Technology, said: “As a Chinese pathfinder in the field of AI, we designed the device hoping that hundreds of millions of people can have access to it and carry out cross-linguistic communication entirely barrier-free.”
Click here to read about the Pilot, Travis, Pocketalk, Google and Zoi translators.