Full-day Workshop at ICRA 2021

Organized by: Kourosh Darvish, Joao Ramos, Rafael Cisneros, Luigi Penco, Serena Ivaldi, Eiichi Yoshida, Daniele Pucci

Where: ICRA 2021, fully online. The link will be provided soon.

When: May 31st, 2021


Call for papers and contributions

Submission deadline for extended abstracts and online demos: May 1st 2021

Notification of acceptance: May 10th 2021

Submission of the final version: May 20th 2021


Outline

Objectives

Program

Invited speakers

Organizers

Call for Papers and Contributions


Objectives

Teleoperation allows the integration of the cognitive skills and domain expertise of humans with the physical capabilities of legged robots. It contributes significantly to a wide range of applications, including manipulation in hazardous environments, remote telepresence, minimally invasive telesurgery, and space exploration. In addition, the teleoperation of legged robots permits the combination of locomotion and manipulation, a crucial skill in real applications. However, the complexity of legged robots poses additional challenges for teleoperation, particularly in unstructured, dynamic environments with limited communication. For successful task completion, the level of autonomy, the team organization, the information exchanged between the operator and the robot, and the mapping between human and robot motions all play a vital role in teleoperation performance. Moreover, immersive teleoperation systems employ different technologies to perceive human motions or commands and to provide effective feedback to the human, including haptic and visual feedback. An intuitive and natural design of the teleoperation interface increases the chance of successful task execution.

In that regard, the teleoperation of legged robots raises new questions that must be answered when designing a teleoperation system. How should the proper autonomy level of the robot be determined for a given teleoperation scenario? How should we control the robot in real time in dynamic and unstructured environments? How should we handle time-varying communication delays? What is the human's role in a teleoperation scenario? Does the robot need to predict human motions and intentions? How can we balance the robot's dexterity and maneuverability against its stability while it follows the human's commands? How can we map human motions to robot motions when the human and robot dynamics are largely different? How can we design intuitive and natural interfaces that increase performance and effectiveness while providing an immersive teleoperation experience to the human?

This workshop aims at gathering researchers from legged robotics, teleoperation, and human motion studies to present the latest results in humanoid and quadruped robot teleoperation and to outline the challenges that the research community faces in effectively teleoperating robots in different scenarios. Previously, we organized a successful workshop on “Teleoperation of Humanoid Robots” at the Humanoids 2019 conference in Toronto, Canada. As a result of the exciting discussions during that event, we are submitting a survey paper on the theme to IEEE Transactions on Robotics (T-RO). With the currently proposed workshop, we decided to broaden the scope to dynamic and legged robot teleoperation in real-world scenarios, in order to enlarge the audience and share findings among different communities.

Back to the outline


Program

Due to the ongoing pandemic restrictions, the workshop will be a virtual event.

Day: May 31st

Schedule:

Time Zone              1st Session         2nd Session
Central Europe (CEST)  7:45 - 12:50        14:25 - 19:15
Japan (JST)            14:45 - 19:50       21:25 - 02:15 (+1)
USA, LA (PDT)          22:45 (-1) - 03:50  05:25 - 10:15
USA, NY (EDT)          01:45 - 06:50       08:25 - 13:15

1st Session:

Time (CEST)    Time (JST)     Talk
7:45 - 8:00    14:45 - 15:00  Introduction
8:00 - 8:20    15:00 - 15:20  Halodi Robotics
8:20 - 8:40    15:20 - 15:40  Susumu Tachi: “Telexistence: From Concept to TELESAR VI and Beyond”
8:40 - 9:00    15:40 - 16:00  Jun Ho Oh
9:00 - 9:30    16:00 - 16:30  Live Demo (Asia)
9:30 - 10:00   16:30 - 17:00  Papers (pre-recorded videos)
10:00 - 10:30  17:00 - 17:30  Break
10:30 - 10:50  17:30 - 17:50  Ryo Kikuuwe: “An interactive-simulator-based study on the teleoperation of bipedal robots through inexpensive haptic devices”
10:50 - 11:10  17:50 - 18:10  Yohei Kakiuchi
11:10 - 11:30  18:10 - 18:30  Michael Mistry
11:30 - 11:50  18:30 - 18:50  Abderrahmane Kheddar: “Teleoperation, telepresence and embodiment of humanoid robots”
11:50 - 12:50  18:50 - 19:50  Panel discussion 1

2nd Session:

Time (CEST)    Time (PDT)     Time (EDT)     Talk
14:20 - 14:40  05:20 - 05:40  08:20 - 08:40  Marco Hutter
14:40 - 15:00  05:40 - 06:00  08:40 - 09:00  Claudio Semini: “The IIT-INAIL Teleoperation project: Quadrupedal Tele-Manipulation in Hazardous Environments”
15:00 - 15:20  06:00 - 06:20  09:00 - 09:20  Avik De: “Shared autonomy and user interaction in commercial legged robots”
15:20 - 15:40  06:20 - 06:40  09:20 - 09:40  Ludovic Righetti: “Controlling legged robots over a 5G wireless link”
15:40 - 16:10  06:40 - 07:10  09:40 - 10:10  Live Demo (Europe)
16:10 - 16:40  07:10 - 07:40  10:10 - 10:40  Break
16:40 - 17:00  07:40 - 08:00  10:40 - 11:00  ANA Avatar XPRIZE: “The ANA Avatar XPRIZE: Inspiring creators, inventors and futurists to design and construct the future of robotic avatars”
17:00 - 17:30  08:00 - 08:30  11:00 - 11:30  Live Demo (America)
17:30 - 17:50  08:30 - 08:50  11:30 - 11:50  Jerry Pratt: “Towards humanoid teleoperation of multi-contact maneuvers in constrained spaces”
17:50 - 18:10  08:50 - 09:10  11:50 - 12:10  Robin Murphy: “Two robots are better than 1: successful teleoperation in extreme environments”
18:10 - 19:10  09:10 - 10:10  12:10 - 13:10  Panel discussion 2
19:10 - 19:15  10:10 - 10:15  13:10 - 13:15  Final words

Back to the outline


Invited speakers


ANA Avatar XPRIZE

bio & abstract. The ANA Avatar XPRIZE was launched in March 2018, and is sponsored by All Nippon Airways (ANA). It seeks to incentivize Teams to integrate a range of diverse, cutting-edge technologies to create a physical robotic Avatar System that will transport an operator’s senses, actions and presence to a remote location in real time.

This challenge will extend the capabilities of robots by integrating a broad range of technologies that can increase the application of robotics in the future. The winner of this XPRIZE will demonstrate a functional Avatar System, which consists of a human operator controlling a robotic Avatar at a real and/or Simulated Distance that allows the operator to interact with other humans or the environment, receiving sensory information from the robotic Avatar. The ultimate goal is for a person to feel as if they are truly where the Avatar is, experiencing a sense of Presence through the Avatar.

XPRIZE will present the goals of this competition and invite feedback and discussion from workshop attendees.

Presenters.


Halodi Robotics


Susumu Tachi

bio. Susumu Tachi is Professor Emeritus of The University of Tokyo and the Founding President of the Virtual Reality Society of Japan. Dr. Tachi received his Ph.D. degree from The University of Tokyo in 1973 and joined the Faculty of Engineering of The University of Tokyo. In 1975, he moved to the Mechanical Engineering Laboratory, MITI, where he served as Director of the Telerobotics Division. From 1979 to 1980, Dr. Tachi was a Japanese Government Award Senior Visiting Scientist at the Massachusetts Institute of Technology. In 1989, he rejoined The University of Tokyo, where he served as Professor at the Department of Information Physics and Computing until March 2009. He also served as Professor and Director of the International Virtual Reality Center at Keio University from April 2009 until March 2015. One of his scientific achievements was the invention of the Guide Dog Robot in 1975, an intelligent mobile robot system for the blind; this system was the first of its kind and came to be known as MELDOG. In 1980, Dr. Tachi invented the concept of Telexistence, which enables a highly realistic sensation of existence in a remote place without any actual travel, and he has been working on its realization ever since. Other achievements include Haptic Primary Colors, Optical Camouflage, and autostereoscopic VR displays such as TWISTER, Repro3D, and HaptoMIRAGE. He was the recipient of the 2007 IEEE Virtual Reality Career Award.

Telexistence: From Concept to TELESAR VI and Beyond

abstract. Telexistence is a human-empowerment concept that enables a human in one location to virtually exist in another location and to act freely there. The term also refers to the system of science and technology that enables realization of the concept. Telexistence was conceptualized in 1980, and its feasibility has been demonstrated through the construction of alter-ego robot systems called Telexistence Surrogate Anthropomorphic Robot (TELESAR) I–VI. Recently, as start-ups specializing in telexistence have been established, momentum has built toward the industrialization of telexistence and its application to a variety of industrial fields. Adding to this trend, the XPRIZE Foundation has launched the ANA Avatar XPRIZE as its next challenge. Intense worldwide competition within the emerging telexistence industry, which integrates AI, robotics, VR, and networking, has just begun. In this keynote, I will give an overview of 40 years of research on telexistence and explain its cutting-edge technologies. We will also discuss several problems that must be solved to attain a telexistence society and look ahead to the future that telexistence will create.


Claudio Semini

bio. Dr. Claudio Semini (MSc 2005, PhD 2010) is the head of the Dynamic Legged Systems (DLS) lab at the Istituto Italiano di Tecnologia (IIT), which developed a number of high-performance hydraulic robots, including HyQ, HyQ2Max, and HyQReal. He holds an MSc degree from ETH Zurich in electrical engineering and information technology. He spent two years in Tokyo for his research: an MSc thesis at the Hirose Lab at Tokyo Tech and a staff engineer position at the Toshiba R&D center in Kawasaki working on mobile service robotics. During his PhD and subsequent PostDoc at IIT, he developed the quadruped robot HyQ and worked on its control. Since 2012 he has led the DLS lab. Claudio Semini is the author or co-author of more than 100 peer-reviewed publications in international journals and conferences. He is also a co-founder of the Technical Committee on Mechanisms and Design of the IEEE-RAS Society. He is or has been the coordinator of, or a partner in, several EU, national, and industrial projects (including HyQ-REAL, INAIL Teleop, and the Moog@IIT joint lab). His research interests include the construction and control of highly dynamic and versatile legged robots for field application in real-world operations, locomotion, hydraulic drives, and others.


Ludovic Righetti

bio. Ludovic Righetti is an Associate Professor in the Electrical and Computer Engineering Department and in the Mechanical and Aerospace Engineering Department at the Tandon School of Engineering of New York University and a Senior Researcher at the Max-Planck Institute for Intelligent Systems in Germany. He holds an engineering diploma in Computer Science and a Doctorate in Science from the Ecole Polytechnique Fédérale de Lausanne, Switzerland. Prior to joining NYU, he was a postdoctoral fellow at the University of Southern California and a research group leader at the Max-Planck Institute. His research focuses on the planning, learning and control of movements for autonomous robots, with a special emphasis on legged locomotion and manipulation.

abstract. The 5th-generation (5G) wireless technology offers unprecedented access to high data bandwidth at low latencies (down to the millisecond), with a unique opportunity to offload real-time action-perception loops from the robot's local computer to the network edge. In this presentation, I will discuss these opportunities in the context of legged robots and the challenges that need to be addressed to truly benefit from this technology. I will present our recent work aiming to move the computationally intensive optimization involved in legged locomotion controllers to the network edge while ensuring robustness of operation in case of communication loss. I will also discuss our ongoing work on the perception of wireless signals and how this can affect planning and control.


Ryo Kikuuwe

bio. Ryo Kikuuwe received his B.S., M.S., and Ph.D.(Eng.) degrees in mechanical engineering from Kyoto University, Kyoto, Japan, in 1998, 2000, and 2003, respectively. From 2003 to 2007, he was an Endowed-Chair Research Associate at Nagoya Institute of Technology, Japan. From 2007 to 2017, he was an Associate Professor at Kyushu University, Japan. From 2014 to 2015, he was a Visiting Researcher at Institut National de Recherche en Informatique et en Automatique (INRIA) Grenoble Rhône-Alpes, France. He is currently a Professor at Hiroshima University, Japan. His research interests include force control of robot manipulators, real-time simulation for physics-based animation, and engineering applications of nonsmooth system theory.

An interactive-simulator-based study on the teleoperation of bipedal robots through inexpensive haptic devices

abstract. This talk introduces our ongoing work on teleoperated bipedal robots. The study has been conducted solely with real-time, interactive simulators, each of which is combined with a pair of inexpensive haptic devices, Novint Falcons. We use the haptic devices to command the swing foot position relative to the supporting foot position; the operator manipulates the robot’s feet at every single step. This teleoperation scheme would be suited for heavily uncertain terrains, which cannot be handled by automatic or semi-automatic gait planners. The haptic devices used in our setup are easy to use, non-restraining, and inexpensive. The devices unilaterally send commands to the robot, so the system is free from feedback instability issues. This talk overviews our controller structure, which allows for the operator’s maneuvers while maintaining balance, and also our interactive simulator structure, which is fully penalty-based.


Avik De

bio. Avik is co-founder and CTO of Ghost Robotics, a startup company in Philadelphia commercializing legged robots. The company currently has two products: a 45 kg quadruped aimed at industrial applications and a 12 kg quadruped aimed at research applications. Previously, Avik completed a postdoc at Harvard SEAS advised by Rob Wood, where he researched the design and control of micro-scale flapping robots. He received his PhD in September 2017 from the GRASP Laboratory (Kodlab) at the University of Pennsylvania, advised by Dan Koditschek. The main thread tying together all of his work has been bio-inspired design and control strongly anchored in empirical robotics. His research has focused on examining the strengths and weaknesses of modular and hierarchical control strategies, as well as demonstrating efficient and effective control of dynamic locomotion in a way that generalizes across platforms (quadruped, tailed biped, …) and behaviors (hopping, running, …).

Shared autonomy and user interaction in commercial legged robots

abstract. As we begin using quadruped robots for automation, we must cautiously add autonomy in incremental steps, in order to maintain reliability and user confidence. In this talk, I will discuss different current ways in which Ghost Robotics’ customers prefer to interface with our legged robots, ranging from teleoperation to autonomous missions. In the former case, there are challenges in presenting the many different locomotion gaits and environment-specific settings to the user, while in the latter case, reliably selecting the best option may present a challenge to the available perception systems.


Yohei Kakiuchi


Jerry Pratt

bio. Jerry Pratt (Ph.D., M.Eng., and B.S. degrees in Computer Science and a B.S. degree in Mechanical Engineering, all from M.I.T.) leads a research group at IHMC that concentrates on the understanding and modeling of human gait and the application of that understanding in the fields of robotics, human assistive devices, and man-machine interfaces. Current projects include Humanoid Avatar Robots for Co-Exploration of Hazardous Environments, the FastRunner Robot, and Exoskeletons for Restoration of Gait in Paralyzed Individuals. Jerry was the team lead for Team IHMC in the DARPA Robotics Challenge (DRC) project. In 2015 IHMC won second place in the DRC Finals; in 2013 Team IHMC achieved first place in the Virtual Robotics Challenge and second place in the DARPA Robotics Challenge Trials. Before coming to IHMC, Jerry was the President of Yobotics, Inc., a small company that he co-founded in 2000. At Yobotics, Jerry helped develop the RoboKnee, a powered exoskeleton that allowed one to carry large loads while hiking over rough terrain with little effort. Prior to founding Yobotics, Jerry worked at the M.I.T. Leg Laboratory, where he designed, built, and controlled several bipedal robots. His approach of maximizing speed, agility, and biological similarity through the understanding of biological counterparts is helping to remove the stereotype of robots as clunky, jerky-moving machines.


Jun Ho Oh


Robin Murphy

bio. Dr. Robin R. Murphy is the Raytheon Professor of Computer Science and Engineering at Texas A&M University, a TED speaker, and an IEEE and ACM Fellow. She helped create the fields of disaster robotics and human-robot interaction, deploying robots to 29 disasters in five countries including the 9/11 World Trade Center, Fukushima, the Syrian boat refugee crisis, Hurricane Harvey, and the Kilauea volcanic eruption. Murphy’s contributions to robotics have been recognized with the ACM Eugene L. Lawler Award for Humanitarian Contributions, a US Air Force Exemplary Civilian Service Award medal, the AUVSI Foundation’s Al Aube Award, and the Motohiro Kisoi Award for Rescue Engineering Education (Japan). She has written the best-selling textbook Introduction to AI Robotics (2nd edition 2019) and the award-winning Disaster Robotics (2014), and serves as an editor for the science fiction/science fact focus series of the journal Science Robotics.

Two robots are better than 1: successful teleoperation in extreme environments

abstract. Tremendous advances have been made in practical climbing and walking robots, and these robots are ripe to revolutionize disaster robotics. But during a disaster, responders remain engaged with the robots so that they can perceive in real time through the robots’ sensors and opportunistically adapt mission strategies. This talk will describe mission and traversability challenges for ground robots at disasters such as earthquakes, building collapses, landslides, mine explosions, or burned areas. It will also describe two impacts on the human operators: impaired sensemaking abilities and fatigue. One common solution to impaired sensemaking is to use two robots: one to perform the task, the other to provide an external view. Ground-ground combinations appear to be the most frequent and were especially useful at the Fukushima Daiichi nuclear accident, but ground-air and marine combinations are possible as well. Our recent research provides a formal model of where to place the secondary robot, based on a study conducted with 31 bomb squad operators. The talk will be highlighted with videos and examples from disasters all over the world and the use of a second robot to help the first.


Marco Hutter


Michael Mistry


Abderrahmane Kheddar

bio. Abderrahmane Kheddar received the BSCS degree from the Institut National d’Informatique, Algiers, and the MSc and PhD degrees in robotics, both from the University of Paris 6. He is presently with CNRS and is the Codirector of the CNRS-AIST Joint Robotic Laboratory, Tsukuba, Japan. His research interests include haptics, humanoids, and thought-based control using brain-machine interfaces. He is a founding member of the IEEE/RAS chapter on haptics and the co-chair and founding member of the IEEE/RAS Technical Committee on Model-Based Optimization. He is a member of the steering committee, in charge of the IEEE RAS liaison, of the IEEE Brain Initiative. He was an Editor of the IEEE Transactions on Robotics (2013-2018), has been an Editor of IEEE Robotics and Automation Letters (since September 2019), and serves on the editorial boards of other robotics journals; he was a founding member of the editorial board of the IEEE Transactions on Haptics (2007-2010). He is an IEEE Senior Member, a titular full member of the National Academy of Technology of France, and a Knight in the National Order of Merit of France.

Teleoperation, telepresence and embodiment of humanoid robots

abstract. Humanoid robots have gained noticeable maturity in terms of hardware and embedded software, but also in terms of complexity, versatility, and embedded sensors. Each year we witness the improvement of an existing series or the unveiling of new, more advanced ones. This talk focuses on problems inherent to operating humanoids through direct control by a human operator. We will focus on the distinction between teleoperation, telepresence, and embodiment, in terms of their requirements for human control and humanoid feedback schemes, and on what makes well-known bilateral coupling techniques difficult to apply to humanoid robots. We will also discuss the reasons why humanoid teleoperation has been gaining particular attention very recently and how seamless teleoperation, telepresence, or embodiment can be approached.


Back to the outline


Organizers

Kourosh Darvish, Joao Ramos, Rafael Cisneros, Luigi Penco, Eiichi Yoshida, Serena Ivaldi, Daniele Pucci


Kourosh Darvish

bio. Kourosh Darvish received his B.Sc. and M.Sc. degrees in Aerospace Engineering from K.N. Toosi University of Technology and Sharif University of Technology (Tehran, Iran), in 2012 and 2014, respectively. He received his PhD in Bioengineering & Robotics from the University of Genoa, Italy, in 2019. During his PhD, he developed a hierarchical architecture for flexible human-robot cooperation, specifically for factory and workshop environments. Since November 2018, he has been working as a postdoc at the Italian Institute of Technology (IIT) in the Dynamic Interaction Control (DIC) Lab, collaborating on the H2020 European project AnDy. His research interests include Human-Robot Collaboration, Telexistence, and Manipulation.


Joao Ramos

bio. Joao Ramos is the director of the RoboDesign Lab and an Assistant Professor at the University of Illinois at Urbana-Champaign. He previously worked as a Postdoctoral Associate at the Biomimetic Robotics Laboratory at the Massachusetts Institute of Technology. He received his PhD from the Department of Mechanical Engineering at MIT in 2018. During his doctoral research, he developed teleoperation systems and strategies to dynamically control a humanoid robot using human whole-body motion via bilateral feedback. His research focuses on the design and control of robotic systems for dynamic manipulation and agile locomotion, such as the HERMES humanoid, a prototype platform for disaster response. In addition, his research interests include Human-Machine Interfaces, legged locomotion dynamics, mechanism design, and actuation systems.


Rafael Cisneros

bio. Rafael Cisneros received the B.Eng. degree from the University of the Americas - Puebla (UDLA-P), Mexico, in 2006, the M.Sc. degree from the Center of Research and Advanced Studies of the National Polytechnic Institute (CINVESTAV-IPN), Mexico, in 2009, and the Ph.D. degree from the University of Tsukuba, Japan, in 2015. Since then, he has been working at the National Institute of Advanced Industrial Science and Technology (AIST), Japan: from 2015 to 2018 as a post-doc, and since 2018 as a researcher. He is currently a member of CNRS-AIST JRL (Joint Robotics Laboratory), IRL, AIST. His research interests include torque control, whole-body multi-contact motion control of humanoid robots, multi-body collision dynamics, teleoperation, and tactile feedback.


Luigi Penco

bio. Luigi Penco is a PhD student at Inria Nancy, France. He received his M.Sc. degree in Artificial Intelligence and Robotics from La Sapienza University of Rome in 2018, and his B.Sc. degree in Electronics Engineering from Roma Tre University in 2015. During his doctoral research in the team Larsen at Inria, he has contributed to the EU H2020 AnDy project on human-robot collaboration. His research is focused on humanoid robotics, with a particular interest in teleoperation and machine learning techniques used to improve the control and interaction skills of robots.


Eiichi Yoshida

bio. Eiichi Yoshida received M.E. and Ph.D. degrees in Precision Machinery Engineering from the Graduate School of Engineering, The University of Tokyo, in 1993 and 1996, respectively. In 1996 he joined the former Mechanical Engineering Laboratory, later reorganized into the National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan. He served as Co-Director of the AIST/IS-CNRS/ST2I Joint French-Japanese Robotics Laboratory (JRL) at LAAS-CNRS, Toulouse, France, from 2004 to 2008. Since 2009, he has served as Co-Director of CNRS-AIST JRL (Joint Robotics Laboratory), IRL, at Tsukuba, Japan, and has been its Director since 2017. He also served as Deputy Director of the Intelligent Systems Research Institute (IS-AIST) from 2015 to 2018 and as Director of the Planning Office, Department of Information Technology and Human Factors, from 2018 to 2020, at AIST. He is currently Deputy Director of the Industrial Cyber-Physical Systems Research Center and of the TICO-AIST Cooperative Research Laboratory for Advanced Logistics at AIST. His research interests include robot task and motion planning, human modeling, humanoid robots, and advanced logistics technology.


Serena Ivaldi

bio. Serena Ivaldi is a tenured research scientist at Inria, leading the humanoid and human-robot interaction activities of Team Larsen at Inria Nancy, France. She earned her Ph.D. in Humanoid Technologies in 2011 at the Italian Institute of Technology. Prior to joining Inria, she was a post-doctoral researcher at UPMC in Paris, France, and then at the University of Darmstadt, Germany. She has been PI of several European and national projects concerning humanoids, teleoperation, and human-robot collaboration, in particular the EU projects CoDyCo (FP7) and AnDy (H2020), where she developed multimodal teleoperation for the humanoid robot iCub. Her research focuses on humanoid robotics, human-robot collaboration, and human-centered technologies. She is interested in using machine learning techniques to improve the control, prediction, and interaction skills of robots. She is also interested in user evaluation, i.e., having potential end-users evaluate robotic technologies to improve explainability, usability, trust, and acceptance.


Daniele Pucci

bio. Daniele Pucci received the bachelor's and master's degrees in Control Engineering with highest honors from “Sapienza” University of Rome, in 2007 and 2009, respectively. In 2013 he received the PhD in Information and Communication Technologies from the University of Nice Sophia Antipolis, France, and in Control Engineering from “Sapienza” University of Rome, Italy. Since then, Daniele has been a post-doctoral fellow at the Italian Institute of Technology in the Dynamic Interaction Control laboratory. Within the CoDyCo project (FP7-ICT-2011.2.1, No. 600716), he was the principal scientific contributor and developed innovative control techniques for whole-body motion control with tactile and force/torque sensing. Daniele's research interests include the control of nonlinear systems and its applications to aerial vehicles and robotics.

Back to the outline


Call for Papers and Contributions

We accept two types of contributions to be presented at the workshop: online demos and extended abstracts.

Extended Abstracts: We welcome prospective participants of the workshop to submit extended abstracts (up to 4 pages) to be presented at the workshop. Manuscripts should use the IEEE two-column conference format. Accepted abstracts will be posted on the workshop website and will not appear in the official IEEE proceedings.

Online Demos: The demos will be performed online during the workshop. The length of each accepted demo is flexible, between 10 and 20 minutes including an interactive discussion. Each demo submission should include a video media file and a description of the demo. A rehearsal session (before the workshop date) will be dedicated to each accepted demo, in order to check that it is ready for online presentation.

We encourage researchers as well as companies (both hardware and software companies) to contribute to the workshop. Reviewing is single-blind and will be carried out by the workshop organizers. Papers and demos will be selected based on their originality, relevance to the workshop topics, contributions, technical clarity, and presentation.

How to contribute to the workshop

To submit a manuscript or demo, please send an email to kourosh.darvish@iit.it, jlramos@illinois.edu, rafael.cisneros@aist.go.jp, and luigi.penco@inria.fr with the subject “ICRA 2021 Teleoperation Workshop Contribution”.

Topics of interest include, but are not limited to:

Back to the outline

Acknowledgements

This workshop is supported by the EU AnDy Project and the EU SoftManBot Project.