13th International Conference on Computer Vision Systems

-, 2021, Virtual Conference


Keynote Speakers

Prof. Peter Corke, Queensland University of Technology


Robotic hand-eye coordination

Hand-eye coordination is an under-appreciated human superpower. This talk will cover the robot equivalent, robot hand-camera coordination, where computer vision meets robotic manipulation. This robotic skill is needed wherever the robot’s workpiece is not precisely located, is moving, or the robot itself is moving. The talk will motivate the problem and review recent progress.

Bio

Peter Corke is Distinguished Professor of Robotic Vision at Queensland University of Technology, director of the ARC Centre of Excellence for Robotic Vision, director of the QUT Centre for Robotics, and Chief Scientist of Dorabot. His research is concerned with enabling robots to see, and with the application of robots to mining, agriculture and environmental monitoring. He is well known for his Robotics Toolbox software for MATLAB, the best-selling textbook "Robotics, Vision and Control", his MOOCs and the online Robot Academy. He trained in electrical and mechanical engineering at the University of Melbourne.

Dr. Donald G. Dansereau, University of Sydney


Robotic Imaging: From Photons to Actions

We are on the cusp of a robotics revolution that will transform how we work and live. Manufacturing, health, and service robots, including autonomous cars and drones, are set to profoundly impact our lives, and visual sensing will play a pivotal role in this transformation. However, deep challenges remain in achieving robotic autonomy through visual perception.

This talk explores recent developments in the emerging field of Robotic Imaging, bringing together optics, algorithms, and robotic embodiment to allow robots to see and do. I will touch on newly developed visual sensors, active perception schemes, and how we might automatically integrate new capabilities into robotic systems. This work promises to allow robots to work in new ways and over a broader range of conditions. The talk concludes with a perspective on key challenges and opportunities in robotic imaging.

Bio

Dr Donald Dansereau is a continuing academic at the Sydney Institute for Robotics and Intelligent Systems at the University of Sydney. His work explores how new approaches to visual sensing can help robots see and do, encompassing the design, fabrication, and deployment of novel imaging technologies. In 2004 he completed an MSc at the University of Calgary, receiving the Governor General’s Gold Medal for his pioneering work in light field processing. In 2014 he completed a PhD on underwater robotic vision at the Australian Centre for Field Robotics, followed by postdoctoral appointments at QUT and Stanford University. Dr Dansereau authored the widely used Light Field Toolbox, and his field work includes marine archaeology on a Bronze Age city in Greece, hydrothermal vent mapping in the Sea of Crete, habitat monitoring off the coast of Tasmania, and wreck exploration in Lake Geneva.

Prof. Renaud Detry, KULeuven


Autonomous Robot Manipulation for Planetary and Terrestrial Applications

In this talk, I will discuss the experimental validation of autonomous robot behaviors that support the exploration of Mars' surface, lava tubes on Mars and the Moon, icy bodies and ocean worlds, and operations on orbit around the Earth. I will frame the presentation with the following questions: What new insights or limitations arise when applying algorithms to real-world data as opposed to benchmark datasets or simulations? How can we address the limitations of real-world environments—e.g., noisy or sparse data, non-i.i.d. sampling, etc.? What challenges exist at the frontiers of robotic exploration of unstructured and extreme environments? I will discuss our approach to validating autonomous machine-vision capabilities for the notional Mars Sample Return campaign, for autonomously navigating lava tubes, and for autonomously assembling modular structures on orbit. The talk will highlight the thought process that drove the decomposition of a validation need into a collection of tests conducted on off-the-shelf datasets, custom/application-specific datasets, and simulated or physical robot hardware, where each test addressed a different range of experimental parameters for sensing/actuation fidelity, breadth of environmental conditions, and breadth of jointly-tested robot functions.

Bio

Renaud Detry is a Professor of Embodied Learning at UCLouvain & KULeuven, Belgium. His research interests are in perception and learning for robot manipulation and mobility, for terrestrial and planetary applications. Detry was previously a research scientist and the group lead for Perception Systems at NASA's Jet Propulsion Laboratory (JPL, 2016-2021). At JPL, Detry was the machine-vision lead for the Mars Sample Return campaign, and conducted research in autonomous robot mobility, navigation, sampling, and climbing for the exploration of Mars, Europa, Enceladus, and the Moon. Prior to JPL, Detry was a postdoc at KTH in Stockholm and at ULiege (2010-2016), supported by starting grants from the Belgian FNRS and the Swedish VR; this work focused on developing robot agents that learn to predict object graspability from visual data. Detry earned his Master's and Ph.D. degrees in computer engineering and robot learning from ULiege in 2006 and 2010, respectively.

Dr. Germain Haessig, Austrian Institute of Technology


Neuromorphic computation and sensing: a paradigm shift that enables efficient computer vision?

Event-driven sensors (silicon retinas or cochleas, bio-inspired tactile sensors, ...) are a new kind of sensor built on a paradigm shift in data representation. Used in an ingenious manner, this representation offers high dynamic range, precise temporal resolution, and sensor-level data compression. During this talk, I will demonstrate how one can leverage this efficiency with simple yet elegant approaches, through multiple practical examples ranging from optical flow to active depth perception, high-speed tracking, and low-latency fiducial marker extraction. This work promises to be particularly well suited to closed-loop systems, bridging the gap in the perception-action loop.

Bio

Germain Haessig is a Scientist at the Austrian Institute of Technology, applying his expertise in event-based sensors to robotic applications. His work includes exploiting the high temporal resolution of event-based sensors to perform low-latency visual servoing tasks.

After receiving his M.Sc. degree in advanced systems and robotics from University Pierre and Marie Curie/Ecole Normale Superieure de Cachan (2015), he completed a PhD on neurally inspired hardware systems and sensors at the Vision Institute, Sorbonne Université, Paris (2018). He was also a postdoctoral researcher at the Institute of Neuroinformatics, UZH/ETH Zürich, pursuing research on neurally inspired event-based computation.