Robotics Startup Pitch Competition | Summary and Q&A

15.1K views · July 27, 2017 · by TechCrunch

TL;DR

Researchers have developed new algorithms to enable robots to predict human motion and learn manipulation tasks through demonstrations, paving the way for improved collaboration between humans and robots in industrial settings.

Key Insights

  • 🤖 Motion prediction algorithms can greatly enhance the ability of robots to collaborate with humans by anticipating their movements, particularly in industrial settings.
  • 👻 Learning from demonstrations allows robots to acquire manipulation skills from humans without the need for manual programming, enabling faster knowledge transfer and adaptation.
  • 🦺 Integration of safety measures is crucial to ensure the reliability and safety of human-robot collaboration, especially when mistakes are made.

Transcript

So what we're doing now is a pitch-off, which is kind of a slimmed-down version of that, but we're slimming it down again, because this time we're focusing strictly on the technology. We've got three company/projects that are going to present and four judges that are going to assess those projects. So with that, I'm going to introduce our very esteemed...

Questions & Answers

Q: How accurate is the motion prediction algorithm in predicting human movement?

The algorithm can predict the target of a reaching motion with around 70% accuracy. However, if targets are grouped into larger regions rather than individual points, accuracy can approach 100%, depending on the specific task.
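As a concrete illustration of this trade-off, here is a minimal Python sketch (all names and numbers below are hypothetical, not from the talk) showing how grouping fine-grained targets into coarser regions raises prediction confidence:

```python
# Hypothetical sketch: trading target resolution for prediction accuracy.
# Fine-grained targets (e.g., individual bins) are grouped into coarser
# regions; a prediction that is only ~70% reliable at the bin level can
# become near-certain at the region level.

FINE_TO_REGION = {
    "bin_1": "left_shelf", "bin_2": "left_shelf",
    "bin_3": "right_shelf", "bin_4": "right_shelf",
}

def region_probabilities(fine_probs):
    """Aggregate per-bin probabilities into per-region probabilities."""
    regions = {}
    for target, p in fine_probs.items():
        region = FINE_TO_REGION[target]
        regions[region] = regions.get(region, 0.0) + p
    return regions

# A 70%-confident bin prediction becomes a 95%-confident region prediction
# when the runner-up bins fall in the same region.
fine = {"bin_1": 0.70, "bin_2": 0.25, "bin_3": 0.03, "bin_4": 0.02}
print(region_probabilities(fine))  # {'left_shelf': 0.95, 'right_shelf': 0.05}
```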

Q: In which industry do the researchers believe this technology will be most beneficial initially?

The researchers suggest that car manufacturing plants could be an ideal initial target for this technology. For example, it could be used to assist in tasks such as part installation on car dashboards.

Q: Is there a need for 3D models of objects in the constraint learning algorithm?

Yes, the algorithm is model-based, so it requires knowledge of geometric constraints for specific objects. However, with advanced perception capabilities, it may be possible to generalize these constraints without the need for explicit 3D models.

Q: How does the algorithm work with robots that have continuous joints?

The algorithm operates in end-effector space, focusing on how the robot's hand must move relative to the objects being manipulated. Nonlinearities in the robot's joints are abstracted away, under the assumption of quasi-static settings and rigid objects.
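A minimal sketch of what "operating in end-effector space" can look like, using homogeneous-transform composition with NumPy; the frames and offsets are illustrative assumptions, not details from the talk:

```python
import numpy as np

# Hypothetical sketch: constraints expressed in end-effector space.
# A desired hand pose is stored relative to the manipulated object's frame,
# so the same constraint applies regardless of the robot's joint structure.

def pose_in_world(T_world_object, T_object_hand):
    """Compose the object's world pose with the hand-relative constraint."""
    return T_world_object @ T_object_hand

# 4x4 homogeneous transforms: the object sits 0.5 m in front of the robot,
# and the constraint says "hand 10 cm above the object".
T_world_object = np.eye(4); T_world_object[:3, 3] = [0.5, 0.0, 0.0]
T_object_hand = np.eye(4); T_object_hand[:3, 3] = [0.0, 0.0, 0.10]

print(pose_in_world(T_world_object, T_object_hand)[:3, 3])  # [0.5 0.  0.1]
```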

Q: Is the motion prediction algorithm a standalone solution or part of a larger system?

The motion prediction algorithm is intended to be part of a larger hierarchical system. Multiple predictors may be utilized, and safety mechanisms at the actuation and perception levels will be necessary to ensure reliable collaboration.

Summary

This video showcases three different technology projects being presented in a pitch-off event. The first project presented is about using virtual reality and shape displays to make information tangible. The second project is about developing a solar-powered robot called Turtle that can weed gardens, providing a chemical-free approach to weed control. The third project focuses on enabling robots to predict human motion and learn manipulation tasks through demonstrations.

Questions & Answers

Q: What is the goal of the project presented by Daniel Fitzgerald?

The goal is to bring physical form to computation and make information tangible using virtual reality and shape displays. By combining physical and digital worlds, users can have a more immersive and interactive experience.

Q: How does the shape display technology work?

The shape display consists of an array of pins that can move up and down. By coordinating the movement of these pins, the display can recreate the shape of computer models. This technology has been used in various applications like physical telepresence, computer-aided design, medical imagery, and more.
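For intuition, here is a hypothetical sketch of how a pin array could be driven from a heightmap; the grid size, travel limit, and surface function are all assumptions, not details from the talk:

```python
import numpy as np

# Hypothetical sketch: driving a pin-array shape display from a heightmap.
# Each pin's extension is a sample of the surface z = f(x, y), clamped to
# the pins' mechanical travel.

PIN_GRID = (16, 16)   # assumed pin count per axis
MAX_TRAVEL = 0.05     # assumed pin travel in meters

def surface(x, y):
    """Example surface: a gentle dome."""
    return 0.05 * np.exp(-(x**2 + y**2) / 0.5)

xs = np.linspace(-1, 1, PIN_GRID[0])
ys = np.linspace(-1, 1, PIN_GRID[1])
X, Y = np.meshgrid(xs, ys)

pin_heights = np.clip(surface(X, Y), 0.0, MAX_TRAVEL)
# pin_heights[i, j] would then be streamed to pin (i, j)'s actuator.
```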

Q: What are the challenges with scaling the shape display technology?

The current shape display devices are big, bulky, and expensive. To have a room-scale experience with virtual reality and shape displays, a display as large as the room would be needed, which is impractical. The focus now is on making smaller displays that can intelligently move around the room to anticipate user interaction.

Q: Can the shape display technology provide a sense of touch?

Yes, the goal of the project is to make the virtual world tangible through the shape display technology. By positioning the display where the user is about to reach and rendering the virtual geometry before they touch it, the user can have a sense of touch in the virtual world.

Q: What industries can benefit from the shape display technology?

The shape display technology has applications across several fields, including computer-aided design, architectural planning, medical imaging, and visualization of mathematical functions. It gives designers and users physical control over, and interaction with, virtual objects.

Q: How close to scalability is the shape display technology?

While the current technology is not easily scalable due to its size and cost, the advantage of this approach is that only a small display is needed for each of the user's hands, so two robot-mounted displays suffice. This is far more feasible than filling an entire room with shape displays. However, further research and development are needed to produce scalable, cost-effective versions.

Q: Have they considered adding membranes to the shape display surface for smoother interaction?

Yes, there are various ways to smooth out the surface of the shape display and improve the interaction experience. One approach is using membranes to provide interpolation between pins. The goal is to find a good compromise between resolution and smoothness.
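A membrane effectively interpolates between discrete pins. The sketch below shows one plausible model of that smoothing, bilinear interpolation of the four surrounding pin heights (illustrative only, not the lab's actual method):

```python
import numpy as np

# Hypothetical sketch: what a membrane effectively does between pins.
# Bilinear interpolation of the four surrounding pin heights gives the
# smoothed surface height at an arbitrary point between pins.

def membrane_height(pin_heights, u, v):
    """Interpolate pin_heights at fractional grid coordinates (u, v)."""
    i, j = int(u), int(v)
    fu, fv = u - i, v - j
    h00, h01 = pin_heights[i, j], pin_heights[i, j + 1]
    h10, h11 = pin_heights[i + 1, j], pin_heights[i + 1, j + 1]
    return (h00 * (1 - fu) * (1 - fv) + h01 * (1 - fu) * fv
            + h10 * fu * (1 - fv) + h11 * fu * fv)

pins = np.array([[0.00, 0.02], [0.02, 0.04]])
print(membrane_height(pins, 0.5, 0.5))  # 0.02, midway between the four pins
```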

Q: How do they handle dynamic objects or interactions where the objects are moving?

The current system focuses on static scenes and interactions, but they do have plans to handle dynamic objects and interactions as well. The idea is to add more degrees of freedom to the system and create a more interactive and realistic experience. This can allow for objects to be moving while still being able to interact with them using the shape display.

Q: How does the synchronization between the virtual reality (VR) and shape display work?

The shape display acts as an interface to the VR world. Whatever is happening in the VR world is reflected in the shape display, and there is synchronization between them. The shape display positions itself where the user is about to reach in the VR world, rendering the virtual geometry before the user touches it.
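One plausible way to implement this anticipation is sketched below, assuming simple linear extrapolation of hand motion; the function, scene, and coordinates are hypothetical:

```python
import numpy as np

# Hypothetical sketch: anticipating the user's reach so a mobile shape
# display can arrive before the hand does. The reach target is estimated
# by extrapolating the hand's current velocity and picking the nearest
# virtual object along that ray.

def predict_reach_target(hand_pos, hand_vel, object_positions):
    """Return the virtual object closest to the extrapolated hand ray."""
    direction = hand_vel / (np.linalg.norm(hand_vel) + 1e-9)
    def distance_to_ray(p):
        to_obj = p - hand_pos
        along = max(np.dot(to_obj, direction), 0.0)
        return np.linalg.norm(to_obj - along * direction)
    return min(object_positions, key=distance_to_ray)

hand = np.array([0.0, 0.0, 1.0])
vel = np.array([1.0, 0.0, 0.0])          # hand moving along +x
objects = [np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])]
print(predict_reach_target(hand, vel, objects))  # [1. 0. 1.]
```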

Q: What is the potential killer app for the shape display technology?

The potential killer app for the technology includes gaming, where users can have physical interaction with virtual objects in VR games. Additionally, computer-aided design is another area where the technology can be highly beneficial, allowing designers to have more control and manipulation over their designs in a physical and tangible way.

Q: How far away is the commercialization of this technology?

The presenter is from a lab, not a commercial company, so they cannot provide insights into the commercialization timeline.

Q: What are the challenges with weeding in gardens?

Weeding can be time-consuming, frustrating, and physically demanding. As people get older, it becomes harder to bend over and pull out weeds. Traditional approaches like using chemicals or doing manual weeding are not ideal for the environment or personal health. There is a need for a better approach to weed control in gardens.

Q: What is Turtle, and what is its purpose?

Turtle is a solar-powered robot designed for outdoor gardens. It moves around the garden, avoiding obstacles and plants taller than an inch. Its purpose is to autonomously find and cut weeds using a built-in weed whacker. The robot's presence in the garden eliminates the need for manual weeding or using harmful chemicals.
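To make the behavior concrete, here is a hypothetical control-loop sketch; the sensor model, thresholds, and action names are illustrative assumptions, not details of the actual robot:

```python
import random

# Hypothetical sketch of a Turtle-style control loop, assuming a forward
# height sensor and a simple avoid-or-whack policy. Names and thresholds
# are illustrative, not from the actual robot.

HEIGHT_THRESHOLD = 0.025  # ~1 inch in meters: taller things are avoided

def sense_height_ahead():
    """Stand-in for the robot's forward height sensor."""
    return random.choice([0.0, 0.01, 0.05, 0.30])

def step():
    height = sense_height_ahead()
    if height > HEIGHT_THRESHOLD:
        return "turn_away"         # plant or obstacle: do not whack
    elif height > 0.0:
        return "run_weed_whacker"  # short growth: treat as a weed
    return "drive_forward"

for _ in range(5):
    print(step())
```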

Q: What are the advantages of using Turtle for weeding?

The advantages of using Turtle include chemical-free weed control, time efficiency, and reduced physical strain for gardeners. It allows individuals to have a healthy and productive garden without the hassle of manual weeding. Turtle can autonomously cut weed tops, leading to their eventual death.

Q: Can Turtle handle uneven ground or obstacles like rocks?

Turtle is designed to handle typical household gardens, including uneven ground and small obstacles like rocks. It is tested on different surfaces like soil and mulch to ensure mobility and performance. If an obstacle is tall enough, Turtle will bump into it and avoid weed whacking it.

Q: How does Turtle address the issue of weed whacker thread replacement?

Turtle doesn't have an automatic string mechanism for the weed whacker. The robot runs the weed whacker less frequently and at a slower speed compared to traditional weed whackers. This reduces the need for frequent string replacement. The runtime of the weed whacker is around 5-10 minutes a day, which helps prolong the string's lifespan.

Q: Are there other alternative methods for weed control in gardens?

Yes, there are alternative methods like using mulch or plastic to suppress weed growth. Mulching requires regular application, and plastic covers can be less visually appealing. Turtle offers a chemical-free approach to weed control, which is environmentally friendly and efficient for maintaining a healthy garden.

Q: Is the motion prediction algorithm's accuracy sufficient for real-world tasks?

The required accuracy depends on the task. For some tasks, 70% accuracy may not be enough; however, if the algorithm classifies motions into larger target regions rather than specific points, it can achieve close to 100% accuracy, which is sufficient for the robot to plan and execute collaborative tasks effectively.

Q: What industry or application would benefit the most from motion prediction and manipulation task learning?

The car manufacturing industry, particularly in the assembly process, could benefit from motion prediction and manipulation task learning. Tasks like installing parts on dashboards require human-robot collaboration. By predicting human motion and enabling robots to learn manipulation tasks through demonstrations, efficiency and productivity can be increased in car manufacturing plants.

Q: Can the prediction algorithms be used in real-time scenarios?

Yes, the prediction algorithm developed for motion prediction works in real-time. With a response time of around 400 milliseconds, the algorithm can predict human motions with about 70% accuracy. This enables the robot to anticipate and plan its actions accordingly.
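A minimal sketch of what such an online predictor could look like; score_fn is a hypothetical stand-in for a per-target scoring model, and the thresholds simply mirror the figures quoted above:

```python
import time

# Hypothetical sketch of an online target predictor: it watches a partial
# hand trajectory and commits to a prediction once one target's score
# clears a confidence threshold, within a fixed latency budget.

CONFIDENCE_THRESHOLD = 0.7   # mirrors the ~70% accuracy figure
LATENCY_BUDGET_S = 0.4       # mirrors the ~400 ms response time

def predict_online(trajectory_stream, score_fn):
    """Yield a target as soon as it is confident enough, or time out."""
    start = time.monotonic()
    observed = []
    best = None
    for sample in trajectory_stream:
        observed.append(sample)
        scores = score_fn(observed)              # {target: probability}
        best = max(scores, key=scores.get)
        if scores[best] >= CONFIDENCE_THRESHOLD:
            return best                          # confident early commit
        if time.monotonic() - start > LATENCY_BUDGET_S:
            return best                          # best guess at the deadline
    return best
```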

Q: How does the constraints learning algorithm teach robots to execute manipulation tasks?

The constraints learning algorithm uses data from human demonstrations to recognize keyframes and infer geometric constraints for manipulation tasks. The algorithm clusters the data to identify important moments in the manipulation process. These constraints are then used to plan and execute the desired manipulation tasks. It allows robots to learn tasks directly from demonstrations instead of relying on manual programming.
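As an illustration of the general idea, the sketch below extracts keyframes at low-speed moments of a demonstration and records hand-to-object offsets as candidate constraints; this is a simplified stand-in, not the published algorithm:

```python
import numpy as np

# Hypothetical sketch of keyframe extraction from a demonstration:
# moments where the demonstrator's hand nearly stops are treated as
# keyframes, and the hand-to-object offset at each keyframe becomes a
# candidate geometric constraint.

def extract_keyframes(hand_poses, object_pose, speed_eps=0.01):
    """Return hand-relative-to-object offsets at low-speed moments."""
    speeds = np.linalg.norm(np.diff(hand_poses, axis=0), axis=1)
    keyframes = []
    for t, speed in enumerate(speeds):
        if speed < speed_eps:
            keyframes.append(hand_poses[t] - object_pose)  # relative offset
    return keyframes

demo = np.array([[0.0, 0, 0], [0.1, 0, 0], [0.2, 0, 0],
                 [0.201, 0, 0], [0.202, 0, 0]])   # hand slows near the end
print(extract_keyframes(demo, object_pose=np.array([0.2, 0, 0])))
```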

Q: Can the learned constraints be transferred to robots with different physical characteristics?

Yes, the learned geometric constraints can be transferred to robots with different bodies and kinematics. The constraints are based on geometric relationships between objects and the robot's end-effector space, making it possible to transfer the knowledge to robots with different body structures. This enables the knowledge to be applied to various robot configurations, allowing for flexibility and adaptability.

Q: How does the algorithm account for safety and react to mistakes?

Safety measures are implemented at different levels of the hierarchical system: safety by actuation, where the robot can halt its own motion when a mistake is detected, and safety by perception, where the robot recognizes errors in the scene and stops. The goal is to ensure safe and efficient collaboration between humans and robots.
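A toy sketch of such layered checks; the check names and halting policy are illustrative assumptions, not the researchers' implementation:

```python
from enum import Enum, auto

# Hypothetical sketch of layered safety checks: a perception-level check
# (does the scene still match expectations?) and an actuation-level check
# (are forces/speeds within limits?), either of which halts the robot.

class Action(Enum):
    CONTINUE = auto()
    HALT = auto()

def safety_monitor(scene_matches_plan, force_within_limits):
    """Halt if perception flags a mismatch or actuation limits are hit."""
    if not scene_matches_plan:
        return Action.HALT      # safety by perception
    if not force_within_limits:
        return Action.HALT      # safety by actuation
    return Action.CONTINUE

print(safety_monitor(scene_matches_plan=True, force_within_limits=False))
# Action.HALT
```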

Q: How does the user communicate with the robot when a mistake is made?

In the current framework, the user would need to discover and communicate the mistake when the robot shows a visualization of the learned actions. Through a video-like user interface or 3D representation, the robot can present its knowledge and actions to the user. If the user identifies a mistake, they can provide feedback to the robot and stop the incorrect action.

Takeaways

The three technology projects presented in the video showcase advancements in virtual reality and shape displays, solar-powered weeding robots, and motion prediction and manipulation task learning for human-robot collaboration. These projects have the potential to revolutionize industries like gaming, garden maintenance, and manufacturing by providing more immersive experiences, efficient and environmentally-friendly weeding solutions, and improved collaboration between humans and robots. The key takeaway is that technological advancements continue to create exciting possibilities for enhancing human-machine interactions and finding innovative solutions to real-world challenges.

Summary & Key Takeaways

  • Researchers have developed a motion prediction algorithm that allows robots to better anticipate human movement, enabling them to be more effective collaborators in the factory setting.

  • Another algorithm, called C-LEARN, allows robots to learn multi-step manipulation tasks by observing human demonstrations and inferring geometric constraints.

  • These advancements have the potential to improve efficiency and productivity in manufacturing, particularly in tasks such as part installation on car dashboards.
