asuagar

joined 1 year ago
[–] asuagar 1 points 1 year ago* (last edited 1 year ago)

Please don't apologize; it was just a misunderstanding. You can find the creators on X. I have added the source in the description. Best regards

[–] asuagar 1 points 1 year ago (2 children)

Are you asking if I am a bot?

5
Extreme Parkour with Legged Robots (extreme-parkour.github.io)
submitted 1 year ago* (last edited 1 year ago) by asuagar to c/[email protected]
 

... In this paper, we take a similar approach to developing robot parkour on a small, low-cost robot with imprecise actuation and a single front-facing depth camera for perception, which is low-frequency, jittery, and prone to artifacts. We show how a single neural-net policy operating directly on a camera image, trained in simulation with large-scale RL, can overcome imprecise sensing and actuation to output highly precise control behavior end-to-end. We show that our robot can perform a high jump onto obstacles 2x its height, a long jump across gaps 2x its length, do a handstand, run across tilted ramps, and generalize to novel obstacle courses with different physical properties.
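The end-to-end setup described in the abstract — one policy mapping a raw depth image straight to joint commands — can be sketched roughly as follows. This is a toy stand-in, not the authors' architecture: the `DepthPolicy` name, layer sizes, and random weights are all illustrative assumptions (the real network is trained with large-scale RL in simulation).

```python
import numpy as np

class DepthPolicy:
    """Toy sketch of an end-to-end policy: depth image in, joint targets out."""

    def __init__(self, img_shape=(64, 64), n_joints=12, hidden=128, seed=0):
        rng = np.random.default_rng(seed)
        n_in = img_shape[0] * img_shape[1]
        # Random (untrained) weights; RL training would shape these.
        self.w1 = rng.normal(0.0, 0.01, (n_in, hidden))
        self.w2 = rng.normal(0.0, 0.01, (hidden, n_joints))

    def act(self, depth_image):
        # Flatten the (noisy, jittery) depth frame and run it through a tiny MLP.
        x = depth_image.reshape(-1)
        h = np.tanh(x @ self.w1)      # hidden layer
        return np.tanh(h @ self.w2)   # one bounded target per joint, in [-1, 1]

policy = DepthPolicy()
frame = np.random.default_rng(1).uniform(0.0, 3.0, (64, 64))  # fake depth frame
action = policy.act(frame)
print(action.shape)  # one command per joint
```

The point of the sketch is only the interface: no hand-designed state estimator sits between the camera and the actuators.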

 

... while the field of robotics is still under development (it is an active research area), the basic principles of robot design (modeling, perception, planning, and control) are well understood. In Modern Robotics I, we will use both theory and practice to learn these basics specifically for arm-type manipulators. You will have the opportunity to work with a real robotic arm controlled by the Robot Operating System (ROS) and learn about these topics through hands-on experience.
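The "modeling" part of those basics can be illustrated with the forward kinematics of a two-link planar arm — a textbook toy example, not material from the course itself (link lengths here are arbitrary):

```python
import numpy as np

def fk_2link(theta1, theta2, l1=1.0, l2=0.8):
    """Forward kinematics of a 2-link planar arm.

    Maps joint angles (radians) to the end-effector position (x, y),
    by summing the contribution of each link.
    """
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

# Arm fully stretched along the x-axis:
print(fk_2link(0.0, 0.0))  # -> (1.8, 0.0)
```

Inverse kinematics, planning, and control build on exactly this kind of model, which is why manipulator courses start from it.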

 

Welcome to the online hub for the book Robotics, Vision & Control: Fundamental Algorithms in Python (3rd edition) by Peter Corke, published by Springer Nature in 2023.

Jupyter Notebooks link

 

This work proposes an autonomous robot system for precise pouring of various liquids into transparent containers. The approach leverages RGB input and pre-trained vision models for zero-shot capability, eliminating the need for additional data or manual annotations. It also integrates ChatGPT for user-friendly interaction, enabling pouring requests in plain language. Experiments demonstrate that the system can pour various beverages into containers from visual input alone.
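A hedged sketch of the kind of closed loop such a pouring system runs. Everything here is hypothetical: `estimate_fill_level` is a stub standing in for the pre-trained vision models, and the flow and container parameters are made up for illustration.

```python
def estimate_fill_level(poured_ml, container_ml):
    # Stub for the perception step: in the paper this estimate comes from
    # RGB input and pre-trained vision models, zero-shot. Here we fake it.
    return min(poured_ml / container_ml, 1.0)

def pour(target_level, container_ml=250.0, flow_ml_per_step=10.0, max_steps=100):
    """Pour until the (visually estimated) fill level reaches the target."""
    poured = 0.0
    for _ in range(max_steps):
        if estimate_fill_level(poured, container_ml) >= target_level:
            break  # container looks full enough; tilt the vessel back
        poured += flow_ml_per_step  # keep pouring for one more control step
    return poured

print(pour(0.8))  # stops at 200.0 ml, i.e. 80% of a 250 ml container
```

The interesting part of the real system is that the feedback signal comes from a camera looking at a transparent container, not from a scale or flow sensor.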

 

A dev robot for exploring ROS 2 and robotics using the Raspberry Pi Pico and Raspberry Pi 4.
