asuagar

joined 10 months ago
 

A dev robot for exploring ROS 2 and robotics using the Raspberry Pi Pico and Raspberry Pi 4.
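To give a feel for what "exploring ROS 2" on this kind of setup involves, here is a minimal rclpy sketch of the sort of node the Raspberry Pi 4 side might run. The node name, topic, and message content are illustrative assumptions, not taken from the project itself.

```python
# Minimal ROS 2 (rclpy) publisher node: a "hello world" heartbeat that a
# dev robot like this could run on the Raspberry Pi 4. The topic and
# message content are placeholders, not the project's actual interface.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class PicoBridge(Node):
    """Publishes a heartbeat that could later carry data read from the Pico."""

    def __init__(self):
        super().__init__('pico_bridge')
        self.pub = self.create_publisher(String, 'pico/status', 10)
        self.timer = self.create_timer(1.0, self.tick)  # publish once per second

    def tick(self):
        msg = String()
        msg.data = 'pico alive'  # placeholder for real sensor data
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = PicoBridge()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```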

 

Inertial odometry is a cost-effective solution for quadrotor state estimation, but it suffers from drift. This study introduces a learning-based odometry algorithm for drone racing that combines inertial measurements with a model-based filter. Results show it outperforms competing approaches, making it a promising direction for agile quadrotor flight research.

Source: Davide Scaramuzza
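As a rough illustration of the idea (not the paper's actual pipeline), the toy 1D example below propagates a position/velocity state from noisy, biased IMU accelerations and periodically corrects it with a displacement pseudo-measurement standing in for a learned network's output. The noise levels, the filter structure, and the fake "network" are all assumptions for the sketch.

```python
# Toy 1D sketch: IMU-only integration drifts, while periodic displacement
# pseudo-measurements (standing in for a learned network) keep a simple
# Kalman-style filter bounded. All numbers here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
dt, steps = 0.01, 1000
true_acc = np.sin(np.linspace(0, 10, steps))            # ground-truth acceleration
imu_acc = true_acc + rng.normal(0, 0.2, steps) + 0.05   # noisy, biased IMU readings

# Ground-truth velocity and position by (crude) integration.
true_vel = np.cumsum(true_acc) * dt
true_pos = np.cumsum(true_vel) * dt

# State: [position, velocity]; propagate with the IMU, correct with the "network".
x = np.zeros(2)
P = np.eye(2) * 1e-3
Q = np.diag([1e-5, 1e-3])   # process noise: uncertainty grows between corrections
R = np.array([[1e-3]])      # pseudo-measurement noise
window = 100                # one correction every `window` IMU samples

est_pos = []
last_corrected_pos = 0.0
for k in range(steps):
    # Propagation with a constant-acceleration model driven by the IMU sample.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt**2, dt])
    x = F @ x + B * imu_acc[k]
    P = F @ P @ F.T + Q

    if (k + 1) % window == 0:
        # Stand-in for the learned network: displacement over the last window,
        # faked here from ground truth plus a little noise.
        d_hat = (true_pos[k] - true_pos[k + 1 - window]) + rng.normal(0, 0.01)
        z = last_corrected_pos + d_hat            # implied absolute position
        H = np.array([[1.0, 0.0]])
        y = z - H @ x                             # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + (K @ y).ravel()
        P = (np.eye(2) - K @ H) @ P
        last_corrected_pos = x[0]
    est_pos.append(x[0])

print("final position error (m):", abs(est_pos[-1] - true_pos[-1]))
```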

 

The video shows the latest generation of robots, the evoBOT. The researchers and developers talk about the robot's versatile applications, describe how it mimics human movements and adapts to different environments, and explain the technology behind it. The evoBOT is a milestone in robotics and offers numerous advantages: it can take over repetitive tasks in industry and could also play an important role in healthcare.

 

We found that, over a six-year period (2016-2021), the percentage of papers with code at major machine learning, robotics, and control conferences at least doubled. Moreover, high-impact papers were generally supported by open-source code. As an example, the top 1% of most cited papers at the Conference on Neural Information Processing Systems (NeurIPS) consistently included open-source code. In addition, our analysis shows that popular code repositories are generally associated with highly cited papers, which further highlights the coupling between code sharing and the impact of scientific research.

Source: Learning Systems and Robotics Lab

 

Reinforcement learning's (RL's) most impressive achievements are beyond the reach of existing optimal control (OC) based systems. However, less attention has been paid to systematically studying the fundamental factors behind the success of reinforcement learning and the limitations of optimal control. This question can be investigated along two axes: the optimization method and the optimization objective. Our results indicate that RL outperforms OC because it optimizes a better objective: OC decomposes the problem into planning and control with an explicit intermediate representation, such as a trajectory, that serves as an interface. This decomposition limits the range of behaviors the controller can express, leading to inferior control performance when facing unmodeled effects. In contrast, RL can directly optimize a task-level objective and can leverage domain randomization to cope with model uncertainty, allowing the discovery of more robust control responses. This work is a significant milestone in agile robotics and sheds light on the pivotal roles of RL and OC in robot control.

Source: Davide Scaramuzza
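The domain-randomization point can be made concrete with a toy example. The sketch below is my own illustration, not the paper's experiment: it tunes the same simple feedback policy for a 1D point mass twice, once on a nominal model only and once across randomized masses scored by a task-level reward, then evaluates both on an unmodeled heavier system. The gains, mass ranges, and random-search "training" are all assumptions.

```python
# Toy contrast: tuning a task-level policy on a nominal model vs. under
# domain randomization, then testing on an unmodeled heavier system.
import numpy as np

rng = np.random.default_rng(1)
dt, T, target = 0.05, 100, 1.0

def rollout(gains, mass):
    """Roll out a simple task-level feedback policy and return the task reward."""
    kp, kd = gains
    pos, vel, reward = 0.0, 0.0, 0.0
    for _ in range(T):
        u = kp * (target - pos) - kd * vel      # acts directly on the task error
        vel += (u / mass) * dt
        pos += vel * dt
        reward -= (target - pos) ** 2           # task-level objective
    return reward

# Random-search "training": the same candidate policies for both variants.
candidates = [rng.uniform(0.0, 20.0, 2) for _ in range(200)]

# Nominal-model tuning: score each candidate on the nominal mass only.
nominal_gains = max(candidates, key=lambda g: rollout(g, mass=1.0))

# Domain-randomized tuning: score the same candidates across randomized masses.
dr_gains = max(candidates,
               key=lambda g: np.mean([rollout(g, m) for m in rng.uniform(0.5, 2.0, 8)]))

# Evaluate both on an unmodeled, heavier system.
print("nominal-tuned reward on mass 2.5    :", rollout(nominal_gains, 2.5))
print("randomized-tuned reward on mass 2.5 :", rollout(dr_gains, 2.5))
```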
