This paper focuses on the development of a computer-vision system to estimate the movement of an MAV (X, Y, Z, and yaw). The system integrates a set of cameras, physical and digital image filtering, and position estimation through system calibration and an algorithm based on experimentally derived equations. The system represents a low-cost alternative, both computationally and economically, capable of estimating the position of an MAV with significantly low error on a millimeter scale, so that almost any camera available on the market can be used. It was developed to offer an affordable platform for the research and development of new autonomous and intelligent systems for indoor environments.
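The pose-estimation step described above — mapping calibrated image coordinates to world positions in millimeters — can be illustrated with a minimal sketch. The function names, scale factors, and the two-camera arrangement below are illustrative assumptions, not values or code from the paper:

```python
# Sketch of calibration-based pose estimation: pixel coordinates of the MAV
# are mapped to millimetres with experimentally fitted linear equations.
# All numeric constants here are placeholder calibration values.

def pixels_to_mm(u, v, scale_x=0.85, scale_y=0.85, origin=(320, 240)):
    """Convert image coordinates (u, v) to world X, Y in millimetres."""
    x_mm = (u - origin[0]) * scale_x
    y_mm = (v - origin[1]) * scale_y
    return x_mm, y_mm

def estimate_position(top_uv, front_uv, scale_z=0.9, ref_height=480):
    """Fuse a top-camera reading (X, Y) with a front-camera reading (Z)."""
    x_mm, y_mm = pixels_to_mm(*top_uv)
    z_mm = (ref_height - front_uv[1]) * scale_z  # height above the floor
    return x_mm, y_mm, z_mm
```

In practice the scale factors would come from the calibration procedure the paper describes, and a fourth measurement (marker orientation in the top view) would supply yaw.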
Convolutional Neural Network-Based Monocular Object Detection and Depth Perception for Micro UAVs
In this work, we present a system for real-time detection and depth estimation of objects using the on-board camera of a micro-UAV through convolutional neural networks. Obstacle detection has traditionally relied on visual SLAM systems; however, that level of complexity is not necessary for this problem, and avoiding it saves resources and execution time. By training convolutional neural networks on stereo images for depth estimation, and likewise training them to detect common observable objects, accurate obstacle detection can be obtained in real time.
A fundamental element in determining the position (pose) of an object is the ability to determine its rotation and translation in space. Visual odometry is the process of determining the location and orientation of a camera by analyzing a sequence of images. The algorithm traces the trajectory of a body in an open environment by comparing the mapping of points across a sequence of images to determine the variation in translation and rotation. Lane detection is proposed as feedback to the visual odometry algorithm, allowing more robust results. The algorithm was programmed with OpenCV 3.0 in Python 2.7 and run on Ubuntu 16.04. Given the satisfactory results obtained, the development of a computational platform capable of determining the position of a vehicle in space for parking assistance is projected.
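The trajectory-tracing step of visual odometry can be sketched as follows. Each frame pair yields a relative rotation R and translation t (in practice recovered from matched feature points, e.g. with OpenCV's findEssentialMat and recoverPose); the global pose is then accumulated by composition. This is a minimal sketch of that accumulation, not the paper's implementation:

```python
import numpy as np

def accumulate_trajectory(relative_motions):
    """Compose per-frame motions into a camera trajectory.

    relative_motions: iterable of (R, t), where R is a 3x3 rotation matrix
    and t a 3-vector, both expressed in the previous camera frame.
    Returns an (N+1) x 3 array of camera positions.
    """
    R_world = np.eye(3)          # accumulated orientation
    position = np.zeros(3)       # accumulated position
    path = [position.copy()]
    for R, t in relative_motions:
        # Translate in the current world-aligned frame, then rotate it.
        position = position + R_world @ np.asarray(t, dtype=float)
        R_world = R_world @ np.asarray(R, dtype=float)
        path.append(position.copy())
    return np.array(path)
```

Note that with a monocular camera each t is known only up to scale, which is why the abstract's lane-detection feedback (or any metric reference) helps constrain the result.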
Robust Motion Estimation Based on Multiple Monocular Cameras for Indoor Autonomous Navigation of Micro Aerial Vehicles
Simulation System Based on Augmented Reality for Optimization of Training Tactics in Military Operations
In this article, we propose an augmented reality system developed with Unity and Vuforia. The system simulates a war environment using three-dimensional objects and audiovisual resources to recreate a real armed conflict. Vuforia uses its database to create the target image and, together with the resources of the Unity game engine, animation algorithms are developed and applied to the 3D objects. At the hardware level, the system uses physical images and the camera of a mobile device, which, combined with the software, allows the interaction of the objects to be visualized through Vuforia's image recognition and tracking algorithms. The system allows the user to interact with the physical field and the digital objects through virtual buttons. The system was designed and tested for mobile devices running the Android operating system, as these show acceptable performance and easy integration of applications.
Unmanned aerial vehicle applications are directly related to the installed payload. An electro-optical/infrared payload is required for law enforcement, traffic spotting, reconnaissance, surveillance, and target acquisition. A commercial off-the-shelf electro-optical/infrared camera is presented as a case study for the development of an interface to control the UAV payload. Based on the proposed architecture, the interface shows the information from the sensor and combines it with data from the UAV systems. The interface is validated in UAV flight tests. The software interface enhances the original performance of the camera with a fixed-point automatic tracking feature. Flight-test results show the possibility of adapting the interface to implement electro-optical cameras in different aircraft.
In this article, we present real-time depth estimation using the on-board camera of a micro-UAV through convolutional neural networks. Experiments and results from implementing the system on a micro-UAV are presented to verify the improvement of the unsupervised model with monocular cameras and its error with respect to the real model.
This article establishes the physical design of a tetrapod robot, highlighting its low-level three-dimensional movement capabilities for moving from one point to another. The navigation system was also examined in undefined environments from a top-down perspective, with the analysis and processing of images aimed at avoiding collisions between the robot and static obstacles. Using probabilistic techniques and partial information about the environment, RRT generates less artificial paths.
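The RRT planner mentioned above grows a tree from the start point by repeatedly steering toward random samples and rejecting extensions that collide with obstacles. The following is a minimal 2D sketch of that idea, with illustrative geometry and parameters that are not taken from the paper:

```python
import math
import random

def rrt(start, goal, obstacles, bounds=(0.0, 10.0), step=0.5, max_iters=5000):
    """Plan a path from start to goal with a basic RRT.

    obstacles: list of (center, radius) circular obstacles.
    Returns the path as a list of 2D points, or None if no path is found.
    """
    def collides(p):
        return any(math.dist(p, c) <= r for c, r in obstacles)

    nodes = [start]
    parent = {0: None}
    for _ in range(max_iters):
        # Sample a random point, with a 10% bias toward the goal.
        sample = goal if random.random() < 0.1 else (
            random.uniform(*bounds), random.uniform(*bounds))
        # Find the nearest existing node and step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        d = math.dist(near, sample)
        if d == 0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if collides(new):
            continue
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < step:
            # Goal reached: backtrack through parents to recover the path.
            path, k = [goal], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

The "less artificial" paths the abstract refers to would, in practice, come from post-processing (e.g. shortcut smoothing) of the raw tree path.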