Week 7: How The Drone Net Works - Software Pt. 1 - Code
Hello Everyone! This week, I have been coordinating resources for the upcoming Drone Net tests. Because Embry-Riddle Aeronautical University is only a quarter mile from the Prescott airport, the research team has to file a flight plan in order to fly drones on campus. We also can't fly the drones while classes are in session, so each test is tightly scheduled and designed to gather data on drone flight patterns quickly.
This week, I would like to begin a discussion of the Drone Net software.
What software does Drone Net utilize?
Currently, Drone Net builds its software on an open-source computer vision library called OpenCV, and the code is written in C++.
Computer vision describes the ways that computers can gain an 'understanding' of digital images or videos. Let's take a minute to discuss this concept. A computer obviously does not see things the way we humans do; therefore, we must design some way of creating meaningful data from images or videos. OpenCV is a publicly available library of code that can be used to analyze and interpret input from cameras and other visual sources.
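To give a feel for what working with OpenCV looks like, here is a minimal sketch in C++ that opens a camera and reads frames as matrices of pixels. This is an illustrative example, not the actual Drone Net code; the device index 0 and the window name are placeholders.

```cpp
// Minimal sketch: reading frames from a camera with OpenCV.
// Illustrative only -- not the actual Drone Net code.
#include <opencv2/opencv.hpp>
#include <iostream>

int main() {
    cv::VideoCapture camera(0);  // open the default camera (device index 0 is an assumption)
    if (!camera.isOpened()) {
        std::cerr << "Could not open the camera." << std::endl;
        return 1;
    }

    cv::Mat frame;
    while (camera.read(frame)) {        // grab the next frame as a matrix of pixel values
        cv::imshow("Camera", frame);    // display what the program "sees"
        if (cv::waitKey(30) == 27) break;  // exit when the Esc key is pressed
    }
    return 0;
}
```

Every frame arrives as a `cv::Mat`, a grid of numbers, and all of the higher-level analysis is ultimately math performed on those numbers.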
Also, Drone Net uses three separate cameras that are positioned in physically distinct locations. Instead of analyzing each camera separately, we fuse the camera inputs together and feed them into the software as one source of data. Our human eyes are two separate inputs that our brain interprets together, allowing us to perceive things like depth and visual acceleration. Likewise, OpenCV allows the Drone Net software to see all three inputs as an amalgam and 'know' its environment better. For example, the following image was taken using the Drone Net prototype in Alaska, to show how the input from the sensors can be fused and analyzed in real time.
An image of a moose, captured by the Drone Net prototype while it was deployed in Alaska.
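To make the fused-input idea more concrete, here is a simplified sketch of reading from three cameras and handing the rest of the pipeline a single combined image. This is only a stand-in for the real fusion: the device indices 0, 1, and 2, the 640x480 frame size, and side-by-side concatenation are all assumptions made for illustration.

```cpp
// Simplified sketch: treating three cameras as one fused input source.
// Side-by-side concatenation is a stand-in for the real fusion logic.
#include <opencv2/opencv.hpp>
#include <vector>

int main() {
    // Assumed device indices 0, 1, 2 for the three cameras.
    std::vector<cv::VideoCapture> cameras{
        cv::VideoCapture(0), cv::VideoCapture(1), cv::VideoCapture(2)};

    for (auto& cam : cameras)
        if (!cam.isOpened()) return 1;  // bail out if any camera is missing

    while (true) {
        std::vector<cv::Mat> frames(3);
        for (int i = 0; i < 3; ++i) {
            if (!cameras[i].read(frames[i])) return 0;
            // Normalize each frame to a common size so they can be stitched together.
            cv::resize(frames[i], frames[i], cv::Size(640, 480));
        }

        cv::Mat fused;
        cv::hconcat(frames, fused);        // combine the three views into one image
        cv::imshow("Fused input", fused);  // downstream analysis sees a single stream
        if (cv::waitKey(30) == 27) break;  // Esc to quit
    }
    return 0;
}
```

The key point is the last few lines: once the frames are merged, everything downstream operates on one source of data rather than juggling three.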
There are far too many lines of code to discuss in detail. Also, I don't want to turn this blog into a Coding 101 tutorial. If you are interested in viewing the current code for the Drone Net, please visit the repository on GitHub.
Next week, I plan to continue the discussion of the Drone Net's software by discussing the basics of how the software distinguishes between aerial objects and environmental sources of movement.