You provide the aircraft.
KEF provides the autonomy.
Since our founding, we’ve focused on providing flexible autonomy solutions that work across a wide range of aircraft. Our goal is “any computer, any camera, any aircraft”, and striving for seamless integration drives our approach.
KEF’s mission is to leverage the visual spectrum to provide onboard safety and autonomy systems for new and existing aircraft. These systems allow our customers to solve existing problems and tackle new challenges. While we focus on deployed autonomy and fly our systems constantly, we’re also tackling fundamental challenges in software and simulation so that any aircraft can leverage cameras more effectively.
Relative Navigation with Optical Flow
We use a nadir-pointing camera and optical flow to provide 'relative' navigation to an aircraft. The video below is from a demonstration flight for an Air Force customer. The paired trajectories traced in the left frame compare GPS (which we don't use, but record for analytic purposes) against our software's state estimate (red). We flew at altitudes up to 400' (FAA limited) and speeds of 30 miles per hour, with drift of 1% (Aug 2021).
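The geometric idea behind nadir-camera relative navigation can be sketched in a few lines. This is a hedged illustration, not KEF's pipeline: it assumes a pinhole camera, flat ground, known altitude, and optical flow already estimated in pixels per frame (the function names `pixels_to_meters` and `integrate` are our own for this example).

```python
# Minimal sketch of optical-flow dead reckoning with a nadir-pointing camera.
# Assumptions: pinhole model, flat ground, known altitude, per-frame flow
# already computed elsewhere. Illustrative only.

def pixels_to_meters(flow_px, altitude_m, focal_px):
    # Pinhole geometry: ground displacement = pixel flow * altitude / focal length
    fx, fy = flow_px
    return fx * altitude_m / focal_px, fy * altitude_m / focal_px

def integrate(flows_px, altitude_m, focal_px):
    # Accumulate per-frame ground displacements into a relative position.
    x = y = 0.0
    for flow in flows_px:
        dx, dy = pixels_to_meters(flow, altitude_m, focal_px)
        x += dx
        y += dy
    return x, y
```

Because each frame's displacement is summed, small per-frame errors accumulate, which is why drift is quoted as a percentage of distance flown.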
Hazard detection and avoidance
Stereo vision allows our software to detect hazards and plan safe trajectories at short ranges. Last winter (Dec 2020) we showcased this capability in a forested environment on a customer's drone. The top left image is a single camera's output with visualizations of hazards (voxels), waypoints, and trajectories; the top right shows an 'over the shoulder' view; and the bottom center image is from a 'GoPro' camera mounted on the aircraft (artificially lightened). No maps, no operator, no GPS, on a ~4', ~35 pound aircraft. We constantly update and forget the map as we fly, running alongside our other software modules.
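The "update and forget" voxel map can be sketched as a short-memory occupancy structure: stereo points are binned into voxels, and voxels that are not re-observed within a time budget are dropped. The class and parameter names below are illustrative assumptions, not KEF's software.

```python
# Hedged sketch of a rolling ("update and forget") voxel hazard map.
# Points are binned into fixed-size voxels; any voxel not re-observed
# within `ttl` frames is forgotten. Illustrative only.

class RollingVoxelMap:
    def __init__(self, voxel_size=0.5, ttl=30):
        self.voxel_size = voxel_size
        self.ttl = ttl          # frames a voxel survives without re-observation
        self.last_seen = {}     # voxel index -> frame of last observation
        self.frame = 0

    def insert(self, points):
        """Bin new 3D points into voxels, then drop stale voxels."""
        self.frame += 1
        for x, y, z in points:
            key = (int(x // self.voxel_size),
                   int(y // self.voxel_size),
                   int(z // self.voxel_size))
            self.last_seen[key] = self.frame
        stale = [k for k, t in self.last_seen.items()
                 if self.frame - t > self.ttl]
        for k in stale:
            del self.last_seen[k]

    def occupied(self):
        return set(self.last_seen)
```

Forgetting keeps memory bounded and lets the map run continuously alongside other onboard modules, at the cost of only remembering recently seen hazards.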
Leveraging Machine Learning on Aircraft
We can take the same data we're using to navigate and leverage Machine Learning to understand mission-relevant context. KEF’s first use of machine learning was to recognize ‘racing gates’ during AlphaPilot. Today we’re training our lightweight networks to classify, recognize, and localize objects of interest to customers and pilots.
Since we don't build aircraft, we make our capabilities as easy as possible to deploy. In the video below, we're running our visual navigation on a gimbal with rolling shutter cameras, achieving drift rates under 2% at altitudes up to 400'.
Terrain Relative Navigation for GPS-denied Environments
Without GPS, aircraft can use visual landmarks to determine their position. KEF is building new tools to provide a solution regardless of season or time of day, robust to weather and reduced visibility. As of Fall 2021, we're engaged in an active test campaign to develop and prove that our software works at altitudes up to 7,000' in multiple spectrums, and we're hoping snow makes the ground nearly unrecognizable. Nearly... if we can extract a position during winter from a map captured in summer, that will be something novel (as far as we know). Funded by the Air Force, we are nearly ready to test on fixed wing aircraft, and we are seeking partners.
So far, our software handles these distinctly different images without getting confused, albeit in limited and preliminary testing.
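The geometric core of terrain relative navigation can be illustrated by template matching: slide the live nadir image over a georeferenced map and take the location with the highest normalized cross-correlation. Real cross-season, cross-spectrum matching requires far more robust features than raw pixels; the `locate` function below is only a hedged sketch of the positioning idea.

```python
# Illustrative sketch of terrain-relative positioning: exhaustively match a
# live nadir image (template) against a reference map using normalized
# cross-correlation. Not robust to seasonal change; geometric idea only.
import numpy as np

def locate(template, reference):
    """Return (row, col) in `reference` where `template` matches best."""
    th, tw = template.shape
    rh, rw = reference.shape
    t = template - template.mean()
    best_score, best_rc = -np.inf, (0, 0)
    for r in range(rh - th + 1):
        for c in range(rw - tw + 1):
            window = reference[r:r + th, c:c + tw]
            w = window - window.mean()
            denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
            score = (t * w).sum() / denom if denom else 0.0
            if score > best_score:
                best_score, best_rc = score, (r, c)
    return best_rc
```

With a known map resolution, the recovered (row, col) converts directly to a geographic position, giving an absolute fix without GPS.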