Technical Takes
Internal R&D: pushing state-of-the-art AI technology forward

11/02/2021 by Preligens

How do you make sure you always build on the state of the art when developing AI solutions?

The literature is vast and constantly growing; reading through it to select the right techniques to adapt into your technology takes time and human resources.

This is especially true for a field as fast-moving and complex as AI, which spans techniques such as Machine Learning and Deep Learning, but can also involve GANs, 3D technology, Convolutional Neural Networks, and more.

At Preligens, not only do we conduct intensive research on foreign defense doctrines, but we put an even greater effort into implementing state-of-the-art AI technology in our products, so that our customers enjoy world-class AI solutions. And it all starts with an extensive internal R&D team, whose missions are numerous: conducting upstream research for production, leading the company's technology watch while maintaining connections with academia, and finally, making our expertise known through publications and outreach.

For once, we are taking you behind the curtain, where the upstream magic happens. Let’s dig into our internal R&D and discuss one of the team’s ongoing projects: image simulation!

Image Simulation, or in other words, creating synthetic data to complement real data

As we have mentioned previously, Artificial Intelligence is all about data! But not just any kind of data: the performance of neural networks heavily depends on the quality and quantity of the data they are fed.

In the case of Preligens’ object identification algorithms, the cornerstone data are satellite images. Basically, the more images the algorithms see, and the more diverse those images are, the better the algorithms will perform.
Though the number of sensors providing the world with Earth imagery keeps increasing, it can sometimes be difficult to gather enough data, or to label it properly, to cover rare cases: a brand new aircraft like the S-70 or the J-20, new camouflage patterns, or even known objects in a new environment.

Simulation is hence used to fill the gap: it generates labelled images on which a neural network can be trained or fine-tuned, in order to cover those edge cases.

The bulk of the research team’s work therefore consists in finding the most efficient way to generate such images, and in determining how to use them during training to maximize their effectiveness.

This is how it works:

On the one hand, a set of background images is created from real (2D) satellite images, in order to cover all kinds of landscapes (snow, grass, ice, desert, …).
On the other hand, 3D models of the objects of interest to be trained on are generated. By combining the 2D and 3D data, the team recreates virtual scenes to be processed by the simulation pipeline.

The pipeline then produces two types of output:

  • a new simulated 2D image that contains both the background and the objects to detect, but in a flat environment,
  • a label file that is then used as ground truth to train the algorithms.
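
To make this more concrete, here is a highly simplified sketch of that composition step in Python. This is not Preligens’ actual pipeline: it assumes the 3D object has already been rendered into a 2D sprite, and the file names, JSON label schema and render_object_view helper are all hypothetical.

```python
import json
from PIL import Image

def render_object_view(sprite_path: str) -> Image.Image:
    """Placeholder for rendering a 3D model into a 2D RGBA sprite.
    Here we simply load a pre-rendered sprite from disk."""
    return Image.open(sprite_path).convert("RGBA")

def compose_scene(background_path: str, sprite_path: str, position: tuple,
                  class_name: str, out_image: str, out_label: str) -> None:
    background = Image.open(background_path).convert("RGBA")
    sprite = render_object_view(sprite_path)

    # Paste the object onto the background, using the sprite's alpha channel as a mask.
    background.paste(sprite, position, mask=sprite)
    background.convert("RGB").save(out_image)

    # Ground-truth label: the paste position and the sprite size give the bounding box.
    x, y = position
    label = {
        "image": out_image,
        "objects": [{
            "class": class_name,
            "bbox": [x, y, x + sprite.width, y + sprite.height],  # [xmin, ymin, xmax, ymax]
        }],
    }
    with open(out_label, "w") as f:
        json.dump(label, f, indent=2)

compose_scene("desert_background.png", "aircraft_render.png", (120, 240),
              "aircraft", "simulated_chip.png", "simulated_chip.json")
```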

Using image simulation to recreate specific rare observables (such as particular aircraft), the research team showed that fine-tuning a neural network on a mix of a subset of real data and simulated data focused on a problematic object can significantly improve the network’s performance, and hence its detection results. For example, applying this technique to specific drone models increased detection performance by up to 27%.

Example of real background images + [mq1, mq9, su70] simulated drones at plausible locations
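
For readers curious about what fine-tuning on a mix of real and simulated data can look like in practice, here is a minimal sketch assuming a PyTorch setup. The tiny stand-in datasets and classifier are placeholders, not Preligens’ actual detector or training code; the point is simply the weighted mixing of the two data sources.

```python
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader, Dataset, WeightedRandomSampler

class RandomChips(Dataset):
    """Stand-in dataset yielding (image, label) pairs with plausible shapes."""
    def __init__(self, n: int):
        self.n = n
    def __len__(self):
        return self.n
    def __getitem__(self, idx):
        return torch.rand(3, 64, 64), torch.randint(0, 2, (1,)).item()

real_ds = RandomChips(1000)        # would be the real labelled chips
simulated_ds = RandomChips(200)    # would be the simulated chips from the pipeline above
mixed_ds = ConcatDataset([real_ds, simulated_ds])

# Weight the sampler so simulated chips make up roughly 30% of each batch,
# regardless of how few of them there are compared to real data.
weights = [0.7 / len(real_ds)] * len(real_ds) + [0.3 / len(simulated_ds)] * len(simulated_ds)
sampler = WeightedRandomSampler(weights, num_samples=len(mixed_ds), replacement=True)
loader = DataLoader(mixed_ds, batch_size=16, sampler=sampler)

# Stand-in network; in practice this would be the pretrained detector being fine-tuned.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):
    for images, labels in loader:
        loss = criterion(model(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

Adjusting the sampler weights controls how often the network sees simulated examples of the rare object without letting them drown out the real data.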

Moreover, the team can also recreate meteorological variations such as snow, dust or cloud masks, which can be applied to any image as an augmentation technique (see example below), in order to increase the variety of virtual scenes available to the simulation pipeline.
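
As a toy illustration of that augmentation idea (again, not Preligens’ implementation), one can alpha-blend a semi-transparent cloud or snow layer over a training chip. The file names below are hypothetical, and the mask is assumed to have the same size as the image.

```python
import numpy as np
from PIL import Image

def apply_weather_mask(image_path: str, mask_path: str, opacity: float = 0.6) -> Image.Image:
    """Alpha-blend a semi-transparent weather layer (cloud, snow, dust) over an image."""
    image = np.asarray(Image.open(image_path).convert("RGB"), dtype=np.float32)
    mask = np.asarray(Image.open(mask_path).convert("RGBA"), dtype=np.float32)

    # Per-pixel alpha taken from the mask, scaled by a global opacity factor.
    alpha = (mask[..., 3:4] / 255.0) * opacity
    blended = (1.0 - alpha) * image + alpha * mask[..., :3]
    return Image.fromarray(blended.astype(np.uint8))

augmented = apply_weather_mask("satellite_chip.png", "cloud_mask.png")
augmented.save("satellite_chip_cloudy.png")
```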

Pretty awesome, right? And that’s just one project among the many ongoing R&D projects at Preligens.

Care to learn more about them? Get in touch, we’d be happy to deepen the conversation!

Related articles
Submarine monitoring at Bandar Abbas
The analyst’s expertise, cornerstone of the surveillance process
A concrete AI application: Multi sources data analysis to localize planes