
A New Dataset for Geospatial Visual Localisation: egenioussBench
Determining a camera’s pose from images, known as visual localisation, is fundamental to applications from autonomous driving and robotics to augmented reality, yet existing datasets face two key issues. First, they often lack the scale needed for large scenes, limiting progress towards truly scalable methods. Second, when they do cover large scenes, they often provide imprecise ground-truth poses for the query images. egenioussBench overcomes these limitations by pairing a high-resolution aerial 3D mesh and a CityGML LoD2 model as geospatial reference data with map-independent ground-level smartphone imagery as query data, whose centimetre-accurate poses are obtained via PPK and a GCP/CP-aided adjustment.
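For readers new to the task, the pose estimation problem that such datasets benchmark can be sketched with a minimal Direct Linear Transform (DLT) camera resection on synthetic 3D–2D correspondences. All intrinsics, poses, and points below are illustrative assumptions for the sketch, not egenioussBench data or its authors' method:

```python
import numpy as np

def dlt_projection_matrix(pts3d, pts2d):
    """Estimate a 3x4 camera projection matrix from >= 6 noiseless
    3D-2D correspondences via the Direct Linear Transform."""
    rows = []
    for (X, Y, Z), (u, v) in zip(pts3d, pts2d):
        p = [X, Y, Z, 1.0]
        rows.append(p + [0.0] * 4 + [-u * c for c in p])
        rows.append([0.0] * 4 + p + [-v * c for c in p])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)  # null vector = flattened P, up to scale

# Synthetic ground truth (illustrative values only).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])            # pinhole intrinsics
th = 0.1                                   # small yaw rotation (radians)
R = np.array([[np.cos(th), 0.0, np.sin(th)],
              [0.0, 1.0, 0.0],
              [-np.sin(th), 0.0, np.cos(th)]])
t = np.array([[0.2], [-0.1], [5.0]])       # camera ~5 m from the scene
P_gt = K @ np.hstack([R, t])

# Seven non-coplanar 3D points and their exact image projections.
pts3d = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 1],
                  [0.5, 0.2, 1.5], [1.2, 0.8, 0.5], [0.3, 1.1, 0.9]], float)
homog = np.hstack([pts3d, np.ones((len(pts3d), 1))])
proj = (P_gt @ homog.T).T
pts2d = proj[:, :2] / proj[:, 2:3]

P_est = dlt_projection_matrix(pts3d, pts2d)
# P is defined only up to scale; normalise both before comparing.
P_est /= P_est[-1, -1]
P_ref = P_gt / P_gt[-1, -1]
print(np.max(np.abs(P_est - P_ref)))       # tiny residual for noiseless data
```

Real localisation pipelines add feature matching, robust estimation (e.g. RANSAC), and refinement, but the core geometry is this resection step; imprecise ground-truth poses in a benchmark corrupt exactly the comparison made in the last lines above.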