Feels like a "draw a circle, draw the rest of the owl" situation. How much metadata is stored in the video file? Are they storing a GPS update per frame, per second, or once at the start of the file? I'm totally unfamiliar with the dashcam used, so I've just never seen the GPS data embedded as described.
Tesla recently started embedding GPS and other telemetry into their dashcam videos (wheel turn percent, accelerator percent, etc.). It's stored as a separate data stream in the video file and is very easy to extract and parse.
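As a rough sketch of what "extract and parse" could look like: the data stream can be pulled out of the MP4 container with a stock ffmpeg, and then decoded. The record layout below (newline-delimited JSON with `lat`/`lon`/`speed` keys) is purely an assumption for illustration; the actual in-stream format Tesla uses may differ.

```python
import json

# The data stream itself can be separated from the video with ffmpeg, e.g.:
#   ffmpeg -i clip.mp4 -map 0:d:0 -c copy -f data telemetry.bin
# What follows assumes a hypothetical newline-delimited JSON layout.

def parse_telemetry(raw: bytes):
    """Parse newline-delimited JSON telemetry records (assumed format)."""
    fixes = []
    for line in raw.splitlines():
        line = line.strip()
        if not line:
            continue
        rec = json.loads(line)
        fixes.append({
            "lat": rec["lat"],
            "lon": rec["lon"],
            "speed_mph": rec.get("speed", 0.0),
        })
    return fixes

sample = b'{"lat": 37.7749, "lon": -122.4194, "speed": 31.5}\n'
print(parse_telemetry(sample))
```

Once parsed per-frame like this, the fixes can be re-attached to extracted frames as EXIF geotags.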
In theory, machine vision could extract the coordinates overlaid on the frames of the video.
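A minimal sketch of the post-OCR step, assuming an OCR pass (e.g. Tesseract) has already turned each frame's overlay region into a text string: a regex picks out a decimal coordinate pair and range-checks it, so obvious misreads are rejected. All names here are illustrative.

```python
import re

# Matches a "lat, lon" decimal-degree pair as commonly burned into dashcam overlays.
COORD_RE = re.compile(r"(-?\d{1,3}\.\d{3,8})\s*,?\s*(-?\d{1,3}\.\d{3,8})")

def extract_coords(ocr_text: str):
    """Return (lat, lon) if the OCR text contains a plausible pair, else None."""
    m = COORD_RE.search(ocr_text)
    if not m:
        return None
    lat, lon = float(m.group(1)), float(m.group(2))
    # Reject OCR misreads that fall outside valid coordinate ranges.
    if not (-90 <= lat <= 90 and -180 <= lon <= 180):
        return None
    return lat, lon

print(extract_coords("GPS: 37.774929, -122.419415"))
```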
You're unlikely to get good data from all frames automatically due to the changing background, but I'd have thought you could get enough good data to make it work.
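One hedged way to salvage "enough good data" from a noisy per-frame extraction: drop any fix whose distance from the last accepted fix implies an implausible speed. This is a deliberately simple sketch (it only compares against the last kept point; a real pipeline might smooth or interpolate instead), and the frame rate and speed cap are assumed values.

```python
import math

def haversine_m(p, q):
    """Great-circle distance in metres between two (lat, lon) pairs."""
    lat1, lon1, lat2, lon2 = map(math.radians, (*p, *q))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = (math.sin(dlat / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin(dlon / 2) ** 2)
    return 6371000 * 2 * math.asin(math.sqrt(a))

def filter_fixes(fixes, fps=1.0, max_speed_mps=60.0):
    """Drop fixes implying an implausible speed since the last kept fix."""
    kept = []
    for fix in fixes:
        if kept and haversine_m(kept[-1], fix) * fps > max_speed_mps:
            continue  # likely an OCR misread; skip this frame
        kept.append(fix)
    return kept

noisy = [(37.7749, -122.4194), (37.7750, -122.4194),
         (38.7750, -122.4194),  # ~111 km jump: a digit misread
         (37.7751, -122.4194)]
print(filter_fixes(noisy))
```

The bad fix is rejected because a one-degree latitude jump between consecutive one-second frames would mean roughly 111 km/s, far above any plausible vehicle speed.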
Hah. Panoramax requires you to convert your video to geotagged images; Google Street View requires you to convert your images to video. (For the "blue line" continuous coverage. They also require 360 panos, but still.)