For many, gazing at an old photo of a city can evoke feelings of both nostalgia and wonder — what was it like to walk through Manhattan in the 1940s? How much has the street one grew up on changed? While Google Street View allows people to see what an area looks like in the present day, what if you want to explore how places looked in the past?
To create a rewarding “time travel” experience for both research and entertainment purposes, we are launching rǝ (pronounced "re-turn"), an open source, scalable system running on Google Cloud and Kubernetes that reconstructs cities from historical maps and photos, building on the suite of open source tools we launched earlier this year. Referencing the common prefix meaning again or anew, rǝ represents the themes of reconstruction, research, recreation and remembering behind this crowdsourced research effort, and consists of three components:
- A crowdsourcing platform, which allows users to upload historical maps of cities, georectify them (i.e., match them to real-world coordinates), and vectorize them
- A temporal map server, which shows how maps of cities change over time
- A 3D experience platform, which runs on top of the rǝ map server and uses deep learning to reconstruct buildings in 3D from limited historical image and map data.
Our goal is for rǝ to become a compendium that allows history enthusiasts to virtually experience historical cities around the world, aids researchers, policy makers and educators, and provides a dose of nostalgia to everyday users.
Crowdsourcing Data from Historical Maps
Reconstructing at scale how cities used to look is a challenge: historical image data is more difficult to work with than modern data, as there are far fewer images available and much less metadata captured from them. To address this, the rǝ maps module provides a suite of open source tools that work together to create a map server with a time dimension, allowing users to jump back and forth between time periods using a slider. These tools let users upload scans of historical print maps, georectify them to match real-world coordinates, and then convert them to vector format by tracing their geographic features. The vectorized maps are then served from a tile server and rendered as slippy maps, which let users zoom in and pan around.
Sub-modules of the rǝ suite of tools
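For readers unfamiliar with the slippy map scheme mentioned above, tiles are addressed by a zoom level and an (x, y) index in Web Mercator. The sketch below shows the standard OSM tile-indexing math; it is illustrative only and not code from the rǝ tools themselves.

```python
import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    """Standard OSM/Web Mercator conversion from lat/lon to slippy-map tile indices."""
    n = 2 ** zoom  # tiles per axis at this zoom level
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

# Lower Manhattan at zoom 15 -- the kind of tile a renderer like Kartta would request.
print(latlon_to_tile(40.7128, -74.0060, 15))
```

Conceptually, a temporal tile server like rǝ's additionally keys tiles by date, so moving the slider selects which historical snapshot of the vector data is rendered for the same (zoom, x, y) address.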
The entry point of the rǝ maps module is Warper, a web app that allows users to upload historical map images and georectify them by finding control points on the historical map and corresponding points on a base map. The next app, Editor, lets users load the georectified historical maps as a background and trace their geographic features (e.g., building footprints, roads, etc.). The traced data is stored in OpenStreetMap (OSM) vector format, then converted to vector tiles and served from the Server app, a vector tile server. Finally, our map renderer, Kartta, visualizes the spatiotemporal vector tiles, allowing users to navigate space and time on historical maps. These tools were built on top of numerous open source resources, including OpenStreetMap, and we intend for our tools and data to be completely open source as well.
Warper and Editor work together to let users upload a map, anchor it to a base map using control points, and trace geographic features like building footprints and roads.
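To make the georectification step concrete, the sketch below fits a simple affine transform to control-point pairs by least squares. It is illustrative only: the control points are made up, and Warper's actual warping can be more general than an affine fit.

```python
import numpy as np

# Hypothetical control points: (col, row) pixels on the scanned map, and the
# matching (lon, lat) coordinates a user picked on the base map.
pixels = np.array([[120, 340], [980, 310], [150, 1020], [940, 990]], float)
world = np.array([[-74.010, 40.715], [-73.990, 40.716],
                  [-74.009, 40.700], [-73.991, 40.701]], float)

# Solve world ~= [col, row, 1] @ A for a 3x2 affine transform in the
# least-squares sense; three or more non-collinear control points are needed.
P = np.hstack([pixels, np.ones((len(pixels), 1))])   # N x 3
A, *_ = np.linalg.lstsq(P, world, rcond=None)        # 3 x 2

def georectify(col, row):
    """Map a pixel on the scanned historical map to (lon, lat)."""
    return np.array([col, row, 1.0]) @ A

print(georectify(500, 600))
```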
3D Experience
The 3D Models module aims to reconstruct the detailed 3D structures of historical buildings from the associated images and map data, organize these 3D models in a single repository, and render them on the historical maps with a time dimension.
In many cases, there is only one historical image available for a building, which makes the 3D reconstruction an extremely challenging problem. To tackle this challenge, we developed a coarse-to-fine reconstruction-by-recognition algorithm.
High-level overview of rǝ’s 3D reconstruction pipeline, which takes annotated images and maps and prepares them for 3D rendering.
Starting with building footprints on maps and façade regions in historical images (both annotated via crowdsourcing or detected by automatic algorithms), the footprint of an input building is extruded upward to generate its coarse 3D structure. The height of this extrusion is set using the number of floors recorded in the corresponding metadata in the maps database.
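A minimal sketch of this coarse step is shown below: it extrudes a footprint polygon into a prism mesh. The 3-meter storey height is an assumption for illustration; the actual pipeline derives the extrusion height from the floor count stored in the maps database.

```python
FLOOR_HEIGHT_M = 3.0  # assumed typical storey height, for illustration only

def extrude_footprint(footprint, num_floors):
    """footprint: list of (x, y) vertices in counter-clockwise order (meters).
    Returns (vertices, faces) of a simple prism; roof and floor caps omitted."""
    h = num_floors * FLOOR_HEIGHT_M
    n = len(footprint)
    bottom = [(x, y, 0.0) for x, y in footprint]
    top = [(x, y, h) for x, y in footprint]
    vertices = bottom + top
    faces = []
    for i in range(n):                      # one quad wall per footprint edge
        j = (i + 1) % n
        faces.append((i, j, n + j, n + i))  # indices into `vertices`
    return vertices, faces

verts, faces = extrude_footprint([(0, 0), (10, 0), (10, 6), (0, 6)], num_floors=4)
print(len(verts), "vertices,", len(faces), "wall quads")  # 8 vertices, 4 wall quads
```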
In parallel, instead of directly inferring the detailed 3D structure of each façade as a single entity, the 3D reconstruction pipeline recognizes all of its constituent components (e.g., windows, entries, stairs, etc.) and reconstructs their 3D structures separately based on their categories. These detailed structures are then merged with the coarse one to form the final 3D mesh, which is stored in a 3D repository, ready for rendering.
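Continuing the sketch above, merging component meshes with the coarse prism reduces to concatenating vertex lists and offsetting face indices. The helper below is a hypothetical illustration of that bookkeeping, not the pipeline's actual merge step.

```python
def merge_meshes(meshes):
    """meshes: iterable of (vertices, faces) pairs; returns one combined mesh."""
    all_verts, all_faces = [], []
    for verts, faces in meshes:
        offset = len(all_verts)  # shift face indices past previously added vertices
        all_verts.extend(verts)
        all_faces.extend(tuple(i + offset for i in face) for face in faces)
    return all_verts, all_faces

# Two toy triangles standing in for the coarse prism and a window component.
coarse = ([(0, 0, 0), (1, 0, 0), (0, 1, 0)], [(0, 1, 2)])
window = ([(0, 0, 1), (1, 0, 1), (0, 1, 1)], [(0, 1, 2)])
verts, faces = merge_meshes([coarse, window])
print(len(verts), faces)  # 6 [(0, 1, 2), (3, 4, 5)]
```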
The key technology powering this feature is a set of state-of-the-art deep learning models:
- Faster region-based convolutional neural networks (Faster R-CNN) were trained on the façade component annotations for each target semantic class (e.g., windows, entries, stairs, etc.) and are used to localize bounding-box-level instances in historical images (a minimal detection sketch follows this list).
- DeepLab, a semantic segmentation model, was trained to provide pixel-level labels for each semantic class.
- A specially designed neural network was trained to enforce high-level regularities within each semantic class, ensuring, for example, that the windows generated on a façade are equally spaced and consistent in shape. It also enforces consistency across semantic classes, e.g., placing stairs at reasonable positions and giving them dimensions consistent with the associated entryways.
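As an illustration of the first bullet, the sketch below configures a torchvision Faster R-CNN with a predictor head sized for façade components and runs it on a dummy image. The class list is hypothetical, and this is the standard torchvision fine-tuning recipe rather than rǝ's actual training code.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Hypothetical façade component classes; index 0 is background by convention.
CLASSES = ["background", "window", "entry", "stair"]

# Start from a COCO-pretrained Faster R-CNN (downloads weights) and swap in a
# box predictor sized for the façade classes, then fine-tune on annotations.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, len(CLASSES))

model.eval()
with torch.no_grad():
    image = torch.rand(3, 800, 600)  # stand-in for a historical photograph
    (pred,) = model([image])         # list of images in, list of dicts out

# Keep confident detections; each box is (x1, y1, x2, y2) in pixels.
keep = pred["scores"] > 0.7
for box, label in zip(pred["boxes"][keep], pred["labels"][keep]):
    print(CLASSES[int(label)], box.tolist())
```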
Key Results
Street level view of 3D-reconstructed Chelsea, Manhattan
Conclusion
With rǝ, we have developed tools that facilitate crowdsourcing to tackle the main challenge of insufficient historical data when recreating virtual cities. The 3D experience is still a work in progress, and we aim to improve it with future updates. We hope rǝ acts as a nexus for an active community of enthusiasts and casual users that not only utilizes our historical datasets and open source code, but actively contributes to both.
Acknowledgements
This effort has been successful thanks to the hard work of many people, including, but not limited to the following (in alphabetical order of last name): Yale Cong, Feng Han, Amol Kapoor, Raimondas Kiveris, Brandon Mayer, Mark Phillips, Sasan Tavakkol, and Tim Waters (Waters Geospatial Ltd).