The GIS 'lightbulb'
We’re all familiar with GIS tools used for visualizing data on a two-dimensional map. Whether it’s thematic overlays that change color based on some variable (Red State/Blue State), or clustered points of density shown as a heat map, you get the basic idea. We can overlay a representation of data on the world and, from high up, see a trend, but also zoom in to see the details. And unlike a bar chart showing the same information, we connect with maps because they merge abstract data with the world we live in. Maps are a great way to tell a story.
Three-dimensional maps like Google Earth strike an even more personal chord. We fly above them or go to the street to look around. We feel like we can touch these objects. We’ve all done virtual tourism this way, and if we’re lucky, we even go to places we found solely by map. We look for our homes and see if we can get a rooftop view of our neighbor’s backyard or the building across the street, and so on.
Maps are unique in being a virtual model of something that’s simultaneously immense and infinitely detailed. Put simply, you zoom in. You zoom out. To make them manageable, when you zoom out, the tools filter out information like streets, because from miles up the detail doesn’t make sense and just gets in the way. And things being in the way obscure information we might want to overlay to tell a story. Our minds work this way too. We filter out information from all our senses that we don’t need at any given moment. It helps what we need to see stand out.
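This zoom-based filtering can be sketched in a few lines. This is a toy illustration, not how any real GIS engine is implemented; the feature names, `min_zoom` values, and the idea of a single visibility threshold per feature are all assumptions for the sake of the example:

```python
# Each map feature declares the minimum zoom level at which it is
# worth drawing. Higher zoom numbers mean closer in.
FEATURES = [
    {"name": "Interstate 95", "kind": "highway",  "min_zoom": 5},
    {"name": "Main St",       "kind": "street",   "min_zoom": 13},
    {"name": "123 Main St",   "kind": "building", "min_zoom": 16},
]

def visible_features(zoom):
    """Return only the features detailed enough to show at this zoom."""
    return [f["name"] for f in FEATURES if zoom >= f["min_zoom"]]

print(visible_features(6))   # from miles up, only the highway survives
print(visible_features(17))  # street level: everything is drawn
```

Real map renderers do something conceptually similar (vector tiles carry per-zoom layers), but with far more nuance: simplification, label collision, and so on.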
Picture looking at a crowd of people. Up high you just see a crowd and maybe you can estimate the size of it. Closer in you see faces and individual people but you don’t think much about what they’re wearing. Closer still and you are looking at a few individuals and may be thinking about their clothes, age, and what they’re doing. And then finally, you see a unique person you might interact with.
Theoretically, you could go closer still and switch to an anatomical view to see inside one’s body, but at this level of detail, there’s a lot of information you need to filter out to see anything. You may have played with one of these interactive body models: if you want to see the heart, you have a lot of bones and muscles to move out of the way.
We don’t all get to zoom in to each other’s bodies. And we can’t zoom out and see the whole universe either. Economies of scale, technology limits, and practicality define which zoom boundaries are useful, as well as the number of steps between them. When you zoom into a human body in some human-atlas software, you’re looking at a model of someone who donated their body to science to be painstakingly imaged. If you wanted it to be you, you’d have to go for an expensive full-body MRI. The data could then be merged into a spatial view, and if everyone did it and someone bothered to pull it all together, you’d be able to zoom into people. But it’s not worth the effort or expense. It’s a ridiculous idea.
But as technology improves, it becomes easier to gather and store large amounts of data, and the uses for that data dictate when it becomes economically feasible to build ways to look closer, or further out. Expanding the available resolution when we zoom around is a matter of time and opportunity.
The popular GIS tools focus on the most manageable sweet spot. They don’t try to map the galaxy and they don’t zoom to the minute details. They’re practical. And extreme zooms that go past the boundaries are often handed off to different tools, such as switching to a street photo when you zoom past the limit of a satellite image, or clicking on a property and seeing its restaurant reviews. Like our world, our attention and vision work best at certain zoom levels too. Too far away and things blend. Too close and you need assistance – either a magnifier or a microscope, or simply a different way to represent the data.
But conceptually maps, especially 3D maps, could be much more useful at both macro and micro resolutions than we currently see them. The question is whether we have data to add and enough opportunity to make gathering it worthwhile. Fortunately, it looks like there will be plenty of both thanks to the demand to operate commercial Real Estate more efficiently using IoT.
The Internet of Things is simply the interconnection of small computing devices that can be embedded in everyday objects: sensors, switches, trackers, and so on that monitor basically anything and have ways to share and receive data, whether to work in tandem or to be collected in some large back end. There is a multitude of opportunities for IoT in Commercial Real Estate, from electronic locks and security to fire safety, more efficient elevators, and so on. My favorite is an IoT toilet-paper dispenser that orders more toilet paper when it spins faster and is almost empty. (That’s not real, I just made it up, but I might try to build it for fun.)
And of course, once you can collect this data regularly, you can start to use it to optimize how your property runs. You could have your AC better match your building’s usage patterns by combining data from the building entry system or infrared sensors with your very expensive-to-run cooling system. Maybe you add in weather forecasts and municipal data feeds so your building is aware of what’s going on around it. After running this for a while, you should have enough data to predict busy days in advance, which is helpful because cooling a building takes time and needs to start before workers show up.
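The core of that prediction can be surprisingly simple. Here is a minimal sketch, with entirely made-up occupancy numbers and thresholds, of deciding when to start the chillers based on how busy each weekday has historically been. A real system would fold in weather and far more history; this only shows the shape of the idea:

```python
from collections import defaultdict
from datetime import date

# Toy history from a hypothetical entry system: (date, people entering
# before 9am). All figures are invented for illustration.
history = [
    (date(2021, 3, 1), 480),   # Monday
    (date(2021, 3, 2), 510),   # Tuesday
    (date(2021, 3, 5), 120),   # Friday
    (date(2021, 3, 8), 470),   # Monday
    (date(2021, 3, 12), 90),   # Friday
]

def avg_by_weekday(records):
    """Average morning entries for each weekday (0=Monday)."""
    buckets = defaultdict(list)
    for day, count in records:
        buckets[day.weekday()].append(count)
    return {wd: sum(c) / len(c) for wd, c in buckets.items()}

def cooling_start_hour(weekday, averages, busy_threshold=300):
    """Start cooling at 5am on historically busy days, else 7am."""
    return 5 if averages.get(weekday, 0) >= busy_threshold else 7

averages = avg_by_weekday(history)
print(cooling_start_hour(0, averages))  # Monday is busy: starts at 5
print(cooling_start_hour(4, averages))  # Friday is quiet: starts at 7
```

The point isn’t the algorithm – a production system would likely use a proper forecasting model – but that the sensor data, once collected, makes this kind of decision automatable.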
But what isn’t necessarily obvious, is the opportunity that exists in dealing with the interdependence between properties. A city or community is like an organism where the level of connectedness is a huge source of benefits as well as an identifier of inefficiencies. What opportunity brings that together?
Even within the construction of a single property, change requests are incredibly costly, and many vendors are working on tools to track these dependencies in their virtual architectural models and software. An architect and the corresponding engineering and construction firms can visualize in detail every aspect of a building before it’s even built and change it when changes are cheapest. This leads to a greater ability to detect, adjust, and resolve issues faster and more cheaply. But the fidelity of their model ends at the boundary of the property.
According to a NY Magazine article, one of the biggest contributors to the high price tag of the 2nd Avenue Subway project in NYC, which cost $4.5B to add 1.5 miles of new service to the Upper East Side, was how inefficiently vendors, owners, utilities, and the city shared information. If someone found a pipe or wire where it wasn’t expected, figuring out who could deal with it led to frequent work stoppages. And vendors, knowing this was inevitable, factored it into their estimates. The article went on to point out that NYC has the highest construction cost per mile of any subway in the world.
What if a fire alarm went off – how would one coordinate notifying those who might be affected in neighboring buildings? Could the fire department immediately call up detailed plans of not just the one building but also the neighboring buildings and nearby infrastructure? How could we better prepare for or avoid disasters? The upside of coordinating activities, risks, and services of all types between properties in a community is incredible and could lead to safer and more efficient real estate.
So who might solve this bigger goal of coordinating our real estate data? Mostly, the companies who are already household names for their cloud services. They see the potential of IoT to store and process ever greater amounts of information as future revenue. And many of the best uses for IoT apply to Real Estate.
The big cloud vendors want to be the backbone and store every point of data that can be collected, host the operational tools to run them, and provide analysis services to find insights in all this data. Some go further and want to also be your pantry. But these services require not only unambiguous knowledge of where your properties are but the ability to track objects inside them in 3 dimensions.
Something that might make the lightbulb go off for you is to picture it. Literally. Some of these companies are working to build Digital Twins of properties and cities. At the simplest level, it’s a visual replica of a city. Not just one building but many, even all, as well as the connecting roads, parks, and infrastructure.
The visualizations combine rendering concepts architects have nearly perfected, with techniques Google pioneered to show 3D maps over the web efficiently, and then throw in optimized frameworks used in massive online games to render imaginary worlds for millions of users. Together these can create efficient and scalable visual models of cities. You can move around the city at a bird’s eye view, but you could also zoom into the internals to see inside. But they’re not just visual - just as a GIS system is more than maps.
A proper Digital Twin would also be an information hub. It would have big pipes to accept data via feeds from sensor or operating data you’d want to track, with each sensor’s location providing the commonality. And like other maps that show complicated data, overlaying data on to something resembling the real world makes the data instantly relatable. And that makes it easier for us to understand and optimize our world.
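To make the “location as the common key” idea concrete, here is a toy sketch of how heterogeneous sensor feeds might share one spatial index inside a twin. The class names, fields, and readings are all illustrative assumptions, not any vendor’s actual data model:

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One sensor reading, tagged with where it lives in the twin."""
    sensor_id: str
    kind: str        # e.g. "smoke", "temperature"
    building: str
    floor: int
    xyz: tuple       # position within the floor, in meters
    value: float

# Feeds of different kinds from different buildings land in one stream.
readings = [
    Reading("s-101", "smoke",       "200 Main St", 3, (12.0, 4.5, 2.8), 0.02),
    Reading("t-204", "temperature", "200 Main St", 3, (6.0, 9.0, 1.5), 22.4),
    Reading("s-330", "smoke",       "202 Main St", 1, (2.0, 2.0, 2.8), 0.01),
]

def readings_on_floor(feed, building, floor):
    """Everything the twin would overlay on one floor, across all feeds."""
    return [r for r in feed if r.building == building and r.floor == floor]

for r in readings_on_floor(readings, "200 Main St", 3):
    print(r.sensor_id, r.kind, r.value)
```

Because every reading carries its coordinates, a renderer can drop any feed onto the same 3D model, which is exactly what makes the overlaid data instantly relatable.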
Imagine the fire department looking at all the smoke sensors on a city block. That’s a lot of data, but you can easily picture it. Or imagine a city planning commission weighing the impact of a large change. Imagine how much easier it would be to create a mile of subway or provide public Wi-Fi. What if you could zoom into any floor, room, duct, or conduit and monitor devices and usage without being there? And as a tenant, wouldn’t you want to know how well you laid out your space by watching traffic patterns and usage over time?
It’s not only visual tools. Like other complex big data applications, AI and machine learning would be large parts of the architecture. How else can you make sense of so much data, filter out the noise, and focus on what requires our attention most?
It’s kind of the opposite of the Cyberspace William Gibson proposed – a virtual world we’d escape to. Digital Twins offer the ability to optimize the real world.