Company Blog


Weekend Projects: Aerial StreetView

Last week we looked at testing stabilized video from an initial flight on a new remote control UAV platform. Today we put together a quick test of imagery from the same platform, displayed with the Google Custom StreetView API.

Custom StreetView from above the SICS offices in the Industrial Park in Florence, Alabama.

Custom StreetView from near the Tennessee River in Florence, Alabama.

HTTP Requests & Image Sprites

When we redesigned our main website last month, we took the opportunity to add a few optimizations to the site's design to help it load more quickly for our visitors. One of these optimizations involves a technique known as CSS Image Sprites.

Spending hours just to shave a few dozen milliseconds off the load time of a website might seem like a questionable use of time, but a few milliseconds here and a few there, repeated over and over again, add up quickly.

In a nutshell, when we use CSS Image Sprites we take multiple small images and combine them into a single larger image.

Why is it better to load a single larger image rather than many small images? 

HTTP Requests Are Slow
When you visit a website, your computer first sends a request to the server hosting the website, asking for the main HTML page. Once that page is downloaded, your computer begins requesting all of the extra files the page needs to format itself and display properly. These are JavaScript files, CSS files, and images, lots of images. For each of these files, your computer identifies the file's web address and sends an individual request over the internet to the web server, asking for that file. When the server receives the request, it sends the file back to your computer for download and display in the web page.

All of these requests and responses take time. How much time? 

It depends on your computer, the distance between you and the web server, the network pathway through all the switches and routers of the internet, and the web server itself. But we can do some quick calculations to get a rough idea. 

Here is a simple "ping" to a few websites, showing the initial request and three responses from each:

PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=43 time=18.254 ms
64 bytes from icmp_seq=1 ttl=43 time=20.264 ms
64 bytes from icmp_seq=2 ttl=43 time=24.005 ms

PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=49 time=108.740 ms
64 bytes from icmp_seq=1 ttl=49 time=94.720 ms
64 bytes from icmp_seq=2 ttl=49 time=95.774 ms

PING ( 56 data bytes
64 bytes from icmp_seq=0 ttl=47 time=189.442 ms
64 bytes from icmp_seq=1 ttl=47 time=188.228 ms
64 bytes from icmp_seq=2 ttl=47 time=191.969 ms

From this we can see that, on my internet connection here, the simplest request can take anywhere from about 20 milliseconds to nearly 200 milliseconds. Again, this depends on many factors, from geographic proximity to time of day to your internet connection and computer. But this is a basic idea of how long it can take the simplest bits of information to navigate the internet.
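A quick way to summarize samples like these is to parse out the time= values and average them. This is a throwaway sketch (the function name is mine), fed the first set of responses above:

```python
import re

def mean_rtt_ms(ping_output):
    """Average the 'time=NN.NN ms' values found in ping output."""
    times = [float(t) for t in re.findall(r"time=([\d.]+) ms", ping_output)]
    return sum(times) / len(times)

sample = """64 bytes from icmp_seq=0 ttl=43 time=18.254 ms
64 bytes from icmp_seq=1 ttl=43 time=20.264 ms
64 bytes from icmp_seq=2 ttl=43 time=24.005 ms"""

print(round(mean_rtt_ms(sample), 1))  # 20.8
```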

When you consider that an HTTP request (carried over TCP/IP) involves sending multiple packets of data back and forth, plus the load on the web server and other complexities, a single request-response can take half a second or more under adverse conditions, even for the smallest of files. And since many websites load hundreds of these files, the time adds up quickly even when they're loaded asynchronously.

So, again, HTTP requests are slow: there is a built-in time overhead associated with each request and response. If we can minimize the number of requests a website demands, we can significantly increase the responsiveness and decrease the load time of the page.

CSS Image Sprites
This is an example of one of the combined images, used for image sprites in the partners section of our site. Traditionally, we would use five individual HTTP requests for these images, paying the overhead of a request and response five times. With this method, we combine all the images into one file, make a single request, and use CSS styles to position the image so that the right portion shows in the right place.
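As a sketch of what that positioning looks like in practice, the snippet below generates the kind of CSS rules involved for a hypothetical horizontal sprite sheet. The file name, class names, and icon dimensions are made up for illustration, not our actual assets:

```python
def sprite_rules(sheet, names, width, height):
    """Emit one CSS rule per icon, shifting background-position
    left by one icon width per slot in the sprite sheet."""
    rules = []
    for i, name in enumerate(names):
        rules.append(
            ".%s { background: url('%s') no-repeat -%dpx 0; "
            "width: %dpx; height: %dpx; }" % (name, sheet, i * width, width, height)
        )
    return "\n".join(rules)

# Hypothetical sheet with three 120x60 partner logos side by side
print(sprite_rules("partners.png", ["partner-a", "partner-b", "partner-c"], 120, 60))
```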

We use the same technique for our icons, our clients, our partner logos, and our project highlights.

All together, we've taken 31 HTTP requests and reduced them to 4 with just these examples. The count could be reduced further by combining the similarly sized icons and client logos, and perhaps the partners and project highlights. But at these small numbers we hit diminishing returns on optimization.
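As a rough back-of-envelope check on what that buys us, assuming a (hypothetical but plausible, given the ping figures above) 100 ms of overhead per request and fully serial loading:

```python
# Worst-case serial savings from combining sprites.
# The 100 ms per-request overhead is an assumed round figure.
overhead_ms = 100
requests_before, requests_after = 31, 4
saved_ms = (requests_before - requests_after) * overhead_ms
print(saved_ms)  # 2700 ms shaved off in this worst case
```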

Visitor experience is the most significant improvement, but when we're building websites we also have to think about server load and network traffic planning. Reducing the total number of HTTP requests per visit is one of the most significant optimizations for server and network load as well.

Accuracy Standards and Statistical Tests

As part of a recent project, I had to do some research on accuracy standards and methods of assessing accuracy. The following paragraphs are the results of that research.

Horizontal Accuracy Tests

There are several horizontal accuracy tests, but the most prominent are Circular Error of 90% (CE90), Root Mean Square Error (RMSE), and 1 Sigma. The following describes each of these methods.


Circular Error of 90% (CE90) is commonly used for quoting and validating geodetic image registration accuracy. A CE90 value is the minimum radius of the horizontal circle that, centered on each photo-identifiable Ground Control Point (GCP), contains 90% of the respective twin counterparts acquired in an independent geodetic survey. It can be stated as the radial error which 90% of all errors in a circular distribution will not exceed. Circular error may be defined as the circle radius, R, that satisfies the conditions of the equation below, where C.L. is the desired confidence level (Ross, 2004).


C.L. = 1 - e^(-R² / 2σc²)

Equation 1 - CE90 (Greenwalt and Shultz, 1962)
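One simple way to estimate CE90 empirically is to take the radius that contains 90% of the measured radial errors. The sketch below uses the nearest-rank percentile and made-up offsets; it illustrates the idea, not Greenwalt and Shultz's parametric method:

```python
import math

def ce90(offsets):
    """Empirical CE90: the smallest radius containing 90% of the
    radial errors. `offsets` is a list of (dx, dy) differences
    between surveyed GCPs and their image-identified counterparts."""
    radii = sorted(math.hypot(dx, dy) for dx, dy in offsets)
    # index of the 90th-percentile sample (simple nearest-rank method)
    k = max(0, math.ceil(0.90 * len(radii)) - 1)
    return radii[k]

# Ten hypothetical offsets in metres
sample = [(0.2, 0.1), (0.5, 0.4), (1.0, 0.2), (0.3, 0.3), (0.8, 0.6),
          (0.1, 0.1), (0.4, 0.9), (0.7, 0.2), (0.2, 0.6), (1.2, 0.9)]
print(round(ce90(sample), 3))  # 1.02
```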


RMSE is commonly used for quoting and validating geodetic image registration accuracy. An RMSE value is a single summary statistic describing the square root of the mean squared horizontal distance between all photo-identifiable GCPs and their respective twin counterparts acquired in an independent geodetic survey.


RMSE is the square root of the average of the set of squared differences between dataset coordinate values and coordinate values from an independent source of higher accuracy for identical points. Accuracy is reported in ground distances at the 95% confidence level. Accuracy reported at the 95% confidence level means that 95% of the positions in the dataset will have an error with respect to true ground position that is equal to or smaller than the reported accuracy value. The reported accuracy value reflects all uncertainties, including those introduced by geodetic control coordinates, compilation, and final computation of ground coordinate values in the product (FGDC, 1998). 


RMSE_x = sqrt( Σ (x_data,i - x_check,i)² / n )

Equation 2 - RMSE 1 Dimensional (Ross, 2004)

RMSE_r = sqrt( RMSE_x² + RMSE_y² )

Equation 3 - RMSE 2 Dimensional (Ross, 2004)
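In code, the 1D and 2D RMSE calculations look roughly like this (a minimal sketch; the function and variable names are mine):

```python
import math

def rmse_1d(data, check):
    """RMSE along one axis: square root of the mean squared
    difference between dataset and check coordinates."""
    n = len(data)
    return math.sqrt(sum((d - c) ** 2 for d, c in zip(data, check)) / n)

def rmse_2d(xs, ys, check_xs, check_ys):
    """Horizontal (radial) RMSE combined from the two axis RMSEs."""
    return math.sqrt(rmse_1d(xs, check_xs) ** 2 + rmse_1d(ys, check_ys) ** 2)
```

For example, with every x off by 3 and every y off by 4, the radial RMSE comes out to 5.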


1-Sigma (Standard Deviation Error) is also used for quoting and validating geodetic image registration accuracy. 1-Sigma is the minimum radius of the horizontal circle that, when centered on each of the photo-identifiable GCPs, would contain one standard deviation (i.e., ~68%) of the population of all available twin counterparts acquired in an independent geodetic survey. This holds provided that the GCP population is sufficiently large for the errors to be normally distributed.


Accuracy Standards

For the use of geographic data to be consistent and dependable, there must be standards. Standards exist to ensure that the experience of using geographic data is the same regardless of location, and to allow different datasets to be used together more effectively. They must provide a foundation against which expectations can be measured. The following standards are the most prevalently referenced for aerial photography and photogrammetry today.


The National Map Accuracy Standards were published in 1941 by the U.S. Bureau of the Budget in an attempt to provide a foundation for maps being generated throughout the U.S. The document was surprisingly short and has been revised only twice since, in 1943 and 1947. The portions of the document relevant to this assessment are as follows (U.S. Bureau of the Budget, 1947):

“Horizontal accuracy. For maps on publication scales larger than 1:20,000, not more than 10 percent of the points tested shall be in error by more than 1/30 inch, measured on the publication scale; for maps on publication scales of 1:20,000 or smaller, 1/50 inch. These limits of accuracy shall apply in all cases to positions of well-defined points only. Well-defined points are those that are easily visible or recoverable on the ground, such as the following: monuments or markers, such as bench marks, property boundary monuments; intersections of roads, railroads, etc.; corners of large buildings or structures (or center points of small buildings); etc. In general what is well defined will be determined by what is plottable on the scale of the map within 1/100 inch. Thus while the intersection of two road or property lines meeting at right angles would come within a sensible interpretation, identification of the intersection of such lines meeting at an acute angle would obviously not be practicable within 1/100 inch. Similarly, features not identifiable upon the ground within close limits are not to be considered as test points within the limits quoted, even though their positions may be scaled closely upon the map. In this class would come timber lines, soil boundaries, etc.”

“The accuracy of any map may be tested by comparing the positions of points whose locations or elevations are shown upon it with corresponding positions as determined by surveys of a higher accuracy. Tests shall be made by the producing agency, which shall also determine which of its maps are to be tested, and the extent of the testing.”
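The quoted thresholds translate into a simple check. The sketch below encodes the 1/30-inch and 1/50-inch limits and the 10-percent rule; the function names are mine, and measured errors are assumed to already be expressed in feet on the ground:

```python
def nmas_ground_tolerance_ft(scale_denominator):
    """Horizontal tolerance on the ground, in feet, per the 1947
    NMAS: 1/30 inch at publication scales larger than 1:20,000,
    1/50 inch at 1:20,000 and smaller."""
    map_tolerance_in = 1 / 30 if scale_denominator < 20000 else 1 / 50
    return map_tolerance_in * scale_denominator / 12  # inches -> feet

def passes_nmas(errors_ft, scale_denominator):
    """No more than 10 percent of well-defined test points may
    exceed the tolerance."""
    tol = nmas_ground_tolerance_ft(scale_denominator)
    worse = sum(1 for e in errors_ft if e > tol)
    return worse <= 0.10 * len(errors_ft)

print(nmas_ground_tolerance_ft(24000))  # 40.0 ft for a 1:24,000 map
```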


The American Society for Photogrammetry and Remote Sensing (ASPRS) created these standards in July of 1990 in a report titled “ASPRS Accuracy Standards for Large-Scale Maps”. These standards were a response to the need for scale-independent accuracy standards.

The ASPRS standards explicitly used the statistical term Root Mean Square Error (RMSE) and described a method of testing and reporting that related this more modern statistical language to map classes and contour intervals (ASPRS, 1990).

Table 2 - ASPRS Standards for Maps in Feet (ASPRS, 1990)

Map Scale    Class I    Class II    Class III

The National Standard for Spatial Data Accuracy (NSSDA) implements a statistic and testing methodology for positional accuracy of maps and geospatial data derived from sources such as aerial photographs, satellite imagery, or maps. Accuracy is reported in ground units. The testing methodology is comparison of data set coordinate values with coordinate values from a higher accuracy source for points that represent features readily visible or recoverable from the ground. While this standard evaluates positional accuracy at points, it applies to geospatial data sets that contain point, vector, or raster spatial objects. Data content standards, such as FGDC Standards for Digital Orthoimagery and Digital Elevation Data, will adapt the NSSDA for particular spatial object representations.

The standard ensures flexibility and inclusiveness by omitting accuracy metrics, or threshold values, that data must achieve. However, agencies are encouraged to establish "pass-fail" criteria for their product standards, applications, and contracting purposes. Ultimately, users must identify acceptable accuracies for their applications (FGDC, 2008).

References

ASPRS. 1990. “ASPRS Accuracy Standards for Large-Scale Maps”. American Society for Photogrammetry and Remote Sensing.

Congalton, R.G., and K. Green. 2008. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, Second Edition. Mapping Science. Taylor & Francis.

Falkner, Edgar, and Dennis Morgan. 2002. Aerial Mapping: Methods and Applications, Second Edition.

FGDC. 1998. “Geospatial Positioning Accuracy Standards Part 3: National Standard for Spatial Data Accuracy”. Federal Geographic Data Committee.

FGDC. 2008. “Geospatial Positioning Accuracy Standards, Part 3: National Standard for Spatial Data Accuracy — Federal Geographic Data Committee.” August 19.


Greenwalt, Clyde R, and Melvin E Shultz. 1962. Principles of Error Theory and Cartographic Applications.

Schuckman, Karen, and Mike Renslow. 2014. “Accuracy Standards”. The Pennsylvania State University.

Ross, Kenton. 2004. “Geopositional Statistical Methods”. Presented at the High Spatial Resolution Commercial Imagery Workshop, November 8, Reston, Virginia.

U.S. Bureau of the Budget. 1947. “United States National Map Accuracy Standards.” USGS National Geospatial Data Standards.


Google Earth Enterprise Server: Exporting a Structured Layers List

In the Virtual Alabama program for the State of Alabama, the majority of users employ the Google Earth Enterprise Desktop Client to connect to and use the Virtual Alabama system. But a number of users required access directly via web browser, either because the desktop client was not locally installed or to enable access to Virtual Alabama at remote locations.

So we wrote a JavaScript web viewer for Virtual Alabama using the Google Earth Plug-in, rendering the entire Virtual Alabama data library in a web browser with no desktop client installation required.

The challenge, then, became how to maintain a duplicate copy of the layers database for display in our JavaScript viewer, for searching, and for integration with layers from other sources. While the geographic data is stored on the enterprise servers and rendered by the plug-in, we needed a copy of the layer tree's structure to import into our databases, so we could build a tree menu in the JavaScript viewer similar to what is displayed in Google Earth, allowing users to find layers and turn them on and off. A further complication: this duplicate layers structure had to be updated frequently, whenever new data was published or reorganized within the Virtual Alabama system.

The key was in files named dbroot.v5.postamble.DEFAULT on the enterprise servers, newly created with each database publish. 

This is an example of one row from one of those files:

<etNestedLayer> [More Imagery]
		118 	"0b14797e-a3b3-11e2-9e11-b8ff64cb455c" 	"More Imagery" 	"" 	true 	true 	true 	false 	true 	"All Layers" 	24 	"icons/More_Imagery.png" 	"" 	"" 	"" 	"" 	true 	"" 	"" 	"" 	"" 	"" 	"" 	-1 	-1 

Contained within is a list of all published layers. Each row includes the layer name (here, More Imagery); the Google Earth ID, just before the name, which is used to enable the layer via JavaScript and the plug-in; the parent layer name, which determines where the layer belongs in the tree's hierarchy (here, just below All Layers); and the path to the layer's icon. This provides all the information our system needs to build a complete layer tree.
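As a sketch, the quoted fields in a row like this can be pulled out with a regular expression. The positional meaning of the fields (ID, name, parent, icon) is inferred from the one sample row above, so treat the indices as assumptions:

```python
import re

def parse_layer_row(row):
    """Extract the quoted fields from one dbroot layer row.
    Field positions are inferred from the sample row:
    0 = Google Earth ID, 1 = layer name, 3 = parent, 4 = icon path."""
    fields = re.findall(r'"([^"]*)"', row)
    return {
        "id": fields[0],
        "name": fields[1],
        "parent": fields[3],
        "icon": fields[4],
    }

row = ('118\t"0b14797e-a3b3-11e2-9e11-b8ff64cb455c"\t"More Imagery"\t""\t'
       'true\ttrue\ttrue\tfalse\ttrue\t"All Layers"\t24\t"icons/More_Imagery.png"')
print(parse_layer_row(row)["parent"])  # All Layers
```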

Next, we created a service to automatically retrieve this file as needed, process the data into a list of layers, organize them hierarchically, associate the appropriate icons, enable layer searching and integration, and update our web-based JavaScript Google Earth client to reflect new or updated layers.

Now, with the press of a button, our Desktop and Web clients are automatically in sync with the current state of the Virtual Alabama Google Earth Enterprise Servers.

Google Maps Engine: Python Basics - Part 1

In case you haven't heard of it, Google has been working for a little over a year on a project called Google Maps Engine (GME). GME is a powerful cloud-based mapping system that is maturing at a nice pace. One of its best features is its accessibility via multiple APIs, one of which is Python. There are some pretty good tutorials and documentation, including an API reference with examples.

However, one thing that is a little lacking is documentation on how the OAuth2 authentication protocol is leveraged in Python.  I have to admit that I struggled some here until I was aided by my friend at Google, Sean Wohltman.  So Sean, much thanks for all your help and guidance.

Before we begin, there is one thing you must understand to make sure we are consistent in our references: the terminology.

So with this as the foundation of the discussion, we can loosely refer to these as the objects accessible through the API.

The approach for the purposes of this article is relatively simple. We will authenticate, and then we will retrieve a collection of each of the objects. This will demonstrate the core functionality available through the Python API.

To start the process of authentication, let's first refer to some of the basics in the documentation.
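As a minimal sketch of what the OAuth2 piece involves, the snippet below builds the form-encoded body for the standard refresh-token grant. The client ID and secret are placeholders (yours come from the Google API Console), and in a real script you would POST this body to Google's documented OAuth2 token endpoint:

```python
from urllib.parse import urlencode

# Placeholder credentials -- substitute the values issued for your
# own project in the Google API Console.
CLIENT_ID = "your-client-id.apps.googleusercontent.com"
CLIENT_SECRET = "your-client-secret"

def refresh_token_request_body(refresh_token):
    """Build the form-encoded body for exchanging a stored refresh
    token for a fresh access token (the OAuth2 refresh_token grant)."""
    return urlencode({
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "refresh_token": refresh_token,
        "grant_type": "refresh_token",
    })

print(refresh_token_request_body("stored-refresh-token"))
```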

Now that we have a basic understanding of the OAuth2 protocol, you can get the API downloaded and installed via the Google APIs Client Library for Python.

In Part 2 of this series we will cover the set up and use of the API in Python.

Safe Schools: Indoor Google StreetView

With the Virtual Alabama School Safety Systems (VAS3), there are multiple projects going on simultaneously. One of these is the "Indoor Google StreetView" project, which includes the creation of "walk-throughs" of rooms and hallways throughout the building.

The process begins with the collection of photos using a very specific device. This camera system allows for remote triggering via Wi-Fi and the download of captured photographs to a user's cell phone or tablet. The transferred files are automatically stitched together to produce a high-quality panoramic image. Here is a sample of the raw output from our camera system.

Source Image Post Stitching

The next step is mapping the positions where each set of images was captured. This is done using a mapping interface designed and developed by my team, which we call the Floor Plan Annotation Tool (FPAT).

Floor Plan Annotation Tool (FPAT)

In the FPAT, we are able to generate the tiles needed for ingestion into Google StreetView using the "360 View Manager" module, by selecting a panoramic image and queuing it for tiling. The tiled Google StreetView dataset is then associated with each point on the map, completing the process.

The following is an example of a final product from the process.