Company Blog

SCIENCE - TECHNOLOGY - SOLUTIONS

Weekend Projects: Aerial StreetView

Last week we looked at testing stabilized video from an initial flight on a new remote control UAV platform. Today we put together a quick test of imagery from the same platform, displayed with the Google Custom StreetView API.

Custom StreetView from above the SICS offices in Florence Alabama's Industrial Park.


Custom StreetView from near the Tennessee River in Florence, Alabama.

Google Maps Engine: Python Basics - Part 2

A few weeks ago I posted about the basics of terminology and authentication for Google Maps Engine (GME) via the Python API.  In this post we will cover installing the API, including verifying its dependencies.  This is one lesson I learned the hard way, so hopefully I can make it a little easier on you.

To begin, let's look at the API architecture and try to understand the core dependencies; this will help us debug our installation and use.  The PyDoc documentation of the API is a good place to start (http://google-api-python-client.googlecode.com/hg/docs/epy/frames.html).  As documented at the previous link, here are some of the dependencies to be aware of:

  1. Python 2.7 or higher is suggested
  2. pyOpenSSL is required
    1. Subdependencies include libffi (needed by cffi in order to install Cryptography)
  3. pyCrypto is required

Depending on your operating system, there are different ways to install the API.  For Linux and Mac, make sure you have a current installation of Setuptools or pip, then execute one of the following:

easy_install --upgrade google-api-python-client

OR

pip install --upgrade google-api-python-client

For Microsoft Windows:
  1. Download the API from here.
  2. Unzip the package to a location where you have rights
  3. Open a command prompt and navigate to the new directory
  4. Execute python setup.py install
To test your installation, open a Python prompt and execute the following:
from oauth2client.client import SignedJwtAssertionCredentials

If this returns an error, you most likely have a problem with your pyOpenSSL configuration.  Go here to start to debug problems with the pyOpenSSL installation.
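A quick way to see which of the dependencies listed above are present is to simply try importing them. This is a generic sketch (the module names in the example call are the usual import names for pyOpenSSL, pyCrypto, and the client library; adjust to your environment):

```python
def check_modules(names):
    """Return the subset of module names that fail to import."""
    missing = []
    for name in names:
        try:
            __import__(name)
        except ImportError:
            missing.append(name)
    return missing

# The usual import names for the dependencies discussed above:
# "OpenSSL" for pyOpenSSL, "Crypto" for pyCrypto, plus oauth2client itself.
print(check_modules(["OpenSSL", "Crypto", "oauth2client.client"]))
```

An empty list means everything imported cleanly; any name that comes back points you at the dependency to reinstall.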

Otherwise, you have successfully installed the API and are now ready to proceed with setting up your code.

In part 3, we will cover the actual code and the required elements to interface with GME.

Weekend Projects: Rapid Response Aerial Photography

At SICS we're always looking for new ways to get sources of rich, up-to-date information from the real world into our mapping systems as quickly as possible.

One of these developing sources of interest suggests itself with the advancement of technologies related to remote control unmanned aerial vehicles (UAVs).

Over the last several years, a growing list of companies has offered consumer-grade models, generally quadcopters such as the Parrot AR.Drone. These allow anyone with an Android or iOS device and $300 to take to the skies with a mounted camera, albeit one with low resolution, an unstable picture, and extremely limited range.

These are interesting, but the picture quality, camera instability, and limited range do not allow their use for serious aerial photography applications. Also, while hobbyists are free to fly, the FAA has so far tried to prevent their use for commercial purposes, although that is changing.

We've worked with these models and have experimented with fixed-wing UAVs for aerial photography as well.

But this morning, after an initial test flight, we took a new UAV up over the SICS offices to get a first look at what the platform offers. We'll have more details soon, but from our initial tests it's clear the stability and capabilities offered by this system are far above any of our previous platforms.

This is still an area of research, but it's exciting to watch this technology develop and imagine potential future uses in the field of mapping and GIS.

HTTP Requests & Image Sprites

When we redesigned our main website (www.sicsconsultants.com) last month, we took the opportunity to add a few optimizations to the site's design to help it load more quickly for our visitors. One of these optimizations involves a technique known as CSS Image Sprites.

Spending hours just to shave a few dozen milliseconds off a website's load time might seem a questionable use of time, but a few milliseconds here and a few there, repeated over and over again, add up quickly.

In a nutshell, when we use CSS Image Sprites we take multiple small images and combine them into a single larger image.

Why is it better to load a single larger image rather than many small images? 

HTTP Requests Are Slow
When you visit a website, your computer first sends a request to the server hosting the website, asking for the main HTML page. Once that page is downloaded, your computer begins requesting all of the extra files the page needs to format and display properly: JavaScript files, CSS files, and images, lots of images. For each of these files, your computer resolves the file's web address and then sends an individual request over the internet to the web server, asking for that file. When the server receives the request, it sends the file back to your computer for download and display in the web page.

All of these requests and responses take time. How much time? 

It depends on your computer, the distance between you and the web server, the network pathway through all the switches and routers of the internet, and the web server itself. But we can do some quick calculations to get a rough idea. 

Trying a simple "ping" to a few websites, showing the initial requests and three responses from each:

PING google.com (74.125.196.113): 56 data bytes
64 bytes from 74.125.196.113: icmp_seq=0 ttl=43 time=18.254 ms
64 bytes from 74.125.196.113: icmp_seq=1 ttl=43 time=20.264 ms
64 bytes from 74.125.196.113: icmp_seq=2 ttl=43 time=24.005 ms

PING yahoo.com (98.138.253.109): 56 data bytes
64 bytes from 98.138.253.109: icmp_seq=0 ttl=49 time=108.740 ms
64 bytes from 98.138.253.109: icmp_seq=1 ttl=49 time=94.720 ms
64 bytes from 98.138.253.109: icmp_seq=2 ttl=49 time=95.774 ms

PING news.bbc.co.uk (212.58.244.119): 56 data bytes
64 bytes from 212.58.244.119: icmp_seq=0 ttl=47 time=189.442 ms
64 bytes from 212.58.244.119: icmp_seq=1 ttl=47 time=188.228 ms
64 bytes from 212.58.244.119: icmp_seq=2 ttl=47 time=191.969 ms

From this we can see that, on my internet connection here, the simplest request can take anywhere from about 20 milliseconds to nearly 200 milliseconds. Again, this depends on many factors, from geographic proximity to time of day to your internet connection and computer. But this is a basic idea of how long it can take the simplest bits of information to navigate the internet.


When you consider that an HTTP request (running over TCP/IP) involves multiple packets of data traveling back and forth, the load on the web server, and other complexities, a single request-response can take as much as half a second or more under adverse conditions, even for the smallest of files. And since many websites load hundreds of these files, the time adds up quickly even when they're loaded asynchronously.

So, again, HTTP requests are slow. There is a built-in time overhead associated with each request and response. If we can minimize the number of requests a website demands, we can significantly increase the responsiveness and decrease the load time of the page.
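To put rough numbers on that overhead, here's a back-of-the-envelope sketch. The one-round-trip-per-request cost and the six parallel connections are simplifying assumptions for illustration, not measurements:

```python
import math

def request_overhead_ms(num_requests, rtt_ms, parallel_connections=6):
    """Rough lower bound on time spent on request round trips, assuming
    each request costs one round trip and the browser issues requests
    over a fixed number of parallel connections."""
    rounds = math.ceil(num_requests / float(parallel_connections))
    return rounds * rtt_ms

# e.g. 60 files at a 100 ms round trip is roughly a second of pure overhead
print(request_overhead_ms(60, 100))
```

Even this optimistic model makes the point: cutting the request count is one of the cheapest ways to cut load time.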

CSS Image Sprites
This is an example of one of the combined images, used as a sprite sheet for the partners section of our site. Traditionally, we would make 5 individual HTTP requests for these images, paying the request-and-response overhead 5 times. With this method, we combine all the images into one file, make the request once, and use CSS styles to position the image so that the right portion shows in the right place.
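As a sketch of the technique (the file name, dimensions, and class names here are hypothetical, not our actual stylesheet):

```css
/* Hypothetical sprite sheet: partners.png holds five 100x60 logos in a row. */
.partner-logo {
  width: 100px;
  height: 60px;
  background-image: url("images/partners.png");
  background-repeat: no-repeat;
}
/* Shift the sprite left so the desired logo shows through the 100px window. */
.partner-logo.logo-1 { background-position: 0 0; }
.partner-logo.logo-2 { background-position: -100px 0; }
.partner-logo.logo-3 { background-position: -200px 0; }
```

Each element is sized to exactly one logo; the negative background-position slides the combined image behind it so only the right slice is visible.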

We use the same technique for our icons, our clients, our partner logos, and our project highlights.

All together, we've taken 31 HTTP requests and reduced them to 4 with just these examples. The count could be reduced further by combining the similarly sized icons and client logos, and perhaps the partners and project highlights, but at these small numbers we hit diminishing returns on optimization.

Visitor experience is the most significant improvement. But when we're building websites, we also have to think about server load and planning to manage network traffic. Reducing the total number of HTTP requests per visit is one of the most significant optimizations to server and network load as well.

Accuracy Standards and Statistical Tests

As part of a recent project, I had to do some research on accuracy, standards, and methods of assessing accuracy.  The following paragraphs are the results of that research.

Horizontal Accuracy Tests

There are several horizontal accuracy tests, but the most prominent are the Circular Error of 90%, the Root Mean Square Error, and 1 Sigma.  The following describes each of these methods.

CE90

Circular Error of 90% (CE90) is commonly used for quoting and validating geodetic image registration accuracy.  A CE90 value is the minimum diameter of the horizontal circle that can be centered on all photo-identifiable Ground Control Points (GCPs) and also contain 90% of their respective twin counterparts acquired in an independent geodetic survey.  It can be stated as the radial error which 90% of all errors in a circular distribution will not exceed.  Circular error may be defined as the circle radius, R, that satisfies the conditions of the equation below, where C.L. is the desired confidence level (Ross, 2004).

 

Equation 1 - CE90 (Greenwalt and Shultz, 1962)
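For the special case of a circular normal distribution with standard deviation $\sigma_c$, the relation between the circle radius $R$ and the confidence level is commonly written as follows. This is a standard form of the Greenwalt and Shultz result, given here for reference; the original figure may show the more general bivariate integral:

```latex
% Probability that a circular-normal error falls within radius R
\mathrm{C.L.} = 1 - e^{-R^2 / (2\sigma_c^2)}
% Solving for R at C.L. = 0.90 gives the familiar CE90 factor
\mathrm{CE90} = \sigma_c \sqrt{-2\ln(1 - 0.90)} \approx 2.146\,\sigma_c
```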



RMSE

RMSE is commonly used for quoting and validating geodetic image registration accuracy. An RMSE value is a single summary statistic describing the square root of the mean squared horizontal distance between all photo-identifiable GCPs and their respective twin counterparts acquired in an independent geodetic survey.

 

RMSE is the square root of the average of the set of squared differences between dataset coordinate values and coordinate values from an independent source of higher accuracy for identical points. Accuracy is reported in ground distances at the 95% confidence level. Accuracy reported at the 95% confidence level means that 95% of the positions in the dataset will have an error with respect to true ground position that is equal to or smaller than the reported accuracy value. The reported accuracy value reflects all uncertainties, including those introduced by geodetic control coordinates, compilation, and final computation of ground coordinate values in the product (FGDC, 1998). 

 

Equation 2 - RMSE 1 Dimensional (Ross, 2004)


Equation 3 - RMSE 2 Dimensional (Ross, 2004)
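In the FGDC NSSDA notation, the one- and two-dimensional forms are commonly written as:

```latex
% One-dimensional RMSE in x (similarly for y), over n check points
\mathrm{RMSE}_x = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(x_{\mathrm{data},i} - x_{\mathrm{check},i}\right)^2}
% Two-dimensional (horizontal, radial) RMSE
\mathrm{RMSE}_r = \sqrt{\mathrm{RMSE}_x^2 + \mathrm{RMSE}_y^2}
```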


1-Sigma

1-Sigma (Standard Deviation Error) is also used for quoting and validating geodetic image registration accuracy.  1-Sigma is the minimum diameter of the horizontal circle that, when centered on all of the photo-identifiable GCPs, would contain one standard deviation (i.e., ~68%) of the population of all available twin counterparts acquired in an independent geodetic survey, provided the GCP population is sufficiently large for the errors to be approximately normally distributed.

 

Accuracy Standards

For the use of geographic data to be consistent and dependable, there must be standards.  Standards exist to ensure that the experience of using geographic data is the same regardless of location, and to allow different datasets to be used in conjunction with one another.  They must provide a foundation against which expectations can be measured.  The following standards are the ones most commonly referenced for aerial photography and photogrammetry today.

NMAS

The National Map Accuracy Standards (NMAS) were published in 1941 by the U.S. Bureau of the Budget in an attempt to provide a foundation for maps being generated throughout the U.S.  The document was surprisingly short and has been revised only twice since then, in 1943 and in 1947.  The portions of the document relevant to this assessment are as follows (U.S. Bureau of the Budget, 1947):


“Horizontal accuracy. For maps on publication scales larger than 1:20,000, not more than 10 percent of the points tested shall be in error by more than 1/30 inch, measured on the publication scale; for maps on publication scales of 1:20,000 or smaller, 1/50 inch. These limits of accuracy shall apply in all cases to positions of well-defined points only. Well-defined points are those that are easily visible or recoverable on the ground, such as the following: monuments or markers, such as bench marks, property boundary monuments; intersections of roads, railroads, etc.; corners of large buildings or structures (or center points of small buildings); etc. In general what is well defined will be determined by what is plottable on the scale of the map within 1/100 inch. Thus while the intersection of two road or property lines meeting at right angles would come within a sensible interpretation, identification of the intersection of such lines meeting at an acute angle would obviously not be practicable within 1/100 inch. Similarly, features not identifiable upon the ground within close limits are not to be considered as test points within the limits quoted, even though their positions may be scaled closely upon the map. In this class would come timber lines, soil boundaries, etc.”

“The accuracy of any map may be tested by comparing the positions of points whose locations or elevations are shown upon it with corresponding positions as determined by surveys of a higher accuracy. Tests shall be made by the producing agency, which shall also determine which of its maps are to be tested, and the extent of the testing.”
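The quoted thresholds translate directly into ground distance at a given publication scale. As a sketch (the function name is ours; the 1/30-inch and 1/50-inch limits come from the standard quoted above):

```python
def nmas_horizontal_tolerance_ft(scale_denominator):
    """Ground-distance tolerance implied by the NMAS horizontal standard:
    1/30 inch at publication scale for scales larger than 1:20,000,
    1/50 inch for 1:20,000 and smaller."""
    if scale_denominator < 20000:
        ground_inches = scale_denominator / 30.0
    else:
        ground_inches = scale_denominator / 50.0
    return ground_inches / 12.0  # ground inches -> feet

# e.g. the familiar 40 ft tolerance for a 1:24,000 USGS quad
print(nmas_horizontal_tolerance_ft(24000))
```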

ASPRS

The American Society for Photogrammetry and Remote Sensing (ASPRS) created these standards in July 1990 in a report titled “ASPRS Accuracy Standards for Large-Scale Maps”.  These standards were a response to the need for scale-independent accuracy standards.

The ASPRS standards explicitly used the statistical term Root Mean Square Error (RMSE), and described a method of testing and reporting that related this more modern statistical language to map classes and contour intervals (ASPRS, 1990).

Table 2 - ASPRS Standards for Maps in Feet (ASPRS, 1990)

 

RMSE (feet)

Map Scale    Class I    Class II    Class III
1:60         0.05       0.1         0.2
1:120        0.1        0.2         0.3
1:240        0.2        0.4         0.6
1:360        0.3        0.6         0.9
1:480        0.4        0.8         1.2
1:600        0.5        1.0         1.5
1:1200       1.0        2.0         3.0
1:2400       2.0        4.0         6.0
1:4800       4.0        8.0         12.0
1:6000       5.0        10.0        15.0
1:9600       8.0        16.0        24.0
1:12000      10.0       20.0        30.0
1:20000      16.7       33.4        50.1

NSSDA

The National Standard for Spatial Data Accuracy (NSSDA) implements a statistic and testing methodology for positional accuracy of maps and geospatial data derived from sources such as aerial photographs, satellite imagery, or maps. Accuracy is reported in ground units. The testing methodology is comparison of data set coordinate values with coordinate values from a higher accuracy source for points that represent features readily visible or recoverable from the ground. While this standard evaluates positional accuracy at points, it applies to geospatial data sets that contain point, vector, or raster spatial objects. Data content standards, such as FGDC Standards for Digital Orthoimagery and Digital Elevation Data, will adapt the NSSDA for particular spatial object representations.

The standard ensures flexibility and inclusiveness by omitting accuracy metrics, or threshold values, that data must achieve. However, agencies are encouraged to establish "pass-fail" criteria for their product standards, applications, and contracting purposes. Ultimately, users must identify acceptable accuracies for their applications (FGDC, 2008).
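The NSSDA test itself can be sketched in a few lines. The function names are ours; the 1.7308 factor, which converts the radial RMSE to horizontal accuracy at the 95% confidence level under the assumption that RMSE_x is approximately equal to RMSE_y, is from FGDC Part 3:

```python
import math

def rmse_2d(data_pts, check_pts):
    """Horizontal (radial) RMSE between dataset coordinates and
    higher-accuracy check-point coordinates, as (x, y) pairs."""
    n = len(data_pts)
    total = sum((xd - xc) ** 2 + (yd - yc) ** 2
                for (xd, yd), (xc, yc) in zip(data_pts, check_pts))
    return math.sqrt(total / float(n))

def nssda_horizontal_accuracy(data_pts, check_pts):
    """NSSDA Accuracy_r at the 95% confidence level, assuming
    RMSE_x is approximately equal to RMSE_y (FGDC, 1998)."""
    return 1.7308 * rmse_2d(data_pts, check_pts)
```

With a set of surveyed check points in hand, the reported NSSDA accuracy is just `nssda_horizontal_accuracy(dataset_points, check_points)` in ground units.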


References
ASPRS. 1990. “ASPRS Accuracy Standards for Large-Scale Maps”. The American Society for Photogrammetry and Remote Sensing. http://www.asprs.org/a/society/committees/standards/1990_jul_1068-1070.pdf.

Congalton, R.G., and K. Green. 2008. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices, Second Edition. Mapping Science. Taylor & Francis. http://books.google.com/books?id=T4zj2bnGldEC.

Falkner, Edgar, and Dennis Morgan. 2002. Aerial Mapping: Methods and Applications, Second Edition. http://gis-lab.info/docs/books/aerial-mapping/cr1557_08.pdf.

FGDC. 1998. “Geospatial Positioning Accuracy Standards Part 3: National Standard for Spatial Data Accuracy”. Federal Geographic Data Committee. https://www.fgdc.gov/standards/projects/FGDC-standards-projects/accuracy/part3/chapter3.

FGDC. 2008. “Geospatial Positioning Accuracy Standards, Part 3: National Standard for Spatial Data Accuracy”. Federal Geographic Data Committee. August 19. https://www.fgdc.gov/standards/projects/FGDC-standards-projects/accuracy/part3.

Greenwalt, Clyde R, and Melvin E Shultz. 1962. Principles of Error Theory and Cartographic Applications.

Schuckman, Karen, and Mike Renslow. 2014. “Accuracy Standards”. The Pennsylvania State University. https://www.e-education.psu.edu/lidar/l6_p7.html.

Ross, Kenton. 2004. “Geopositional Statistical Methods”. Presented at the High Spatial Resolution Commercial Imagery Workshop, Reston, Virginia, November 8. http://calval.cr.usgs.gov/JACIE_files/JACIE04/files/1Ross16.pdf.

U.S. Bureau of the Budget. 1947. “United States National Map Accuracy Standards”. USGS National Geospatial Data Standards. http://nationalmap.gov/standards/nmas647.html.

Google Earth Enterprise Server: Exporting a Structured Layers List

In the Virtual Alabama program for the State of Alabama, the majority of users employ the Google Earth Enterprise Desktop Client to connect to and use the Virtual Alabama system. But a number of users required access directly via web browser, either because the desktop client was not locally installed or to enable access to Virtual Alabama at remote locations.

So we wrote a JavaScript web viewer for Virtual Alabama, using the Google Earth Plug-in, rendering the entire Virtual Alabama data library in a web browser with no desktop client installation required.

The challenge, then, became how to maintain a duplicate copy of the layers database for display in our JavaScript viewer, for searching, and for integration with layers from other sources. While the geographic data is stored on the enterprise servers and rendered by the plug-in, we needed a copy of the layer tree's structure to import into our databases to create a tree menu in the JavaScript viewer similar to what is displayed in Google Earth, allowing users to find layers and turn layers on and off. A further complication is that this duplicate layers structure had to be updated frequently, whenever new data was published or reorganized within the Virtual Alabama system.

The key was in files named dbroot.v5.postamble.DEFAULT on the enterprise servers, newly created with each database publish. 

This is an example of one row from one of those files:

<etNestedLayer> [More Imagery]
	{
		118 	"0b14797e-a3b3-11e2-9e11-b8ff64cb455c" 	"More Imagery" 	"" 	true 	true 	true 	false 	true 	"All Layers" 	24 	"icons/More_Imagery.png" 	"" 	"" 	"" 	"" 	true 	"" 	"" 	"" 	"" 	"" 	"" 	-1 	-1 
	}

Contained within is a list of all published layers. Each row includes the layer name (in this example, More Imagery); the Google Earth ID (just before the name), which is used to enable the layer via JavaScript and the plug-in; the parent layer name (here, All Layers), which determines where the layer belongs in the tree's hierarchy; and the path to the layer's icon. This provides all the information we need to build a complete layer tree.
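A short script can pull those fields out of each row. This sketch is based only on the single example row shown above, so the field positions are assumptions and may differ across Google Earth Enterprise versions:

```python
import re

# Matches each double-quoted field in an etNestedLayer row.
QUOTED = re.compile(r'"([^"]*)"')

def parse_layer_row(row):
    """Extract the quoted fields from one etNestedLayer row.
    Field positions are inferred from the example row above."""
    fields = QUOTED.findall(row)
    return {
        "id": fields[0],      # Google Earth layer ID
        "name": fields[1],    # layer display name
        "parent": fields[3],  # parent layer in the tree hierarchy
        "icon": fields[4],    # path to the layer's icon
    }
```

Running something like this over every row in dbroot.v5.postamble.DEFAULT yields the (id, name, parent, icon) records needed to rebuild the layer tree.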

Next, we created a service to automatically retrieve this file as needed; process the data to create a list of layers; organize them hierarchically; associate the appropriate icons; enable layer searching and integration; and update our web-based JavaScript Google Earth client to reflect new or updated layers.

Now, with the press of a button, our Desktop and Web clients are automatically in sync with the current state of the Virtual Alabama Google Earth Enterprise Servers.