Company Blog

SCIENCE - TECHNOLOGY - SOLUTIONS

GISA 2014: Making Maps with JavaScript

Presentation delivered this afternoon at the GISA conference by John Sercel of SICS. 



Google Maps API
 
 
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
   html, body, #map-canvas { height: 100%; margin: 0; padding: 0 } 
</style>
<script type="text/javascript" src="https://maps.googleapis.com/maps/api/js?sensor=false"></script>
<script type="text/javascript">
   function initialize() {
       // Create a road map centered on the given latitude/longitude
       var map = new google.maps.Map(document.getElementById("map-canvas"), {
           mapTypeId: google.maps.MapTypeId.ROADMAP,
           center: new google.maps.LatLng(30.272184, -87.691470),
           zoom: 16
       });
   }
</script>
</head>
<body onload="initialize()">
   <div id="map-canvas"></div>
</body>
</html>
OpenLayers API 
 
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
   html, body, #map-canvas { height: 100%; margin: 0; padding: 0 } 
</style>
<link rel="stylesheet" href="http://ol3js.org/en/master/css/ol.css" type="text/css" />
<script src="http://ol3js.org/en/master/build/ol.js" type="text/javascript"></script>
<script type="text/javascript">
   function initialize() {
       var map = new ol.Map({
           target: 'map-canvas',
           // OpenStreetMap tiles as the base layer
           layers: [new ol.layer.Tile({source: new ol.source.OSM()})],
           view: new ol.View2D({
               // Transform lon/lat (EPSG:4326) to web mercator (EPSG:3857)
               center: ol.proj.transform([-87.691470, 30.272184], 'EPSG:4326', 'EPSG:3857'),
               zoom: 16
           })
       });
   }
</script>
</head>
<body onload="initialize()">
   <div id="map-canvas"></div>
</body>
</html>
ArcGIS API 
 
<!DOCTYPE html>
<html>
<head>
<style type="text/css">
   html, body, #map-canvas { height: 100%; margin: 0; padding: 0 } 
</style>
<link rel="stylesheet" href="http://js.arcgis.com/3.9/js/esri/css/esri.css">
<script src="http://js.arcgis.com/3.9/"></script>
<script>
   require(["esri/map", "dojo/domReady!"], function(Map) {
       // Streets basemap centered on [longitude, latitude]
       var map = new Map("map-canvas", {
           basemap: "streets",
           center: [-87.691470, 30.272184],
           zoom: 16
       });
   });
</script>
</head>
<body>
   <div id="map-canvas"></div>
</body>
</html>

Weekend Projects: Aerial StreetView

Last week we looked at testing stabilized video from an initial flight on a new remote control UAV platform. Today we put together a quick test of imagery from the same platform, displayed with the Google Custom StreetView API.

Custom StreetView from above the SICS offices in Florence, Alabama's industrial park.


Custom StreetView from near the Tennessee River in Florence, Alabama.

Weekend Projects: Rapid Response Aerial Photography

At SICS we're always looking for new ways to get sources of rich, up-to-date information from the real world into our mapping systems as quickly as possible.

One promising source has emerged with the advancement of remote control unmanned aerial vehicle (UAV) technology.

Over the last several years, a growing list of companies has offered consumer-grade models, generally quadcopters such as the Parrot AR.Drone, allowing anyone with an Android or iOS device and $300 to take to the skies with a mounted camera, albeit with low resolution, an unstable picture, and extremely limited range.

These are interesting, but those limitations rule them out for serious aerial photography applications. Also, while hobbyists are free to fly, the FAA has so far tried to prevent commercial use of these aircraft, although that is changing.

We've worked with these models and have experimented with fixed-wing UAVs for aerial photography as well.

But this morning, after an initial test flight, we took a new UAV up over the SICS offices to get a first look at what the platform offers. We'll have more details soon, but from our initial tests it's clear the stability and capabilities offered by this system are far above any of our previous platforms.

This is still an area of research, but it's exciting to watch this technology develop and imagine potential future uses in the field of mapping and GIS.

HTTP Requests & Image Sprites

When we redesigned our main website (www.sicsconsultants.com) last month, we took the opportunity to add a few optimizations to the site's design to help it load more quickly for our visitors. One of these optimizations involves a technique known as CSS Image Sprites.

Spending hours just to shave a few dozen milliseconds off a website's load time might seem like a questionable use of time, but a few milliseconds here and a few there, repeated over and over again, add up quickly.

In a nutshell, when we use CSS Image Sprites we take multiple small images and combine them into a single larger image.

Why is it better to load a single larger image rather than many small images? 

HTTP Requests Are Slow
When you visit a website, your computer first sends a request to the server hosting the website, asking for the main HTML web page. Once the page is downloaded, your computer begins requesting all of the extra files the page needs to format itself and display properly. These are JavaScript files, CSS files, and images---lots of images. For each of these files, your computer identifies the file's web address and then sends an individual request over the internet to the web server, asking for that file. When the server receives the request, it sends the file back to your computer for download and display in the web page. 

All of these requests and responses take time. How much time? 

It depends on your computer, the distance between you and the web server, the network pathway through all the switches and routers of the internet, and the web server itself. But we can do some quick calculations to get a rough idea. 

Trying a simple "ping" to a few websites, showing the initial requests and three responses from each:

PING google.com (74.125.196.113): 56 data bytes
64 bytes from 74.125.196.113: icmp_seq=0 ttl=43 time=18.254 ms
64 bytes from 74.125.196.113: icmp_seq=1 ttl=43 time=20.264 ms
64 bytes from 74.125.196.113: icmp_seq=2 ttl=43 time=24.005 ms

PING yahoo.com (98.138.253.109): 56 data bytes
64 bytes from 98.138.253.109: icmp_seq=0 ttl=49 time=108.740 ms
64 bytes from 98.138.253.109: icmp_seq=1 ttl=49 time=94.720 ms
64 bytes from 98.138.253.109: icmp_seq=2 ttl=49 time=95.774 ms

PING news.bbc.co.uk (212.58.244.119): 56 data bytes
64 bytes from 212.58.244.119: icmp_seq=0 ttl=47 time=189.442 ms
64 bytes from 212.58.244.119: icmp_seq=1 ttl=47 time=188.228 ms
64 bytes from 212.58.244.119: icmp_seq=2 ttl=47 time=191.969 ms

From this we can see that, on my internet connection here, the simplest request can take anywhere from about 20 milliseconds to nearly 200 milliseconds. Again, this depends on many factors, from geographic proximity to time of day to your internet connection and computer. But this is a basic idea of how long it can take the simplest bits of information to navigate the internet.


When you consider that an HTTP request (carried over TCP/IP) involves sending multiple packets of data back and forth, plus the load on the web server and other complexities, a single request-response cycle could take half a second or more under adverse conditions, even for the smallest of files. And since many websites load hundreds of these files, even when they're loaded asynchronously the time adds up quickly. 

So, again, HTTP requests are slow: there is a built-in time overhead associated with each request and response. If we can minimize the number of requests a website demands, we can significantly improve its responsiveness and decrease its load time.
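To see how this overhead compounds, here's a rough back-of-the-envelope model. The function, request counts, and timings below are illustrative assumptions, not measurements from any real site:

```javascript
// Crude model: with `concurrency` parallel connections, requests complete
// in waves, and each wave pays roughly one round trip of overhead.
function totalOverheadMs(requests, roundTripMs, concurrency) {
    return Math.ceil(requests / concurrency) * roundTripMs;
}

// 100 files at ~100 ms round trip over 6 parallel connections:
// about 1.7 seconds of request overhead before any download time.
console.log(totalOverheadMs(100, 100, 6));
```

Real browsers cache and reuse connections aggressively, so this overstates some cases and understates others; the point is simply that overhead scales with request count.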

CSS Image Sprites
Here is an example of one of the combined images we use for image sprites, this one from the partners section of our site. Traditionally, we would make 5 individual HTTP requests for these images, paying the overhead of a request and response 5 times. With this method, we combine all the images into one file, make the request just once, and use CSS styles to position the image so that the right portion shows in the right place.
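As a simplified sketch of the technique (the class names, file path, and pixel offsets here are hypothetical, not the actual styles from our site), the CSS looks something like this:

```css
/* One combined image holds all the partner logos side by side. */
.partner-logo {
    background-image: url("images/partners-sprite.png"); /* hypothetical path */
    background-repeat: no-repeat;
    width: 120px;   /* each logo occupies one 120x60 slot in the sprite */
    height: 60px;
}

/* Shift the background so the correct slot shows for each logo. */
.partner-logo.first  { background-position: 0 0; }
.partner-logo.second { background-position: -120px 0; }
.partner-logo.third  { background-position: -240px 0; }
```

Each logo still gets its own element on the page, but every one of them draws from the single downloaded image.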

We use the same technique for our icons, our clients, our partner logos, and our project highlights.

All together, we've taken 31 HTTP requests and reduced them to 4 with just these examples. We could reduce further by combining the similarly sized icons and client logos, and perhaps the partners and project highlights, but at these small numbers we hit diminishing returns on optimization. 

Visitor experience is the most visible improvement. But when we're building websites, we also have to think about server load and network traffic planning. Reducing the total number of HTTP requests per visit is one of the most significant optimizations for server and network load as well.

Google Earth Enterprise Server: Exporting a Structured Layers List

In the Virtual Alabama program for the State of Alabama, the majority of users employ the Google Earth Enterprise Desktop Client to connect to and use the Virtual Alabama system. But a number of users required access directly via web browser, either because the desktop client was not locally installed or to enable access to Virtual Alabama at remote locations.

So we wrote a JavaScript web viewer for Virtual Alabama, using the Google Earth Plug-in, that renders the entire Virtual Alabama data library in a web browser, with no desktop client installation required.

The challenge, then, became how to maintain a duplicate copy of the layers database for display in our JavaScript viewer, for searching, and for integration with layers from other sources. While the geographic data is stored on the enterprise servers and rendered by the plug-in, we needed a copy of the layer tree's structure to import into our databases, so the JavaScript viewer could present a tree menu similar to the one in Google Earth and let users find layers and turn them on and off. A further complication: this duplicate layers structure had to be updated frequently, whenever new data was published or reorganized within the Virtual Alabama system.

The key was in files named dbroot.v5.postamble.DEFAULT on the enterprise servers, newly created with each database publish. 

This is an example of one row from one of those files:

<etNestedLayer> [More Imagery]
	{
		118 	"0b14797e-a3b3-11e2-9e11-b8ff64cb455c" 	"More Imagery" 	"" 	true 	true 	true 	false 	true 	"All Layers" 	24 	"icons/More_Imagery.png" 	"" 	"" 	"" 	"" 	true 	"" 	"" 	"" 	"" 	"" 	"" 	-1 	-1 
	}

Contained within is a list of all published layers. Each row describes one layer: the layer name (here, More Imagery); the Google Earth ID just before the name, which is used to enable the layer via JavaScript and the plug-in; the parent layer name, which determines where the layer belongs in the tree's hierarchy (here, just below All Layers); and the path to the layer's icon. This provides all the information our system needs to build a complete layer tree.
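As a sketch of the kind of processing involved (the function name is ours, and the field positions are inferred from the example row above rather than from any published format specification), the quoted fields can be pulled out in order:

```javascript
// Hypothetical sketch: extract the fields we care about from one row of a
// dbroot.v5.postamble.DEFAULT file. Field positions are inferred from the
// example row above, not from a documented format.
function parseLayerRow(row) {
    // Grab every quoted field, in order, and strip the surrounding quotes.
    var fields = row.match(/"[^"]*"/g).map(function (f) {
        return f.slice(1, -1);
    });
    return {
        id: fields[0],     // Google Earth layer ID
        name: fields[1],   // layer name, e.g. "More Imagery"
        parent: fields[3], // parent layer name, e.g. "All Layers"
        icon: fields[4]    // path to the layer's icon
    };
}
```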

Next, we created a service to automatically retrieve this file as needed, process the data to create a list of layers, organize them hierarchically, associate the appropriate icons, enable layer searching and integration, and update our web-based JavaScript Google Earth client to reflect new or updated layers.

Now, with the press of a button, our Desktop and Web clients are automatically in sync with the current state of the Virtual Alabama Google Earth Enterprise Servers.