Commit ca04a13a authored by Alex Nunes's avatar Alex Nunes

Cleaned up unnecessary notebooks and split the workshop into 4 parts

 Changes to be committed:
	modified:   Access and Analysis of Acoustic Telemetry Data.ipynb
	new file:   Part I - Acoustic Telemetry Data.ipynb
	new file:   Part II - R Programming Language.ipynb
	new file:   Part III - Data Cleaning and Preprocessing.ipynb
	new file:   Part IV - Data Analysis.ipynb
	deleted:    Requirements Installation Instructions.ipynb
	deleted:    html/Blue Sharks 2014.html
	deleted:    html/blue_sharks_2014.json
	deleted:    resonATe - OTN's Data Analysis Toolbox.ipynb
parent 0acb1b3c
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# BIOL 4323 Data Lab: Accessing and Analyzing Acoustic Telemetry Data\n",
" \n",
"## Signals and Noise\n",
"Acoustic telemetry relies on a series of timed pings to transmit unique values from an implanted or affixed acoustic tag to a receiving station. These pings are all transmitted on the same small set of frequencies, and are subject to being confounded by noise interference, barriers to physical propagation in the water column, and collisions between two pinging tags. \n",
"\n",
"![Picture of a Vemco Acoustic Receiver and acoustic tag](media/vemco_receiver_tag.jpg)\n",
" \n",
"For noise interference or physical propagation issues, the result is nearly always a false negative, no detection is recorded at the receiver, but there could have been a tag in proximity to the receiver. For collisions between two pinging tags A and B, it is sometimes the case that the two pinging tags at the same freqency create a valid series of pings between them that generates a third code that is neither tag A nor B. This false positive is screened out of the acoustic detection data sets post-processing using a fairly straightforward analysis."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Gathering the Data\n",
"Recovering the data from an acoustic telemetry study usually involves collecting the deployed listening instruments (acoustic receivers), downloading their onboard data into a vendor-supplied program (like Vemco VUE), and extracting a detection summary - which is often a square matrix of which receivers saw which tags and when.\n",
"\n",
"<span style='color:black;font-family:Courier'>Detection Time</span> , <span style='color:green;font-family:Courier'>Tag Code</span> , <span style='color:royalblue;font-family:Courier'>Rcvr Serial #</span>\n",
"\n",
"The researcher must then substitute in the information they have about the animal according to the tag codes, and information about the receiver according to the receiver's serial no. \n",
"\n",
"There's a lot of information that isn't contained in the vendor-supplied datasets, a lot of `metadata` telling us about: \n",
"* **when** and **where** receivers and tags were deployed and recovered; and \n",
"* when they could have been in position to create a **valid detection event**. \n",
"\n",
"At the <a href=\"http://oceantrackingnetwork.org\"> Ocean Tracking Network</a>, we track a lot of extra variables for all of our researchers to help us handle the more complicated aspects of working with lots of interchangeable receivers in the field, handling redeployment of receivers or tags, or working with active detection platforms like aquatic underwater vehicles (AUV) or animal-mounted receivers. For our purposes today, we'll keep it simpler than that. We'll start from a detection extract datafile from the OTN data system, one that's already matched up tag to animal and receiver to location, and that knows a few other things you might need to do a thorough analysis of this dataset.\n",
" \n",
"<span style='color:black;font-family:Courier'>Detection Time</span> , <span style='color:green;font-family:Courier'> [ Tag Code , Species , Individual Name ]</span> , <span style='color:royalblue;font-family:Courier'> [ Rcvr Serial # , Latitude , Longitude ]</span>\n",
"\n",
"\n",
"Today we'll take a shortcut on combining this data by using the detection data that OTN extracts from our database for researchers to use, combining the tags from Brendal Townsend's [blue shark project](https://members.oceantrack.org/project?ccode=NSBS) you've already heard about, and two of OTN's own receiver lines, our [Halifax Line](https://members.oceantrack.org/project?ccode=HFX) and the [Cabot Strait Line](https://members.oceantrack.org/project?ccode=CBS). This matches station location to serial number to detection event to tag ID to tagged animal.\n",
"\n",
" Once we load the detection extract and look around, we'll run a filtering algorithm on the data and see if all the detections found in the OTN database can be attributed to this project fairly and we can have confidence in them. Then we'll plot the detection set a few different ways using the `glatos` acoustic telemetry analysis and visualization package.\n",
" \n",
" If we get through all that we'll get into the OTN-supported python package `resonATe` that does a lot of these things too, as well as other analyses."
]
},
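{
"cell_type": "markdown",
"metadata": {},
"source": [
"As an illustrative sketch of working with these combined fields (assuming columns named `station` and `catalognumber`, which may differ in your extract), once the CSV below is loaded into `data` you could tally detections per station with `dplyr`:\n",
"\n",
"```r\n",
"library(dplyr)\n",
"# Hypothetical summary: detections and distinct animals per station\n",
"data %>%\n",
"  group_by(station) %>%\n",
"  summarise(n_detections = n(),\n",
"            n_animals = n_distinct(catalognumber))\n",
"```"
]
},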
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(dplyr)\n",
"options(repr.matrix.max.cols=500)\n",
"data <- read.csv(\"data//nsbs_matched_detections_2014.csv\")\n",
"data\n"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R [conda env:anaconda3]",
"language": "R",
"name": "conda-env-anaconda3-r"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# R: A Statistical Programming Language\n",
"R is a language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, …) and graphical techniques, and is highly extensible.\n",
"\n",
"*Source: https://www.r-project.org/about.html*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## R Scripts\n",
"\n",
"An R script is simply a text file containing (almost) the same commands that you would enter on the command line of R. ( almost) refers to the fact that if you are using sink() to send the output to a file, you will have to enclose some commands in print() to get the same output as on the command line.\n",
"\n",
"*Source: https://cran.r-project.org/doc/contrib/Lemon-kickstart/kr_scrpt.html*"
]
},
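{
"cell_type": "markdown",
"metadata": {},
"source": [
"For example (a minimal sketch of the sink()/print() point above), a bare expression in a sourced script is not auto-printed, so nothing reaches the file unless it is wrapped in print():\n",
"\n",
"```r\n",
"sink(\"output.txt\")   # redirect output to a file\n",
"1 + 1                # not auto-printed in a sourced script; nothing written\n",
"print(1 + 1)         # explicitly printed; \"[1] 2\" goes to the file\n",
"sink()               # restore output to the console\n",
"```"
]
},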
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"variable <- \"Your name\"\n",
"\n",
"print(variable)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## R Packages\n",
"\n",
"Packages are collections of R functions, data, and compiled code in a well-defined format. The directory where packages are stored is called the library. R comes with a standard set of packages. Others are available for download and installation. Once installed, they have to be loaded into the session to be used.\n",
"\n",
"*Source: https://www.statmethods.net/interface/packages.html*\n",
"\n",
"### Install the R package stringr"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"install.packages(\"stringr\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### stringr\n",
"A consistent, simple and easy to use set of wrappers around the fantastic 'stringi' package. All function and argument \n",
"names (and positions) are consistent, all functions deal with \"NA\"'s and zero length vectors in the same way, and the \n",
"output from one function is easy to feed into the input of another."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(stringr)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use ``stringr`` to find substrings using Regular Expressions or strings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stringr::str_detect(variable, \"[aeiou]\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``string`` also has a function to count the occurance of substrings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stringr::str_count(variable, \"[aeiou]\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### string + dplyr\n",
"Let's import some data a find a sepcific string in a column"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(dplyr)\n",
"data <- read.csv(\"data//nsbs_matched_detections_2014.csv\")\n",
"stringr::str_detect(data$unqdetecid, \"release\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R [conda env:anaconda3]",
"language": "R",
"name": "conda-env-anaconda3-r"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Cleaning and Preprocessing\n",
"\n",
"When analyzing data, 80% of time is spent cleaning and manipulating data and only 20% actually analyzing it. For this reason, it is critical to become familiar with the data cleaning process and getting your data into a format that can be analyzed.\n",
"\n",
"Let's begin with reading in our data using ``GLATOS`` (which will be explained below)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(stringr)\n",
"library(dplyr)\n",
"library(glatos)\n",
"\n",
"detections <- glatos::read_otn_detections(\"data/nsbs_matched_detections_2014.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Types\n",
"\n",
"R has a wide variety of data types including scalars, vectors (numerical, character, logical), matrices, data frames, and lists. Check out a short explanation here: https://www.statmethods.net/input/datatypes.html\n",
"\n",
"Our data has been read in and the columns have been converted to their proper data types."
]
},
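{
"cell_type": "markdown",
"metadata": {},
"source": [
"For instance, `class()` reports the type of a single object, which is what `sapply()` applies across every column in the next cell:\n",
"\n",
"```r\n",
"class(42L)         # \"integer\"\n",
"class(42.0)        # \"numeric\"\n",
"class(\"shark\")     # \"character\"\n",
"class(TRUE)        # \"logical\"\n",
"class(Sys.time())  # \"POSIXct\" \"POSIXt\"\n",
"```"
]
},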
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sapply(detections, class)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filtering\n",
"\n",
"We can use ``dplyr::filter()`` to find rows/cases where conditions are true. Combining this with ``stringr::str_detect()``\n",
"\n",
"*dplyr Filtering: https://dplyr.tidyverse.org/reference/filter.html*\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"releases <- detections %>% dplyr::filter(stringr::str_detect(unqdetecid, \"release\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"non_release_detections <- detections %>% dplyr::filter(!stringr::str_detect(unqdetecid, \"release\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Total Detections"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"count(detections)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Number of releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"count(releases)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Number of Non-Release Detections (The Good Stuff)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"count(non_release_detections)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `GLATOS` \n",
"There's an ongoing effort to combine the work done by many researchers worldwide on the creation of these and other analysis and visualization tools so that work is not duplicated, and so that researchers don't have to start from scratch when implementing analysis techniques. \n",
"\n",
"The Great Lakes Acoustic Telemetry Observing System group has gathered a few of their more programming-minded researchers and authored an [R package](https://gitlab.oceantrack.org/GreatLakes/glatos), and invited OTN and some technical people at Vemco to help them maintain and extend this package to ensure that it's useful for telemeters all over the world. There are a few very common methods of looking at acoustic detection data codified in `glatos`, and it serves as a great jumping off point for the design of new methods of analysis and visualization. The Pincock calculation above exists as a prebuilt function in the `glatos` toolbox, and there are a few others we'll peek at now to help us with the visualization of these datasets.\n",
"\n",
"The notebook concept's a bit new to the `glatos` package, so be aware that its functions save most of their output to files. Those files will be in your project folder."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### False Filtering\n",
"\n",
"### False Detection Filtering using the Pincock algorithm.\n",
"Doug Pincock defined a temporal threshhold algorithm to determine whether detections in a set of detections from a station could be considered real (https://www.vemco.com/pdf/false_detections.pdf). The thrust of the filter is that a single detection at a station could very well be false, and it would require multiple detections of a tag by a receiver within a certain time frame to confirm that tag actually existed and was pinging at that station, and its ID was not the result of a collision event between two other tags. \n",
"\n",
"### Tag collision resulting in no detection:\n",
"![tag collision between two tags results in no decodable detection](media/tag-collision.png)\n",
"### Tag collision resulting in false detection:\n",
"![tag collision between two tags results in a false code](media/tag-false.png)\n",
"(figs from False Detections: What They Are and How to Remove Them from Detection Data, Pincock 2012)"
]
},
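{
"cell_type": "markdown",
"metadata": {},
"source": [
"The idea behind the filter can be sketched by hand. This is a simplified, assumption-laden version of what `glatos::false_detections()` does for us below, flagging a detection as suspect when no other detection of the same tag occurs at the same station within the time threshold; the column names are guesses based on the OTN extract used above and may differ:\n",
"\n",
"```r\n",
"library(dplyr)\n",
"\n",
"# For each tag/station pair, compute the gap (in seconds) to the nearest\n",
"# neighbouring detection; a detection with no neighbour within tf seconds\n",
"# is flagged as suspect.\n",
"tf <- 3600\n",
"flagged <- detections %>%\n",
"  arrange(transmitter_id, station, detection_timestamp_utc) %>%\n",
"  group_by(transmitter_id, station) %>%\n",
"  mutate(\n",
"    gap_prev = as.numeric(detection_timestamp_utc - lag(detection_timestamp_utc), units = \"secs\"),\n",
"    gap_next = as.numeric(lead(detection_timestamp_utc) - detection_timestamp_utc, units = \"secs\"),\n",
"    min_gap  = pmin(gap_prev, gap_next, na.rm = TRUE),\n",
"    suspect  = is.na(min_gap) | min_gap > tf\n",
"  ) %>%\n",
"  ungroup()\n",
"```"
]
},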
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nr_detections_with_filter <- glatos::false_detections(non_release_detections, tf = 3600)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"filtered_detections <- nr_detections_with_filter %>% filter(passed_filter != FALSE)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R [conda env:anaconda3]",
"language": "R",
"name": "conda-env-anaconda3-r"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Analysis Using GLATOS"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(dplyr)\n",
"library(glatos)\n",
"\n",
"detections <- glatos::read_otn_detections(\"data/nsbs_matched_detections_2014.csv\") \n",
"detections <- detections %>% filter(!stringr::str_detect(unqdetecid, \"release\"))\n",
"detections <- glatos::false_detections(detections, tf = 3600)\n",
"filtered_detections <- detections %>% filter(passed_filter != FALSE)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Time Series Analysis & Lubridate\n",
"\n",
"Time series show the when, the before, and the after for data points. The ``lubridate`` package is especially useful for handling time calculations.\n",
"\n",
"Date-time data can be frustrating to work with in R. R commands for date-times are generally unintuitive and change depending on the type of date-time object being used. Moreover, the methods we use with date-times must be robust to time zones, leap days, daylight savings times, and other time related quirks, and R lacks these capabilities in some situations. Lubridate makes it easier to do the things R does with date-times and possible to do the things R does not.\n",
"\n",
"*Source: https://lubridate.tidyverse.org*"
]
},
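{
"cell_type": "markdown",
"metadata": {},
"source": [
"A small sketch of the kind of date-time handling `lubridate` makes easy (illustrative timestamps only):\n",
"\n",
"```r\n",
"library(lubridate)\n",
"\n",
"t1 <- ymd_hms(\"2014-06-01 12:00:00\", tz = \"UTC\")  # parse a timestamp\n",
"t2 <- ymd_hms(\"2014-06-03 18:30:00\", tz = \"UTC\")\n",
"t2 - t1                  # a difftime of about 2.27 days\n",
"month(t1)                # 6\n",
"floor_date(t2, \"day\")    # round down to midnight\n",
"```"
]
},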
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(lubridate)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "R [conda env:anaconda3]",
"language": "R",
"name": "conda-env-anaconda3-r"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Resources - Stuff for Today's Lab\n",
"<hr>\n",
"## Installing Git\n",
"Go to https://git-scm.com/book/en/v2/Getting-Started-Installing-Git , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing R\n",
"Go to https://cloud.r-project.org/ , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing RStudio\n",
"Go to https://www.rstudio.com/products/rstudio/download/#download , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing Conda\n",
"Go to https://conda.io/docs/user-guide/install/index.html#regular-installation , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing Jupyter\n",
"Open a command prompt and run: `conda install jupyter` after installing conda.\n",
"\n",
"\n",
"## Installing IRkernel\n",
"Go to https://irkernel.github.io/installation/ , choose your operating system and follow the steps."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Why use JuPyTeR Notebooks?\n",
"\n",
"Open source web-ready application for creating shareable documents with embedded live code, equations, visualizations, and explanatory text.\n",
"\n",
"When publishing papers that lean heavily on computation and processing, you can (and should!) provide the Notebook alongside the paper that takes the reader through your data cleaning and analysis.\n",
"\n",
"### Open data and open process\n",
"\n",
"![Workflow for working with data](http://remi-daigle.github.io/2016-04-15-UCSB/git/img/r4ds_data-science.png)\n",
"\n",
"Currently when you're submitting a paper to most journals, you're asked to make your raw data available. Your paper is where you do the work of communicating the result of your process in plain text, but the steps you took to tidy and transform and visualize your data in order to arrive at your conclusions are very rarely fully expressed in a reproducible way. \n",
"\n",
"And they should be, and with JuPyTeR notebooks, they can be.\n",
"\n",
"![Adam Savage science quote](https://i.imgur.com/1h3K2TT.jpg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## More Resources for using R for acoustic telemetry analysis and visualization\n",
"\n",
"### GISinR\n",
"[Materials from a graduate course](https://gitlab.oceantrack.org/otn-statistical-modelling-group/gisinr) teaching GIS techniques in R using OTN acoustic telemetry project data for source data.\n",
"\n",
"### GLATOS package\n",
"[A community-written R package for analyzing acoustic telemetry](https://gitlab.oceantrack.org/GreatLakes/glatos) that is available through the OTN GitLab.\n",
"\n",
"### OTN Statistical Modelling Group\n",
"[A collection of packages](https://gitlab.oceantrack.org/otn-statistical-modelling-group) by the Statistical Modelling Group at OTN using various statistical techniques to analyze acoustic and satellite telemetry.\n",
"\n",
"### ROpenSci\n",
"[A public collection of scientific data packages in R](https://ropensci.org/) with the ability for users to upload their own work to share it with the community.\n",
"\n",
"### Dplyr Cheat Sheet\n",
"[A PDF cheat sheet for common uses of `dplyr`](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}