{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# BIOL 4323 Data Lab: Accessing and Analyzing Acoustic Telemetry Data\n",
" \n",
"## Signals and Noise\n",
"Acoustic telemetry relies on a series of timed pings to transmit unique values from an implanted or affixed acoustic tag to a receiving station. These pings are all transmitted on the same small set of frequencies, and are subject to being confounded by noise interference, barriers to physical propagation in the water column, and collisions between two pinging tags. \n",
"\n",
"![Picture of a Vemco Acoustic Receiver and acoustic tag](media/vemco_receiver_tag.jpg)\n",
" \n",
"For noise interference or physical propagation issues, the result is nearly always a false negative, no detection is recorded at the receiver, but there could have been a tag in proximity to the receiver. For collisions between two pinging tags A and B, it is sometimes the case that the two pinging tags at the same freqency create a valid series of pings between them that generates a third code that is neither tag A nor B. This false positive is screened out of the acoustic detection data sets post-processing using a fairly straightforward analysis."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Gathering the Data\n",
"Recovering the data from an acoustic telemetry study usually involves collecting the deployed listening instruments (acoustic receivers), downloading their onboard data into a vendor-supplied program (like Vemco VUE), and extracting a detection summary - which is often a square matrix of which receivers saw which tags and when.\n",
"\n",
"<span style='color:black;font-family:Courier'>Detection Time</span> , <span style='color:green;font-family:Courier'>Tag Code</span> , <span style='color:royalblue;font-family:Courier'>Rcvr Serial #</span>\n",
"\n",
"The researcher must then substitute in the information they have about the animal according to the tag codes, and information about the receiver according to the receiver's serial no. \n",
"\n",
"There's a lot of information that isn't contained in the vendor-supplied datasets, a lot of `metadata` telling us about: \n",
"* **when** and **where** receivers and tags were deployed and recovered; and \n",
"* when they could have been in position to create a **valid detection event**. \n",
"\n",
"At the <a href=\"http://oceantrackingnetwork.org\"> Ocean Tracking Network</a>, we track a lot of extra variables for all of our researchers to help us handle the more complicated aspects of working with lots of interchangeable receivers in the field, handling redeployment of receivers or tags, or working with active detection platforms like aquatic underwater vehicles (AUV) or animal-mounted receivers. For our purposes today, we'll keep it simpler than that. We'll start from a detection extract datafile from the OTN data system, one that's already matched up tag to animal and receiver to location, and that knows a few other things you might need to do a thorough analysis of this dataset.\n",
" \n",
"<span style='color:black;font-family:Courier'>Detection Time</span> , <span style='color:green;font-family:Courier'> [ Tag Code , Species , Individual Name ]</span> , <span style='color:royalblue;font-family:Courier'> [ Rcvr Serial # , Latitude , Longitude ]</span>\n",
"\n",
"\n",
"Today we'll take a shortcut on combining this data by using the detection data that OTN extracts from our database for researchers to use, combining the tags from Brendal Townsend's [blue shark project](https://members.oceantrack.org/project?ccode=NSBS) you've already heard about, and two of OTN's own receiver lines, our [Halifax Line](https://members.oceantrack.org/project?ccode=HFX) and the [Cabot Strait Line](https://members.oceantrack.org/project?ccode=CBS). This matches station location to serial number to detection event to tag ID to tagged animal.\n",
"\n",
" Once we load the detection extract and look around, we'll run a filtering algorithm on the data and see if all the detections found in the OTN database can be attributed to this project fairly and we can have confidence in them. Then we'll plot the detection set a few different ways using the `glatos` acoustic telemetry analysis and visualization package.\n",
" \n",
" If we get through all that we'll get into the OTN-supported python package `resonATe` that does a lot of these things too, as well as other analyses."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(dplyr)\n",
"options(repr.matrix.max.cols=500)\n",
"data <- read.csv(\"data//nsbs_matched_detections_2014.csv\")\n",
"data\n"
]
}
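{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of a quick look at just the fields described above. The column names used here (`datecollected`, `tagname`, `commonname`, `station`, `latitude`, `longitude`) are assumed from a typical OTN matched-detection extract and may differ in your file; run `names(data)` first and adjust as needed."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Peek at the key fields of the extract.\n",
"# NOTE: these column names are assumed from a typical OTN matched-detection\n",
"# extract; run names(data) and adjust if yours differ.\n",
"data %>%\n",
"  dplyr::select(datecollected, tagname, commonname, station, latitude, longitude) %>%\n",
"  head()"
]
}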
],
"metadata": {
"kernelspec": {
"display_name": "R [conda env:anaconda3]",
"language": "R",
"name": "conda-env-anaconda3-r"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# R: A Statistical Programming Language\n",
"R is a language and environment for statistical computing and graphics. R provides a wide variety of statistical (linear and nonlinear modelling, classical statistical tests, time-series analysis, classification, clustering, …) and graphical techniques, and is highly extensible.\n",
"\n",
"*Source: https://www.r-project.org/about.html*"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## R Scripts\n",
"\n",
"An R script is simply a text file containing (almost) the same commands that you would enter on the command line of R. ( almost) refers to the fact that if you are using sink() to send the output to a file, you will have to enclose some commands in print() to get the same output as on the command line.\n",
"\n",
"*Source: https://cran.r-project.org/doc/contrib/Lemon-kickstart/kr_scrpt.html*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"variable <- \"Your name\"\n",
"\n",
"print(variable)"
]
},
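{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of the `sink()` point made above: while output is redirected to a file, values must be enclosed in `print()` to be written the same way they would appear on the command line. The file name `script_output.txt` is just an example."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# The file name script_output.txt is just an example\n",
"sink(\"script_output.txt\")   # start redirecting output to the file\n",
"print(variable)             # enclose in print() so the value is written to the file\n",
"sink()                      # stop redirecting; output returns to the console\n",
"readLines(\"script_output.txt\")"
]
},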
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## R Packages\n",
"\n",
"Packages are collections of R functions, data, and compiled code in a well-defined format. The directory where packages are stored is called the library. R comes with a standard set of packages. Others are available for download and installation. Once installed, they have to be loaded into the session to be used.\n",
"\n",
"*Source: https://www.statmethods.net/interface/packages.html*\n",
"\n",
"### Install the R package stringr"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"install.packages(\"stringr\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### stringr\n",
"A consistent, simple and easy to use set of wrappers around the fantastic 'stringi' package. All function and argument \n",
"names (and positions) are consistent, all functions deal with \"NA\"'s and zero length vectors in the same way, and the \n",
"output from one function is easy to feed into the input of another."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(stringr)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"We can use ``stringr`` to find substrings using Regular Expressions or strings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stringr::str_detect(variable, \"[aeiou]\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"``string`` also has a function to count the occurance of substrings."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"stringr::str_count(variable, \"[aeiou]\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### string + dplyr\n",
"Let's import some data a find a sepcific string in a column"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(dplyr)\n",
"data <- read.csv(\"data//nsbs_matched_detections_2014.csv\")\n",
"stringr::str_detect(data$unqdetecid, \"release\")"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R [conda env:anaconda3]",
"language": "R",
"name": "conda-env-anaconda3-r"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Cleaning and Preprocessing\n",
"\n",
"When analyzing data, 80% of time is spent cleaning and manipulating data and only 20% actually analyzing it. For this reason, it is critical to become familiar with the data cleaning process and getting your data into a format that can be analyzed.\n",
"\n",
"Let's begin with reading in our data using ``GLATOS`` (which will be explained below)."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(stringr)\n",
"library(dplyr)\n",
"library(glatos)\n",
"\n",
"detections <- glatos::read_otn_detections(\"data/nsbs_matched_detections_2014.csv\")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Data Types\n",
"\n",
"R has a wide variety of data types including scalars, vectors (numerical, character, logical), matrices, data frames, and lists. Check out a short explanation here: https://www.statmethods.net/input/datatypes.html\n",
"\n",
"Our data has been read in and the columns have been converted to their proper data types."
]
},
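{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of a few of these types; `class()` reports how R classifies each object."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# A few of R's basic data types\n",
"a_vector    <- c(1.5, 2.0, 3.5)                          # numeric vector\n",
"a_matrix    <- matrix(1:6, nrow = 2)                     # 2 x 3 matrix\n",
"a_dataframe <- data.frame(id = 1:2, name = c(\"a\", \"b\"))  # data frame\n",
"a_list      <- list(id = 1, tags = a_vector)             # list of mixed types\n",
"\n",
"sapply(list(vector = a_vector, matrix = a_matrix, data.frame = a_dataframe, list = a_list), class)"
]
},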
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"sapply(detections, class)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Filtering\n",
"\n",
"We can use ``dplyr::filter()`` to find rows/cases where conditions are true. Combining this with ``stringr::str_detect()``\n",
"\n",
"*dplyr Filtering: https://dplyr.tidyverse.org/reference/filter.html*\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"releases <- detections %>% dplyr::filter(stringr::str_detect(unqdetecid, \"release\"))"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"non_release_detections <- detections %>% dplyr::filter(!stringr::str_detect(unqdetecid, \"release\"))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Total Detections"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"count(detections)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Number of releases"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"count(releases)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Number of Non-Release Detections (The Good Stuff)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"count(non_release_detections)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## `GLATOS` \n",
"There's an ongoing effort to combine the work done by many researchers worldwide on the creation of these and other analysis and visualization tools so that work is not duplicated, and so that researchers don't have to start from scratch when implementing analysis techniques. \n",
"\n",
"The Great Lakes Acoustic Telemetry Observing System group has gathered a few of their more programming-minded researchers and authored an [R package](https://gitlab.oceantrack.org/GreatLakes/glatos), and invited OTN and some technical people at Vemco to help them maintain and extend this package to ensure that it's useful for telemeters all over the world. There are a few very common methods of looking at acoustic detection data codified in `glatos`, and it serves as a great jumping off point for the design of new methods of analysis and visualization. The Pincock calculation above exists as a prebuilt function in the `glatos` toolbox, and there are a few others we'll peek at now to help us with the visualization of these datasets.\n",
"\n",
"The notebook concept's a bit new to the `glatos` package, so be aware that its functions save most of their output to files. Those files will be in your project folder."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### False Filtering\n",
"\n",
"### False Detection Filtering using the Pincock algorithm.\n",
"Doug Pincock defined a temporal threshhold algorithm to determine whether detections in a set of detections from a station could be considered real (https://www.vemco.com/pdf/false_detections.pdf). The thrust of the filter is that a single detection at a station could very well be false, and it would require multiple detections of a tag by a receiver within a certain time frame to confirm that tag actually existed and was pinging at that station, and its ID was not the result of a collision event between two other tags. \n",
"\n",
"### Tag collision resulting in no detection:\n",
"![tag collision between two tags results in no decodable detection](media/tag-collision.png)\n",
"### Tag collision resulting in false detection:\n",
"![tag collision between two tags results in a false code](media/tag-false.png)\n",
"(figs from False Detections: What They Are and How to Remove Them from Detection Data, Pincock 2012)"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"nr_detections_with_filter <- glatos::false_detections(non_release_detections, tf = 3600)"
]
},
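{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick check of the filter output: `false_detections()` adds a `passed_filter` column, and tabulating it shows how many detections passed versus how many were flagged as potentially false."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# How many detections passed the filter vs. were flagged as potentially false?\n",
"table(nr_detections_with_filter$passed_filter)"
]
},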
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"filtered_detections <- nr_detections_with_filter %>% filter(passed_filter != FALSE)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "R [conda env:anaconda3]",
"language": "R",
"name": "conda-env-anaconda3-r"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Analysis Using GLATOS"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(dplyr)\n",
"library(glatos)\n",
"\n",
"detections <- glatos::read_otn_detections(\"data/nsbs_matched_detections_2014.csv\") \n",
"detections <- detections %>% filter(!stringr::str_detect(unqdetecid, \"release\"))\n",
"detections <- glatos::false_detections(detections, tf = 3600)\n",
"filtered_detections <- detections %>% filter(passed_filter != FALSE)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Time Series Analysis & Lubridate\n",
"\n",
"Time series show the when, the before, and the after for data points. The ``lubridate`` package is especially useful for handling time calculations.\n",
"\n",
"Date-time data can be frustrating to work with in R. R commands for date-times are generally unintuitive and change depending on the type of date-time object being used. Moreover, the methods we use with date-times must be robust to time zones, leap days, daylight savings times, and other time related quirks, and R lacks these capabilities in some situations. Lubridate makes it easier to do the things R does with date-times and possible to do the things R does not.\n",
"\n",
"*Source: https://lubridate.tidyverse.org*"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"library(lubridate)"
]
},
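{
"cell_type": "markdown",
"metadata": {},
"source": [
"A minimal sketch of a simple time-series summary, assuming the timestamp column is named `detection_timestamp_utc` (the name `glatos::read_otn_detections()` typically assigns): bin the filtered detections by month with `lubridate::floor_date()` and count them."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Count filtered detections per month.\n",
"# NOTE: detection_timestamp_utc is assumed to be the timestamp column created by\n",
"# glatos::read_otn_detections(); run names(filtered_detections) to confirm.\n",
"filtered_detections %>%\n",
"  mutate(month = lubridate::floor_date(detection_timestamp_utc, unit = \"month\")) %>%\n",
"  group_by(month) %>%\n",
"  summarise(n_detections = n())"
]
},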
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "R [conda env:anaconda3]",
"language": "R",
"name": "conda-env-anaconda3-r"
},
"language_info": {
"codemirror_mode": "r",
"file_extension": ".r",
"mimetype": "text/x-r-source",
"name": "R",
"pygments_lexer": "r",
"version": "3.4.1"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Resources - Stuff for Today's Lab\n",
"<hr>\n",
"## Installing Git\n",
"Go to https://git-scm.com/book/en/v2/Getting-Started-Installing-Git , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing R\n",
"Go to https://cloud.r-project.org/ , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing RStudio\n",
"Go to https://www.rstudio.com/products/rstudio/download/#download , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing Conda\n",
"Go to https://conda.io/docs/user-guide/install/index.html#regular-installation , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing Jupyter\n",
"Open a command prompt and run: `conda install jupyter` after installing conda.\n",
"\n",
"\n",
"## Installing IRkernel\n",
"Go to https://irkernel.github.io/installation/ , choose your operating system and follow the steps."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## Why use JuPyTeR Notebooks?\n",
"\n",
"Open source web-ready application for creating shareable documents with embedded live code, equations, visualizations, and explanatory text.\n",
"\n",
"When publishing papers that lean heavily on computation and processing, you can (and should!) provide the Notebook alongside the paper that takes the reader through your data cleaning and analysis.\n",
"\n",
"### Open data and open process\n",
"\n",
"![Workflow for working with data](http://remi-daigle.github.io/2016-04-15-UCSB/git/img/r4ds_data-science.png)\n",
"\n",
"Currently when you're submitting a paper to most journals, you're asked to make your raw data available. Your paper is where you do the work of communicating the result of your process in plain text, but the steps you took to tidy and transform and visualize your data in order to arrive at your conclusions are very rarely fully expressed in a reproducible way. \n",
"\n",
"And they should be, and with JuPyTeR notebooks, they can be.\n",
"\n",
"![Adam Savage science quote](https://i.imgur.com/1h3K2TT.jpg)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## More Resources for using R for acoustic telemetry analysis and visualization\n",
"\n",
"### GISinR\n",
"[Materials from a graduate course](https://gitlab.oceantrack.org/otn-statistical-modelling-group/gisinr) teaching GIS techniques in R using OTN acoustic telemetry project data for source data.\n",
"\n",
"### GLATOS package\n",
"[A community-written R package for analyzing acoustic telemetry](https://gitlab.oceantrack.org/GreatLakes/glatos) that is available through the OTN GitLab.\n",
"\n",
"### OTN Statistical Modelling Group\n",
"[A collection of packages](https://gitlab.oceantrack.org/otn-statistical-modelling-group) by the Statistical Modelling Group at OTN using various statistical techniques to analyze acoustic and satellite telemetry.\n",
"\n",
"### ROpenSci\n",
"[A public collection of scientific data packages in R](https://ropensci.org/) with the ability for users to upload their own work to share it with the community.\n",
"\n",
"### Dplyr Cheat Sheet\n",
"[A PDF cheat sheet for common uses of `dplyr`](https://www.rstudio.com/wp-content/uploads/2015/02/data-wrangling-cheatsheet.pdf)\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Analysis with pandas and resonATe \n",
"\n",
"resonATe Docs: http://resonate.readthedocs.io/en/latest/\n",
"\n",
"Pandas Cheat Sheet: https://www.datacamp.com/community/blog/python-pandas-cheat-sheet"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from resonate.filter_detections import filter_detections \n",
"from resonate.compress import compress_detections\n",
"import resonate.kessel_ri as ri\n",
"import pandas as pd\n",
"\n",
"df = pd.read_csv('data/nsbs_matched_detections_2014.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Filtering Detections on Distance / Time\n",
"\n",
"*(White, E., Mihoff, M., Jones, B., Bajona, L., Halfyard, E. 2014. White-Mihoff False Filtering Tool)*\n",
"\n",
"OTN has developed a tool which will assist with filtering false detections. The first level of filtering involves identifying isolated detections. The original concept came from work done by Easton White. He was kind enough to share his research database with OTN. We did some preliminary research and developed a proposal for a filtering tool based on what Easton had done. This proof of concept was presented to Steve Kessel and Eddie Halfyard in December 2013 and a decision was made to develop a tool for general use.\n",
"\n",
"This is a very simple tool. It will take an input file of detections and based on an input parameter will identify suspect detections. The suspect detections will be put into a dataframe which the user can examine. There will be enough information for each suspect detection for the user to understand why it was flagged. There is also enough information to be able to reference the detection in the original file if the user wants to see what was happening at the same time.\n",
"\n",
"The input parameter is a time in minutes. We used 60 minutes as the default as this is what was used in Easton's code. This value can be changed by the user. The output contains a record for each detection for which there has been more than xx minutes since the previous detection (of that tag/animal) and more than the same amount of time until the next detection. It ignores which receiver the detection occurred at. That is all it does, nothing more and nothing less.\n",
"\n",
"Below the interval is set to 60 minutes and is not using a a user specified suspect file. The function will also create a distance matrix."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"filtered_detections = filter_detections(df,\n",
" suspect_file=None,\n",
" min_time_buffer=60)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Residence Index\n",
"\n",
"Kessel et al. Paper https://www.researchgate.net/publication/279269147\n",
"\n",
"This residence index tool will take a compressed or uncompressed detection file and caculate the residency \n",
"index for each station/receiver in the detections. A CSV file will be written to the data directory\n",
"for future use. A Pandas DataFrame is returned from the function, which can be used to plot the information. \n",
"The information passed to the function is what is used to calculate the residence index, __make sure you are only\n",
"passing the data you want taken into consideration for the residence index (i.e. species, stations, tags, etc.)__.\n",
"\n",
"\n",
"__detections:__ The CSV file in the data directory that is either compressed or raw. If the file is not compressed \n",
"please allow the program time to compress the file and add the rows to the database. A compressed file will be created\n",
"in the data directory. Use the compressed file for any future runs of the residence index function.\n",
"\n",
"\n",
"__calculation_method:__ The method used to calculate the residence index. Methods are:\n",
"\n",
"- kessel \n",
"- timedelta\n",
"- aggregate_with_overlap\n",
"- aggregate_no_overlap.\n",
"\n",
"\n",
"__project_bounds:__ North, South, East, and West bounding longitudes and latitudes for visualization.\n",
"\n",
"The calculation methods are listed and described below before they are called. The function will default to the\n",
"Kessel method when nothing is passed.\n",
"\n",
"Below is an example of inital variables to set up, which are the detection file and the project bounds.\n",
"\n",
"<hr/>\n",
"## Kessel Residence Index Calculation\n",
"The Kessel method converts both the startdate and enddate columns into a date with no hours, minutes,\n",
"or seconds. Next it creates a list of the unique days where a detection was seen. The size of the\n",
"list is returned as the total number of days as an integer. This calculation is used to determine the \n",
"total number of distinct days (T) and the total number of distinct days per station (S).\n",
"\n",
"$RI = \\frac{S}{T}$\n",
"\n",
"RI = Residence Index\n",
"\n",
"S = Distinct number of days detected at the station\n",
"\n",
"T = Distinct number of days detected anywhere on the array\n",
"\n",
"Warning:\n",
"\n",
" Possible rounding error may occur as a detection on ``2016-01-01 23:59:59``\n",
" and a detection on ``2016-01-02 00:00:01`` would be counted as two days when it is really 2-3 seconds.\n",
" "
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"res_index = ri.residency_index(filtered_detections['filtered'])\n",
"ri.interactive_map(res_index)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Visual Timeline\n",
"\n",
"<hr/>\n",
"\n",
"``render_map()`` takes a detection extract CSV file as a data source, \n",
"as well as a string indicating what the title of the plot should be. \n",
"The title string will also be the filename for the HTML output, located\n",
"in an html file.\n",
"\n",
"You can supply a basemap argument to choose from a few alternate basemap tilesets. Available basemaps are:\n",
"\n",
"- No basemap set or ``basemap='dark_layer'`` - CartoDB/OpenStreetMap Dark\n",
"- ``basemap='Esri_OceanBasemap'`` - coarse ocean bathymetry\n",
"- ``basemap='CartoDB_Positron'`` - grayscale land/ocean \n",
"- ``basemap='Stamen_Toner'`` - Stamen Toner - high-contrast black and white - black ocean"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import resonate.html_maps as hmaps\n",
"hmaps.render_map(filtered_detections['filtered'], \"Blue Sharks 2014\")"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}