Commit cfa14ed5 authored by Jon Pye's avatar Jon Pye

Merge branch 'master' into jdev

parents 209c70c3 250e90c9
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Resources\n",
"<hr>\n",
"## Installing Git\n",
"Go to https://git-scm.com/book/en/v2/Getting-Started-Installing-Git , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing R\n",
"Go to https://cloud.r-project.org/ , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing RStudio\n",
"Go to https://www.rstudio.com/products/rstudio/download/#download , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing Conda\n",
"Go to https://conda.io/docs/user-guide/install/index.html#regular-installation , choose your operating system and follow the steps.\n",
"\n",
"\n",
"## Installing Jupyter\n",
"After installing conda, open a command prompt and run: `conda install jupyter`."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Data Analysis with pandas and resonATe \n",
"\n",
"http://resonate.readthedocs.io/en/latest/"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"from resonate.filter_detections import filter_detections \n",
"from resonate.compress import compress_detections\n",
"import resonate.kessel_ri as ri\n",
"import pandas as pd\n",
"\n",
"df = pd.read_csv('data/nsbs_matched_detections_2014.csv')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Filtering Detections on Distance / Time\n",
"\n",
"*(White, E., Mihoff, M., Jones, B., Bajona, L., Halfyard, E. 2014. White-Mihoff False Filtering Tool)*\n",
"\n",
"OTN has developed a tool which will assist with filtering false detections. The first level of filtering involves identifying isolated detections. The original concept came from work done by Easton White. He was kind enough to share his research database with OTN. We did some preliminary research and developed a proposal for a filtering tool based on what Easton had done. This proof of concept was presented to Steve Kessel and Eddie Halfyard in December 2013 and a decision was made to develop a tool for general use.\n",
"\n",
"This is a very simple tool. It takes an input file of detections and, based on an input parameter, identifies suspect detections. The suspect detections are put into a dataframe which the user can examine. Each suspect detection carries enough information for the user to understand why it was flagged, and to reference the detection in the original file if the user wants to see what was happening at the same time.\n",
"\n",
"The input parameter is a time in minutes. We used 60 minutes as the default, as this is what was used in Easton's code; the user can change this value. The output contains a record for each detection for which more than the specified number of minutes passed since the previous detection (of that tag/animal) and more than the same amount of time passes until the next detection. It ignores which receiver the detection occurred at. That is all it does, nothing more and nothing less.\n",
"\n",
"Below, the interval is set to 60 minutes and no user-specified suspect file is used. The function will also create a distance matrix."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"filtered_detections = filter_detections(df,\n",
" suspect_file=None,\n",
" min_time_buffer=60)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Residence Index\n",
"\n",
"Kessel et al. Paper https://www.researchgate.net/publication/279269147\n",
"\n",
"This residence index tool will take a compressed or uncompressed detection file and calculate the residency \n",
"index for each station/receiver in the detections. A CSV file will be written to the data directory\n",
"for future use. A Pandas DataFrame is returned from the function, which can be used to plot the information. \n",
"The information passed to the function is what is used to calculate the residence index. __Make sure you are only\n",
"passing the data you want taken into consideration for the residence index (i.e. species, stations, tags, etc.)__.\n",
"\n",
"\n",
"__detections:__ The CSV file in the data directory that is either compressed or raw. If the file is not compressed \n",
"please allow the program time to compress the file and add the rows to the database. A compressed file will be created\n",
"in the data directory. Use the compressed file for any future runs of the residence index function.\n",
"\n",
"\n",
"__calculation_method:__ The method used to calculate the residence index. Methods are:\n",
"\n",
"- kessel \n",
"- timedelta\n",
"- aggregate_with_overlap\n",
"- aggregate_no_overlap\n",
"\n",
"\n",
"__project_bounds:__ North, South, East, and West bounding longitudes and latitudes for visualization.\n",
"\n",
"The calculation methods are listed and described below before they are called. The function will default to the\n",
"Kessel method when nothing is passed.\n",
"\n",
"Below, the residence index is calculated from the filtered detections and plotted on an interactive map.\n",
"\n",
"<hr/>\n",
"## Kessel Residence Index Calculation\n",
"The Kessel method converts both the startdate and enddate columns into a date with no hours, minutes,\n",
"or seconds. Next it creates a list of the unique days where a detection was seen. The size of the\n",
"list is returned as the total number of days as an integer. This calculation is used to determine the \n",
"total number of distinct days (T) and the total number of distinct days per station (S).\n",
"\n",
"$RI = \\frac{S}{T}$\n",
"\n",
"RI = Residence Index\n",
"\n",
"S = Distinct number of days detected at the station\n",
"\n",
"T = Distinct number of days detected anywhere on the array\n",
"\n",
"Warning:\n",
"\n",
"    Possible rounding error may occur, as a detection on ``2016-01-01 23:59:59``\n",
"    and a detection on ``2016-01-02 00:00:01`` would be counted as two distinct days even though they are only seconds apart.\n",
"    "
" "
]
},
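{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# A minimal sketch of the Kessel calculation (RI = S / T), using\n",
"# hypothetical hand-made detection days. This only illustrates the\n",
"# formula above; resonate's residency_index does this for you.\n",
"station_days = {\n",
"    'HFX001': {'2016-01-01', '2016-01-02'},\n",
"    'HFX002': {'2016-01-02'}\n",
"}\n",
"total_days = set().union(*station_days.values())  # T: distinct days anywhere on the array\n",
"ri = {s: len(days) / float(len(total_days)) for s, days in station_days.items()}  # S / T per station\n",
"print(ri)"
]
},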
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"res_index = ri.residency_index(filtered_detections['filtered'])\n",
"ri.interactive_map(res_index)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Visual Timeline\n",
"\n",
"<hr/>\n",
"\n",
"``render_map()`` takes a detection extract CSV file as a data source, \n",
"as well as a string indicating what the title of the plot should be. \n",
"The title string will also be used as the filename of the HTML output file.\n",
"\n",
"You can supply a basemap argument to choose from a few alternate basemap tilesets. Available basemaps are:\n",
"\n",
"- No basemap set or ``basemap='dark_layer'`` - CartoDB/OpenStreetMap Dark\n",
"- ``basemap='Esri_OceanBasemap'`` - coarse ocean bathymetry\n",
"- ``basemap='CartoDB_Positron'`` - grayscale land/ocean \n",
"- ``basemap='Stamen_Toner'`` - Stamen Toner, high-contrast black and white with a black ocean"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"import resonate.html_maps as hmaps\n",
"hmaps.render_map(filtered_detections['filtered'], \"Blue Sharks 2014\")"
]
}
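,
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"outputs": [],
"source": [
"# Optional: render the same timeline on one of the alternate basemaps\n",
"# listed above (here the Esri ocean bathymetry tiles). The new title\n",
"# keeps the output from overwriting the previous HTML file.\n",
"hmaps.render_map(filtered_detections['filtered'], \"Blue Sharks 2014 Ocean\", basemap='Esri_OceanBasemap')"
]
}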
],
"metadata": {
"kernelspec": {
"display_name": "Python 2",
"language": "python",
"name": "python2"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 2
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython2",
"version": "2.7.13"
}
},
"nbformat": 4,
"nbformat_minor": 2
}