Commit f08d290f authored by Alex Nunes's avatar Alex Nunes

Merge branch 'dev' into 'master'

Dev

See merge request anunes/resonate!3
parents 75493756 5e57fe9e
......@@ -11,3 +11,4 @@ py_notebooks/html/*
docs/_build
conda-dist/*
Icon*
.pytest_*
package:
name: resonate
version: "0.3.1"
version: "1.0.0"
source:
git_rev: master
......@@ -22,7 +22,6 @@ requirements:
- numpy
- sphinx
- geopy
- simplejson
- nose
- colorama
- plotly
......@@ -34,7 +33,6 @@ requirements:
- numpy
- sphinx
- geopy
- simplejson
- nose
- colorama
- plotly
......
.wy-nav-content {
max-width: none !important;
}
.large-math > p {
font-size:2em !important;
}
......@@ -72,9 +72,9 @@ copyright = u'2017 Ocean Tracking Network. All Rights Reserved.'
# built documents.
#
# The short X.Y version.
version = 'v0.3.1'
version = 'v1.0.0'
# The full version, including alpha/beta/rc tags.
release = 'v0.3.1'
release = 'v1.0.0'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
......
......@@ -10,8 +10,17 @@ The columns you need are as follows:
- **datecollected** - Date and time of release or detection, all of which have the same timezone (example format: ``2018-02-02 04:09:45``).
- **longitude** - The receiver location at time of detection in decimal degrees.
- **latitude** - The receiver location at time of detection in decimal degrees.
- **scientificname** - The taxonomic name for the animal detected.
- **fieldnumber** - The unique number for the tag/device attached to the animal.
- **unqdetecid** - A unique value assigned to each record in the data. resonATe includes a function to generate this column if needed. Details in :ref:`Unique Detections ID <unq_detections_id_page>`.
The :ref:`Receiver Efficiency Index <receiver_efficiency_index_page>` also needs a deployment history for stations. The columns for deployments are as follows:
- **station_name** - A unique identifier for the station or mooring where the receiver was located. This column is used in resonATe for grouping detections which should be considered to have occurred in the same place.
- **deploy_date** - The date the receiver was placed in the water or became active (example format: ``2018-02-02``).
- **recovery_date** - The date the receiver was removed from the water or became inactive (example format: ``2018-02-02``).
- **last_download** - The date data was last retrieved from the receiver (example format: ``2018-02-02``).
No other columns are required, and extra columns will not affect the core functions; however, some functions can make use of them. For example, ``receiver_group`` can be used to color code data in the :ref:`Abacus Plot <abacus_plot_page>`.
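The required columns above can be sketched as minimal Pandas DataFrames. This is an illustration only: the station name, species, tag code, and dates below are all invented values.

```python
import pandas as pd

# Hypothetical minimal frames holding only the required columns;
# every value here is made up for illustration.
detections = pd.DataFrame({
    'datecollected': ['2018-02-02 04:09:45'],
    'longitude': [-63.5],
    'latitude': [44.6],
    'scientificname': ['Prionace glauca'],
    'fieldnumber': ['A69-1601-12345'],
    'unqdetecid': [1],
})

deployments = pd.DataFrame({
    'station_name': ['HFX001'],
    'deploy_date': ['2018-01-01'],
    'recovery_date': ['2018-06-01'],
    'last_download': ['2018-06-01'],
})

# Verify nothing required is missing before calling resonATe functions.
required = {'datecollected', 'longitude', 'latitude',
            'scientificname', 'fieldnumber', 'unqdetecid'}
assert required.issubset(detections.columns)
```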
.. warning::
......
.. _filter_page:
.. include:: notebooks/filter_detections.ipynb.rst
.. include:: notebooks/filters.ipynb.rst
Filtering Functions
-------------------
.. _distance_matrix_page:
.. automodule:: filter_detections
.. automodule:: filters
:members:
......@@ -18,6 +18,7 @@ extracts from OTN and other marine telemetry data.
* :ref:`Filtering <filter>`
* :ref:`Interval Data <interval>`
* :ref:`Residence Index <residence_index>`
* :ref:`Receiver Efficiency Index <receiver_efficiency>`
* :ref:`Unique ID <unqid>`
* :ref:`Visual Timeline <timeline>`
......@@ -67,6 +68,8 @@ This is a very simple tool. It will take an input file of detections and based o
The input parameter is a time in minutes. We used 60 minutes as the default as this is what was used in Easton's code. This value can be changed by the user. The output contains a record for each detection for which there has been more than xx minutes since the previous detection (of that tag/animal) and more than the same amount of time until the next detection. It ignores which receiver the detection occurred at. That is all it does, nothing more and nothing less. Details are in :ref:`Filter Tool <filter_page>`.
Two other filtering tools are available as well, one based on distance alone and one based on velocity. They can be found at :ref:`Filter Tools <filter_page>` as well.
.. _distance_matrix:
......@@ -93,6 +96,16 @@ Residence Index
This residence index tool will take a compressed or uncompressed detection file and calculate the residency index for each station/receiver in the detections. A CSV file will be written to the data directory for future use. A Pandas DataFrame is returned from the function, which can be used to plot the information. The information passed to the function is what is used to calculate the residence index, so make sure you are only passing the data you want taken into consideration (i.e. species, stations, tags, etc.). Details in :ref:`Residence Index Tool <residence_index_page>`.
.. _receiver_efficiency:
Receiver Efficiency Index
-------------------------
`(Ellis, R., Flaherty-Walia, K., Collins, A., Bickford, J., Walters Burnsed, S., Lowerre-Barbieri, S. 2018. Acoustic telemetry array evolution: from species- and project-specific designs to large-scale, multispecies, cooperative networks) <https://doi.org/10.1016/j.fishres.2018.09.015>`_
The receiver efficiency index is a number between ``0`` and ``1`` indicating the amount of relative activity at each receiver compared to the entire set of receivers, regardless of positioning. The function takes a set of detections and a deployment history of the receivers to create a context for the detections. Both the number of unique tags and the number of species are taken into consideration in the calculation. For the exact method, see the details in :ref:`Receiver Efficiency Index <receiver_efficiency_index_page>`.
.. _unqid:
Unique Id
......@@ -105,7 +118,7 @@ This tool will add a column to any file. The unique id will be sequential intege
Visual Timeline
---------------
This tool takes a detections extract file, compresses it, and generates an HTML and JSON file to an ``html`` folder. Details in :ref:`Visual Timeline <visual_timeline_page>`.
This tool takes a detections extract file and generates a Plotly animated timeline, either in place in an iPython notebook or exported out to an HTML file. Details in :ref:`Visual Timeline <visual_timeline_page>`.
Contents:
---------
......@@ -123,6 +136,7 @@ Contents:
filter
interval_data
residence_index
receiver_efficiency_index
notebooks/data_subsetting.ipynb
unqid
visual_timeline
......
......@@ -8,9 +8,8 @@ Conda
.. code:: bash
conda config --add channels ioos
conda config --add channels conda-forge
conda install -c anunes resonate
conda install resonate
......
......@@ -9,7 +9,7 @@ is used to group detections together and assign them a color.
.. warning::
Input files must include ``datecollected`` as a column.
Input files must include ``datecollected`` as a column.
.. code:: python
......@@ -29,3 +29,4 @@ Or use the standard plotting function to save as HTML:
.. code:: python
abacus_plot(df, ipython_display=False, filename='example.html')
......@@ -9,7 +9,7 @@ individuals seen at each location by using ``type = 'individual'``.
.. warning::
Input files must include ``station`` , ``catalognumber``, ``unqdetecid``, ``latitude``, ``longitude``, and ``datecollected`` as columns.
Input files must include ``station`` , ``catalognumber``, ``unqdetecid``, ``latitude``, ``longitude``, and ``datecollected`` as columns.
.. code:: python
......
......@@ -19,8 +19,8 @@ minutes (default is 60) to create the cohort dataframe.
.. warning::
Input files must include ``station``, ``catalognumber``,
``seq_num``, ``unqdetecid``, and ``datecollected`` as columns.
Input files must include ``station``, ``catalognumber``,
``seq_num``, ``unqdetecid``, and ``datecollected`` as columns.
.. code:: python
......
......@@ -44,7 +44,7 @@ Subsetting on column value
--------------------------
Provide the column you expect to have a certain value and the value
you'd like to create a subset from.
you'd like to create a subset from.
.. code:: python
......@@ -59,3 +59,4 @@ you'd like to create a subset from.
# Output the subset data to a new CSV in the indicated directory
data_column_subset.to_csv(directory+column+"_"+value.replace(" ", "_")+"_"+filename, index=False)
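The snippet above relies on variables defined earlier in the notebook. A self-contained sketch of the same idea follows; the frame and the chosen column/value are invented for illustration.

```python
import pandas as pd

# Hypothetical data; in practice this would come from pd.read_csv().
data = pd.DataFrame({
    'scientificname': ['Prionace glauca', 'Salmo salar', 'Prionace glauca'],
    'unqdetecid': [1, 2, 3],
})

column = 'scientificname'
value = 'Prionace glauca'

# Keep only the rows where the column holds the chosen value.
data_column_subset = data[data[column] == value]
print(len(data_column_subset))  # 2 matching rows
```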
......@@ -18,7 +18,7 @@ interval and cohort.
.. warning::
Input files must include ``datecollected``, ``catalognumber``, and ``unqdetecid`` as columns.
Input files must include ``datecollected``, ``catalognumber``, and ``unqdetecid`` as columns.
.. code:: python
......
......@@ -2,6 +2,9 @@
Filtering Detections on Distance / Time
=======================================
White/Mihoff Filter
-------------------
*(White, E., Mihoff, M., Jones, B., Bajona, L., Halfyard, E. 2014.
White-Mihoff False Filtering Tool)*
......@@ -23,7 +26,7 @@ information to be able to reference the detection in the original file
if the user wants to see what was happening at the same time.
The input parameter is a time in minutes. We used 60 minutes as the
default as this is what was used in Easton's code. This value can be
default as this is what was used in Easton's code. This value can be
changed by the user. The output contains a record for each detection for
which there has been more than xx minutes since the previous detection
(of that tag/animal) and more than the same amount of time until the
......@@ -35,12 +38,12 @@ specified suspect file. The function will also create a distance matrix.
.. warning::
Input files must include ``datecollected``, ``catalognumber``, ``station`` and ``unqdetecid`` as columns.
Input files must include ``datecollected``, ``catalognumber``, ``station`` and ``unqdetecid`` as columns.
.. code:: python
from resonate.filter_detections import get_distance_matrix
from resonate.filter_detections import filter_detections
from resonate.filters import get_distance_matrix
from resonate.filters import filter_detections
import pandas as pd
detections = pd.read_csv('/path/to/detections.csv')
......@@ -62,8 +65,72 @@ file to a desired location.
.. code:: python
filtered_detections['filtered'].to_csv('../tests/assertion_files/nsbs_filtered.csv', index=False)
filtered_detections['filtered'].to_csv('/path/to/output.csv', index=False)
filtered_detections['suspect'].to_csv('/path/to/output.csv', index=False)
filtered_detections['dist_mtrx'].to_csv('/path/to/output.csv', index=False)
Distance Filter
---------------
The distance filter will separate detections based only on distance. The
``maximum_distance`` argument defaults to 100,000 meters (or 100
kilometers), but can be adjusted. Any detection where the succeeding and
preceding detections are more than the ``maximum_distance`` away will be
considered suspect.
.. warning::
Input files must include ``datecollected``, ``catalognumber``, ``station`` and ``unqdetecid`` as columns.
.. code:: python
from resonate.filters import distance_filter
import pandas as pd
detections = pd.read_csv('/path/to/detections.csv')
filtered_detections = distance_filter(detections)
You can use the Pandas ``DataFrame.to_csv()`` function to output the
file to a desired location.
.. code:: python
filtered_detections['filtered'].to_csv('/path/to/output.csv', index=False)
filtered_detections['suspect'].to_csv('/path/to/output.csv', index=False)
Velocity Filter
---------------
The velocity filter will separate detections based on the animal’s
velocity. The ``maximum_velocity`` argument defaults to 10 m/s, but can
be adjusted. Any detection where the succeeding and preceding velocities
of an animal are more than the ``maximum_velocity`` will be considered
suspect.
.. warning::
Input files must include ``datecollected``, ``catalognumber``, ``station`` and ``unqdetecid`` as columns.
.. code:: python
from resonate.filters import velocity_filter
import pandas as pd
detections = pd.read_csv('/path/to/detections.csv')
filtered_detections = velocity_filter(detections)
You can use the Pandas ``DataFrame.to_csv()`` function to output the
file to a desired location.
.. code:: python
filtered_detections['filtered'].to_csv('/path/to/output.csv', index=False)
filtered_detections['suspect'].to_csv('/path/to/output.csv', index=False)
......@@ -15,11 +15,11 @@ Many consecutive detections of an animal are replaced by one interval.
.. warning::
Input files must include ``datecollected``, ``catalognumber``, and ``unqdetecid`` as columns.
Input files must include ``datecollected``, ``catalognumber``, and ``unqdetecid`` as columns.
.. code:: python
from resonate.filter_detections import get_distance_matrix
from resonate.filters import get_distance_matrix
from resonate.compress import compress_detections
from resonate.interval_data_tool import interval_data
import pandas as pd
......@@ -47,11 +47,11 @@ You can modify individual stations if needed by using
.. code:: python
station_name = 'station'
station_name = 'HFX001'
station_detection_radius = 500
station_det_radius.set_value(station_name, 'radius', geopy.distance.Distance( station_detection_radius/1000.0 ))
station_det_radius.at[station_name, 'radius'] = geopy.distance.Distance( station_detection_radius/1000.0 )
Create the interval data by passing the compressed detections, the
matrix, and the station radii.
......
Receiver Efficiency Index
=========================
The receiver efficiency index is a number between ``0`` and ``1``
indicating the amount of relative activity at each receiver compared to
the entire set of receivers, regardless of positioning. The function
takes a set of detections and a deployment history of the receivers to
create a context for the detections. Both the number of unique tags and
the number of species are taken into consideration in the calculation.
The receiver efficiency index is implemented based on the paper by
Ellis et al. (2018), cited above. Each receiver's index is calculated
with the formula:
.. container:: large-math
REI =
:math:`\frac{T_r}{T_a} \times \frac{S_r}{S_a} \times \frac{DD_r}{DD_a} \times \frac{D_a}{D_r}`
.. raw:: html
<hr/>
- REI = Receiver Efficiency Index
- :math:`T_r` = The number of tags detected on the receiver
- :math:`T_a` = The number of tags detected across all receivers
- :math:`S_r` = The number of species detected on the receiver
- :math:`S_a` = The number of species detected across all receivers
- :math:`DD_a` = The number of unique days with detections across all
receivers
- :math:`DD_r` = The number of unique days with detections on the
receiver
- :math:`D_a` = The number of days the array was active
- :math:`D_r` = The number of days the receiver was active
Each REI is then normalized against the sum of all considered stations.
The result is a number between ``0`` and ``1`` indicating the relative
amount of activity at each receiver.
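The formula and the normalization step can be sketched directly. The helper below is not part of resonATe, its variable names simply mirror the symbols above, and all counts are invented.

```python
# Hypothetical sketch of the REI formula; all values are made up.
def rei(T_r, T_a, S_r, S_a, DD_r, DD_a, D_r, D_a):
    return (T_r / T_a) * (S_r / S_a) * (DD_r / DD_a) * (D_a / D_r)

raw = {
    'HFX001': rei(T_r=10, T_a=40, S_r=2, S_a=4, DD_r=30, DD_a=120, D_r=100, D_a=100),
    'HFX002': rei(T_r=30, T_a=40, S_r=4, S_a=4, DD_r=90, DD_a=120, D_r=100, D_a=100),
}

# Normalize against the sum of all considered stations so each value
# falls between 0 and 1 and the set sums to 1.
total = sum(raw.values())
normalized = {station: value / total for station, value in raw.items()}
```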
.. warning::
Detection input files must include ``datecollected``, ``fieldnumber``, ``station``, and ``scientificname`` as columns and deployment input files must include ``station_name``, ``deploy_date``, ``last_download``, and ``recovery_date`` as columns.
``REI()`` takes two arguments. The first is a dataframe of detections
containing the detection timestamp, the station identifier, the species,
and the tag identifier. The second is a dataframe of deployments for
each station. The station names should match the stations in the
detections. The deployments need to include a deployment date and a
recovery date or last download date. For details on the columns
mentioned, see the preparing data section.
.. warning::
This function assumes that no deployments for a single station overlap. If deployments do overlap, the overlapping days will be counted twice.
.. code:: python
from resonate.receiver_efficiency import REI
detections = pd.read_csv('/path/to/detections.csv')
deployments = pd.read_csv('/path/to/deployments.csv')
station_REIs = REI(detections = detections, deployments = deployments)
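Because overlapping deployments inflate the day counts, it can be worth checking for them first. The sketch below is not part of resonATe; the station names and dates are invented, and the column names follow the deployment format described above.

```python
import pandas as pd

# Hypothetical deployment history with one deliberate overlap at HFX001.
deployments = pd.DataFrame({
    'station_name':  ['HFX001', 'HFX001', 'HFX002'],
    'deploy_date':   ['2018-01-01', '2018-05-01', '2018-01-01'],
    'recovery_date': ['2018-06-01', '2018-09-01', '2018-06-01'],
})
for col in ('deploy_date', 'recovery_date'):
    deployments[col] = pd.to_datetime(deployments[col])

def overlapping_stations(dep):
    flagged = []
    for station, g in dep.sort_values('deploy_date').groupby('station_name'):
        # A deployment overlaps when it starts before the previous one ends.
        if (g['deploy_date'] < g['recovery_date'].shift()).any():
            flagged.append(station)
    return flagged

print(overlapping_stations(deployments))  # ['HFX001']
```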
......@@ -2,7 +2,7 @@
Residence Index
===============
Kessel et al. Paper https://www.researchgate.net/publication/279269147
Kessel et al. Paper https://www.researchgate.net/publication/279269147
This residence index tool will take a compressed or uncompressed
detection file and calculate the residency index for each
......@@ -11,7 +11,7 @@ data directory for future use. A Pandas DataFrame is returned from the
function, which can be used to plot the information. The information
passed to the function is what is used to calculate the residence index,
**make sure you are only passing the data you want taken into
consideration for the residence index (i.e. species, stations, tags,
consideration for the residence index (i.e. species, stations, tags,
etc.)**.
**detections:** The CSV file in the data directory that is either
......@@ -20,16 +20,16 @@ program time to compress the file and add the rows to the database. A
compressed file will be created in the data directory. Use the
compressed file for any future runs of the residence index function.
**calculation\_method:** The method used to calculate the residence
**calculation_method:** The method used to calculate the residence
index. Methods are:
- kessel
- timedelta
- aggregate\_with\_overlap
- aggregate\_no\_overlap.
- aggregate_with_overlap
- aggregate_no_overlap.
**project\_bounds:** North, South, East, and West bounding longitudes
and latitudes for visualization.
**project_bounds:** North, South, East, and West bounding longitudes and
latitudes for visualization.
The calculation methods are listed and described below before they are
called. The function will default to the Kessel method when nothing is
......@@ -40,18 +40,15 @@ detection file and the project bounds.
.. warning::
Input files must include ``datecollected``, ``station``, ``longitude``, ``latitude``, ``catalognumber``, and ``unqdetecid`` as columns.
Input files must include ``datecollected``, ``station``, ``longitude``,
``latitude``, ``catalognumber``, and ``unqdetecid`` as columns.
.. code:: python
from resonate import kessel_ri as ri
from resonate import residence_index as ri
import pandas as pd
detections = pd.read_csv('/path/to/detections.csv')
.. raw:: html
<hr/>
detections = pd.read_csv('/path/to/detections.csv')
Kessel Residence Index Calculation
----------------------------------
......@@ -73,11 +70,11 @@ T = Distinct number of days detected anywhere on the array
.. warning::
Possible rounding error may occur as a detection on ``2016-01-01 23:59:59``
and a detection on ``2016-01-02 00:00:01`` would be counted as two days when it is really 2-3 seconds.
Possible rounding error may occur as a detection on ``2016-01-01 23:59:59``
and a detection on ``2016-01-02 00:00:01`` would be counted as two days when it is really 2-3 seconds.
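The ratio itself can be sketched with plain Pandas. This is an illustration only (use ``ri.residency_index`` in practice), with invented station names; note how two detections two seconds apart, on either side of midnight, count as two distinct days.

```python
import pandas as pd

# Hypothetical detections straddling midnight; all values are made up.
det = pd.DataFrame({
    'station': ['HFX001', 'HFX001', 'HFX002'],
    'datecollected': ['2016-01-01 23:59:59', '2016-01-02 00:00:01',
                      '2016-01-02 12:00:00'],
})
det['day'] = pd.to_datetime(det['datecollected']).dt.date

T = det['day'].nunique()                     # distinct days anywhere: 2
S = det.groupby('station')['day'].nunique()  # distinct days per station
kessel_ri = S / T

print(kessel_ri['HFX001'])  # 1.0 -- two detections ~2 seconds apart span two days
```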
Example Code
~~~~~~~~~~~~
Kessel RI Example Code
~~~~~~~~~~~~~~~~~~~~~~
.. code:: python
......@@ -85,10 +82,6 @@ Example Code
ri.plot_ri(kessel_ri)
.. raw:: html
<hr/>
Timedelta Residence Index Calculation
-------------------------------------
......@@ -108,8 +101,8 @@ time at the station
:math:`\Delta T` = Last detection time on an array - First detection
time on the array
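The timedelta ratio can likewise be sketched with plain Pandas. This is an illustration only (use ``ri.residency_index`` in practice), and the stations and timestamps are invented.

```python
import pandas as pd

# Hypothetical detections; all values are made up for illustration.
det = pd.DataFrame({
    'station': ['HFX001', 'HFX001', 'HFX002', 'HFX002'],
    'datecollected': ['2016-01-01 00:00:00', '2016-01-03 00:00:00',
                      '2016-01-01 00:00:00', '2016-01-05 00:00:00'],
})
det['datecollected'] = pd.to_datetime(det['datecollected'])

# Delta T: last detection anywhere minus first detection anywhere.
delta_t = det['datecollected'].max() - det['datecollected'].min()

# Delta S per station: last detection at the station minus the first.
delta_s = det.groupby('station')['datecollected'].agg(lambda s: s.max() - s.min())

timedelta_ri = delta_s / delta_t
print(timedelta_ri['HFX001'])  # 0.5 -- 2 days at the station over 4 days on the array
```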
Example Code
~~~~~~~~~~~~
Timedelta RI Example Code
~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python
......@@ -117,10 +110,6 @@ Example Code
ri.plot_ri(timedelta_ri)
.. raw:: html
<hr/>
Aggregate With Overlap Residence Index Calculation
--------------------------------------------------
......@@ -137,8 +126,8 @@ AwOS = Sum of length of time of each detection at the station
AwOT = Sum of length of time of each detection on the array
Example Code
~~~~~~~~~~~~
Aggregate With Overlap RI Example Code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python
......@@ -146,10 +135,6 @@ Example Code
ri.plot_ri(with_overlap_ri)
.. raw:: html
<hr/>
Aggregate No Overlap Residence Index Calculation
------------------------------------------------
......@@ -176,18 +161,14 @@ any overlap
AnOT = Sum of length of time of each detection on the array, excluding
any overlap
Example Code
~~~~~~~~~~~~
Aggregate No Overlap RI Example Code
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.. code:: python
no_overlap_ri = ri.residency_index(detections, calculation_method='aggregate_no_overlap')
ri.plot_ri(no_overlap_ri)
.. raw:: html
<hr/>
ri.plot_ri(no_overlap_ri, title="ANO RI")
Mapbox
------
......@@ -195,13 +176,11 @@ Mapbox
Alternatively, you can use a Mapbox access token to plot your map. Mapbox
is much more responsive than the standard Scattergeo plot.
Example Code
~~~~~~~~~~~~
Mapbox Example Code
~~~~~~~~~~~~~~~~~~~
.. code:: python
mapbox_access_token = 'ADD_YOUR_TOKEN_HERE'
mapbox_access_token = 'YOUR MAPBOX ACCESS TOKEN HERE'
kessel_ri = ri.residency_index(detections, calculation_method='kessel')
ri.plot_ri(kessel_ri, mapbox_token=mapbox_access_token)
ri.plot_ri(kessel_ri, mapbox_token=mapbox_access_token, marker_size=40, scale_markers=True)
......@@ -6,28 +6,45 @@ Visual Timeline
<hr/>
``render_map()`` takes a detection extract CSV file as a data source, as
well as a string indicating what the title of the plot should be. The
title string will also be the filename for the HTML output, located in
an html file.
This tool takes a detections extract file and generates a Plotly
animated timeline, either in place in an iPython notebook or exported
out to an HTML file.
You can supply a basemap argument to choose from a few alternate basemap
tilesets. Available basemaps are:
- No basemap set or ``basemap='dark_layer'`` - CartoDB/OpenStreetMap
  Dark
- ``basemap='Esri_OceanBasemap'`` - coarse ocean bathymetry
- ``basemap='CartoDB_Positron'`` - grayscale land/ocean
- ``basemap='Stamen_Toner'`` - Stamen Toner - high-contrast black and
  white - black ocean
.. warning::
Input files must include ``datecollected``, ``catalognumber``, ``station``, ``latitude``, and ``longitude`` as columns.
.. code:: python
from resonate.visual_timeline import timeline
import pandas as pd
detections = pd.read_csv("/path/to/detection.csv")
timeline(detections, "Timeline")
Exporting to an HTML File
-------------------------
You can export the map to an HTML file by setting ``ipython_display`` to
``False``.
.. code:: python
from resonate.visual_timeline import timeline
import pandas as pd
detections = pd.read_csv("/path/to/detection.csv")
timeline(detections, "Timeline", ipython_display=False)
Mapbox
------
Input files must include ``datecollected``, ``catalognumber``, ``station``, ``latitude``, ``longitude``, and ``unqdetecid`` as columns.
Alternatively, you can use a Mapbox access token to plot your map. Mapbox
is much more responsive than the standard Scattergeo plot.
.. code:: python
import resonate.html_maps as hmaps
from resonate.visual_timeline import timeline
import pandas as pd
mapbox_access_token = 'YOUR MAPBOX ACCESS TOKEN HERE'
detections = pd.read_csv("/path/to/detection.csv")
hmaps.render_map(detections, "Title")
timeline(detections, "Title", mapbox_token=mapbox_access_token)
.. _receiver_efficiency_index_page:
.. include:: notebooks/receiver_efficiency_index.ipynb.rst
Receiver Efficiency Index Functions
-----------------------------------
.. automodule:: receiver_efficiency
:members:
......@@ -5,5 +5,5 @@
Residence Index Functions
-------------------------
.. automodule:: kessel_ri
.. automodule:: residence_index
:members:
......@@ -2,19 +2,21 @@
.. include:: notebooks/visual_detection_timeline.ipynb.rst
Below is the sample output for blue sharks off of the coast of Nova Scotia.
Example Output
--------------
Below is the sample output for blue sharks off of the coast of Nova Scotia,
without using Mapbox.
.. raw:: html
<iframe src="_static/nova_scotia_blue_sharks.html" height="400px" width="100%"></iframe>
<iframe src="_static/timeline.html" height="750px" width="750px"></iframe>
<hr/>
Visual Timeline Functions
-------------------------
.. automodule:: html_maps
:members:
.. automodule:: geojson
.. automodule:: visual_timeline
:members:
......@@ -66,8 +66,8 @@
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "python",
"version": "3.6.3"
"pygments_lexer": "ipython3",
"version": "3.6.6"
},
"varInspector": {
"cols": {
......@@ -100,5 +100,5 @@
}
},
"nbformat": 4,
"nbformat_minor": 1
"nbformat_minor": 2
}
......@@ -6,6 +6,9 @@
"source": [
"# Filtering Detections on Distance / Time\n",
"\n",
"\n",
"## White/Mihoff Filter\n",
"\n",
"*(White, E., Mihoff, M., Jones, B., Bajona, L., Halfyard, E. 2014. White-Mihoff False Filtering Tool)*\n",
"\n",
"OTN has developed a tool which will assist with filtering false detections. The first level of filtering involves identifying isolated detections. The original concept came from work done by Easton White. He was kind enough to share his research database with OTN. We did some preliminary research and developed a proposal for a filtering tool based on what Easton had done. This proof of concept was presented to Steve Kessel and Eddie Halfyard in December 2013 and a decision was made to develop a tool for general use.\n",
......@@ -24,13 +27,11 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"from resonate.filter_detections import get_distance_matrix\n",
"from resonate.filter_detections import filter_detections\n",
"from resonate.filters import get_distance_matrix\n",
"from resonate.filters import filter_detections\n",
"import pandas as pd\n",
"\n",
"detections = pd.read_csv('/path/to/detections.csv')\n",
......@@ -57,36 +58,156 @@
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"collapsed": true
},
"metadata": {},
"outputs": [],
"source": [
"filtered_detections['filtered'].to_csv('../tests/assertion_files/nsbs_filtered.csv', index=False)\n",
"filtered_detections['filtered'].to_csv('/path/to/output.csv', index=False)\n",
"\n",
"filtered_detections['suspect'].to_csv('/path/to/output.csv', index=False)\n",
"\n",
"filtered_detections['dist_mtrx'].to_csv('/path/to/output.csv', index=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [