Equivalent to https://github.com/jupyterhub/jupyterhub/pull/2224
Prometheus metrics can potentially leak information about
the user, so they should be kept behind auth by default.
However, in many JupyterHub deployments they need to be scraped
by a centralized Prometheus instance that cannot realistically
authenticate separately to each user's notebook server without
a lot of work. Admins can use this setting to allow unauthenticated
access to the /metrics endpoint.
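A minimal sketch of opting out, assuming the trait is named `authenticate_prometheus` as in the equivalent JupyterHub change:

```python
# jupyter_notebook_config.py (trait name assumed from the JupyterHub PR)
c = get_config()  # provided by the Jupyter config loader
c.NotebookApp.authenticate_prometheus = False
```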
This commit updates super usage. Because Python 2 is no longer
supported, `super` can now be called without any arguments in the
default case, where it was previously called with the class name
and self.
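A self-contained illustration of the rewrite (class names are made up):

```python
class Base:
    def greet(self):
        return "hello"

class Child(Base):
    def greet(self):
        # Python 2 compatible form: super(Child, self).greet()
        # Python 3 zero-argument form is equivalent:
        return super().greet()

assert Child().greet() == "hello"
```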
Note that not every use of super has been updated: a few cases which
smelled funny have been left alone.
On Python 3 the default source file encoding for Python files is UTF-8,
and because Python 2 is no longer supported, the UTF-8 coding cookies
can be removed.
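For reference, the removed cookies are PEP 263 comments like this at the top of a file:

```python
# -*- coding: utf-8 -*-
```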
* Don't track API requests with `?no_track_activity=1` in the activity counter
  allows external idle-culling scripts to avoid updating the activity counter
  (see the sketch after this list)
* Don't track kernel shutdown as kernel activity
  previously, idle-triggered kernel shutdowns would restart the idle-shutdown timer;
  user-requested shutdowns are still tracked as API activity
* test `?no_track_activity=1` tracking
* Changelog for activity
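A hedged sketch of the first bullet from a culler's point of view; the base URL, token, and choice of endpoint are placeholders:

```python
import requests

BASE = "http://localhost:8888"   # placeholder
TOKEN = "<token>"                # placeholder

# no_track_activity=1 keeps this poll from counting as user activity
resp = requests.get(
    f"{BASE}/api/status?no_track_activity=1",
    headers={"Authorization": f"token {TOKEN}"},
)
print(resp.json())
```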
Require tornado 5, instead of the monkeypatch we used to keep the backport patch small.
Requiring tornado 5 simplifies things a ton because `tornado.concurrent.Future` is `asyncio.Future` on Python 3.
[Prometheus](https://prometheus.io/) provides a standard
metrics format that can be collected and used in many contexts.
- From the browser
to drive 'current resource usage' displays, such
as https://github.com/yuvipanda/nbresuse
- From a prometheus server
to collect historical data for operational analysis and
performance monitoring
Example: https://grafana.mybinder.org/dashboard/db/1-overview?refresh=1m&orgId=1
for mybinder.org metrics from JupyterHub and BinderHub,
via prometheus server at https://prometheus.mybinder.org
The JupyterHub and BinderHub projects already expose Prometheus
metrics natively. Adding the same to the Jupyter notebook server
allows us to instrument the code easily, in a standard format
with extensive third-party tooling.
This commit does the following:
- Introduce the `prometheus_client` library as a dependency.
This library has no dependencies of its own and is pure python.
- Add an authenticated `/metrics` endpoint to the server,
which returns metrics in Prometheus Text Format
- Expose the default process metrics from `prometheus_client`,
which include memory usage and CPU usage info (for just the
notebook process)
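As a rough sketch (not necessarily the exact handler added here), serving `prometheus_client` metrics from a tornado handler looks like:

```python
from prometheus_client import CONTENT_TYPE_LATEST, generate_latest
from tornado import web

class MetricsHandler(web.RequestHandler):
    """Sketch of an authenticated /metrics endpoint."""

    @web.authenticated
    def get(self):
        # generate_latest() renders the default registry,
        # including the process collector's memory/CPU metrics
        self.set_header("Content-Type", CONTENT_TYPE_LATEST)
        self.write(generate_latest())
```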
* Added a flag to allow access to hidden files
  The `--allow-hidden` flag allows Tornado to access hidden files
  such as '.images/my_img.jpg'
* Fixed JupyterLab not following allow-hidden
  JupyterLab stores its options in a different location than the
  standard notebook, so the check now looks there as well.
* Updated the implementation to work for any app
  Previously the settings dict was accessed based on the name of
  the app being used, e.g. 'NotebookApp' or 'LabApp'.
  Now the setting is passed directly into the Tornado settings and
  can be accessed via a more general method.
* Added/fixed unit tests for test_hidden_files
  Fixed the broken unit tests by setting the default to allow_hidden=False,
  then added a unit test in FilesTest.test_hidden_files that checks
  the accessibility of files with allow_hidden=True
* allow-hidden now works everywhere
  Previously the allow-hidden flag only allowed hidden files to be accessed via
  Tornado. Now the flag also exposes hidden directories and files via the
  file browser.
* Remove --allow-hidden alias
* Move allow_hidden option onto ContentsManager
* Use try/finally to ensure allow_hidden option is set back to False after test
* Allow access to hidden files, but don't list them for now
* Simplify hidden check for listing again
* Fix indentation
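With the option on `ContentsManager`, enabling it is a one-line config change (illustrative):

```python
# jupyter_notebook_config.py
c.ContentsManager.allow_hidden = True
```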
* tornado 5: the PeriodicCallback io_loop argument will be removed
  PeriodicCallbacks always run on the current event loop,
  which is what the explicitly passed loop already was for us
  (see the sketch after this list)
* Don't double-close socket & stream
  closing the stream also closes the socket
* remove now-inaccurate comment
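A sketch of the first bullet's change; `tick` is a made-up callback:

```python
from tornado.ioloop import PeriodicCallback

def tick():
    print("tick")

# tornado 4 style: PeriodicCallback(tick, 1000, io_loop=loop)
# tornado 5 always uses the current event loop, so the argument is dropped:
pc = PeriodicCallback(tick, 1000)  # callback_time in milliseconds
pc.start()
```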
* Load translations for Javascript in page template
* Normalise language codes to gettext format with underscores (see the helper sketch after this list)
* .mo files need to be under LC_MESSAGES as well
* remove unused JS code
* Normalise result in test
* Fix for opening files on Py 2
* Fix location of I18N directory
* Add translation files to package_data
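For the normalisation bullet, a hypothetical helper showing the intent (not necessarily the code used here):

```python
def normalize_lang(code: str) -> str:
    """Normalise a language code like 'zh-CN' to gettext's 'zh_CN' form."""
    return code.replace("-", "_")

assert normalize_lang("zh-CN") == "zh_CN"
```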
Avoids clobbering cookies when multiple notebook servers are run on one host.
Users can override `cookie_options.path = '/'` if they *want* cookies to be shared across notebook servers on one host.
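For example (illustrative config), to opt back into host-wide cookies:

```python
# jupyter_notebook_config.py
c.NotebookApp.cookie_options = {"path": "/"}
```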
During the deprecation/removal of the `@json_errors` decorator, the
`reason` field was not carried forward into the compatible replacement
method `APIHandler.write_error`. This broke some client tests that
relied on that field's presence.
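A simplified sketch of carrying the reason through `write_error` (the real handler does more):

```python
import json
from http.client import responses

from tornado import web

class APIHandler(web.RequestHandler):
    def write_error(self, status_code, **kwargs):
        """Render API errors as JSON, preserving a custom HTTPError reason."""
        message = responses.get(status_code, "Unknown HTTP Error")
        reply = {"message": message}
        exc_info = kwargs.get("exc_info")
        if exc_info:
            # tornado passes the exception triple; HTTPError may carry a reason
            reason = getattr(exc_info[1], "reason", "")
            if reason:
                reply["reason"] = reason
        self.set_header("Content-Type", "application/json")
        self.finish(json.dumps(reply))
```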
Fixes #2957.
We already apply this logic in our server-side checks,
but browsers enforce `Access-Control-Allow-Origin` headers themselves as well,
meaning that token-authenticated requests can't be made cross-origin from browsers
without CORS headers; only non-browser scripts can make them.
This makes the default browser and server-side origin checks consistent.
get_current_user is called in a few places that really shouldn't raise.
Move the raising to `get_login_url`, which is called in `@web.authenticated`,
where we want to replace the redirect logic with a 403.
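A hedged sketch of the shape of that change (simplified, not the exact code):

```python
from tornado import web

class APIHandler(web.RequestHandler):
    def get_login_url(self):
        # @web.authenticated calls get_login_url when there is no
        # current user; raising here turns the login redirect into a 403
        if self.current_user is None:
            raise web.HTTPError(403)
        return super().get_login_url()
```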
When starting a kernel using the Jupyter Notebook Kernel API, web
browsers automatically check for the presence of `x-xsrftoken` in
the `Access-Control-Allow-Headers` response header during the CORS
preflight check ([ref][ref]).
[ref]: https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers
Since we didn't allow this header before, web browsers would fail the
preflight check even when the `x-xsrftoken` header wasn't being used by
the notebook server.
This meant that running a webpage on localhost:8080 that used Javascript
to start a kernel on a notebook server running on localhost:8888 would
fail.
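The fix amounts to including `x-xsrftoken` in the allowed headers, along these lines (sketch):

```python
from tornado import web

class CORSHandler(web.RequestHandler):
    def set_default_headers(self):
        # list x-xsrftoken so browser CORS preflight checks pass
        self.set_header(
            "Access-Control-Allow-Headers",
            "accept, content-type, authorization, x-xsrftoken",
        )
```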
How I tested this commit:
1. Start a notebook server using
   `jupyter notebook --no-browser --NotebookApp.allow_origin="*" --NotebookApp.disable_check_xsrf=True --NotebookApp.token=''`
2. Build the [web3](https://github.com/jupyter-widgets/ipywidgets/tree/master/examples/web3) example from ipywidgets.
3. In that directory, run `npm run host`.
4. Verify that visiting http://localhost:8080/ starts a kernel in the notebook server.