* Don't track API requests with `?no_track_activity=1` in the activity counter
allows external idle-culling scripts to avoid updating the activity counter
* Don't track kernel shutdown as kernel activity
this causes idle-kernel shutdowns to restart the idle-shutdown timer
user-requested shutdowns will still be tracked as api activity
* test ?no_track_activity=1 tracking
* Changelog for activity
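As a sketch of how an idle-culling script might use this flag (the base URL, API path, and token below are hypothetical placeholders, not values from this change):

```python
from urllib.parse import urlencode

def activity_safe_url(base, path, token):
    """Build an API URL that opts out of activity tracking.

    Hypothetical helper: base, path, and token stand in for whatever
    a real deployment uses.
    """
    query = urlencode({"no_track_activity": 1, "token": token})
    return f"{base}{path}?{query}"

# A culler can poll the kernels API without resetting the idle-shutdown timer.
url = activity_safe_url("http://127.0.0.1:8888", "/api/kernels", "abc123")
print(url)
```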
The connect and request timeout defaults have been updated from 20 to 60
seconds and a default value of 40 has been added for KERNEL_LAUNCH_TIMEOUT.
The code ensures that KERNEL_LAUNCH_TIMEOUT is in the env and that the
value of the request timeout is at least 2 seconds greater than KERNEL_LAUNCH_TIMEOUT.
This PR is a port of NB2KG PRs 35 and 38.
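A minimal sketch of the described defaults and constraint (variable names assumed for illustration, not the actual code):

```python
import os

# Default KERNEL_LAUNCH_TIMEOUT to 40 when the environment doesn't set it,
# ensuring the value is always present in the env.
KERNEL_LAUNCH_TIMEOUT = int(os.environ.setdefault("KERNEL_LAUNCH_TIMEOUT", "40"))

connect_timeout = 60.0  # was 20
request_timeout = 60.0  # was 20

# Keep the request timeout at least 2 greater than the launch timeout so the
# HTTP request doesn't give up before the kernel has a chance to start.
request_timeout = max(request_timeout, KERNEL_LAUNCH_TIMEOUT + 2)
```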
`check_pid` returns `True` if the PID for a notebook server still exists. Therefore, the `if check_pid(pid):` statements on lines 424 and 437 evaluate to `True` while the notebook server is still running, triggering branches that are meant to run only after the server has shut down.
This commit simply adds a `not` to each line: `if not check_pid(pid):`, so that the conditional only evaluates to `True` if `check_pid` returns `False`, which happens when the notebook server has shut down, as expected.
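For illustration, a POSIX-only sketch of the check and the corrected conditional (simplified; the real helper handles more platforms and edge cases):

```python
import os

def check_pid(pid):
    """Return True if a process with this PID exists (POSIX-only sketch)."""
    try:
        os.kill(pid, 0)  # signal 0 checks existence without sending anything
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but belongs to another user
    return True

# The fix: only clean up once the server has actually exited.
if not check_pid(os.getpid()):
    print("server has shut down; removing stale server info")
```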
Moved `assert mode == keyboard_mode` outside the branches.
This means that an unknown mode would be caught by the assert and would
never reach the else statement.
To handle this, the else statement was moved before the assert, so the
unknown-mode error is raised before the assert runs.
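In outline, the restructured check looks like this (the function body and mode names are assumed for illustration):

```python
def validate_dualmode_mode(mode, keyboard_mode):
    """Sketch: reject unknown modes first, then assert agreement."""
    # Handle unknown modes in the else branch first, so a bad mode raises
    # a clear error...
    if mode == "command":
        pass  # command-mode checks would go here
    elif mode == "edit":
        pass  # edit-mode checks would go here
    else:
        raise Exception(f"Unknown mode: {mode!r}")
    # ...and the assert below only ever sees known modes.
    assert mode == keyboard_mode
```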
Renamed validate_notebook_mode to validate_dualmode_mode to better
describe what the method does.
Added a docstring.
Removed the handling of index being None.
instead of the monkeypatch we did to keep the backport patch small
requiring tornado 5 simplifies things a ton because tornado.concurrent.Future is asyncio.Future
tornado gen.maybe_future is deprecated in >= 5.0 and doesn't accept asyncio coroutine objects or awaitables in general
causing failures with tornado 6 on asyncio
monkeypatch gen.maybe_future for easier backport to 5.x
later, we can update to use our maybe_future throughout
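A self-contained sketch of such a maybe_future replacement, accepting plain values as well as asyncio coroutines and awaitables (simplified relative to whatever the codebase actually adopts):

```python
import asyncio
import inspect

def maybe_future(obj):
    """Wrap obj in a Future if needed.

    Sketch of a gen.maybe_future replacement that also accepts asyncio
    coroutine objects and awaitables in general.
    """
    if inspect.isawaitable(obj):
        # ensure_future schedules coroutines and returns an asyncio.Future
        return asyncio.ensure_future(obj)
    # plain values are wrapped in an already-resolved Future
    f = asyncio.get_event_loop().create_future()
    f.set_result(obj)
    return f

async def demo():
    async def coro():
        return 42
    return await maybe_future(coro()), await maybe_future("plain")

result = asyncio.run(demo())
```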
it doesn’t matter if the close was clean or not,
we should still handle the close event.
we set a different onclose handler prior to the client requesting close,
which is likely what the old wasClean checks were for
Converted `MappingKernelManager.restart_kernel` to a coroutine so that
projects that take advantage of async kernel startup can also realize
appropriate behavior relative to restarts.
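A toy sketch (hypothetical class names, not the real implementations) of why the coroutine conversion matters: async kernel restarts can only be awaited from an async caller.

```python
import asyncio

class AsyncKernelManager:
    """Stand-in for a kernel manager with async startup/restart."""
    async def restart(self):
        await asyncio.sleep(0)  # placeholder for real async restart work
        return "restarted"

class MappingKernelManagerSketch:
    """Hypothetical sketch; not the real MappingKernelManager."""
    def __init__(self):
        self.km = AsyncKernelManager()

    async def restart_kernel(self, kernel_id):
        # As a coroutine, this can await the async restart instead of blocking.
        return await self.km.restart()

result = asyncio.run(MappingKernelManagerSketch().restart_kernel("k1"))
```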
Eliminated the Kernel and Kernelspec handlers. The Websocket (ZMQ)
channels handler still remains. This required turning a few methods
into coroutines in the Notebook server.
Renamed the Gateway config object to GatewayClient in case we want
to extend NB server (probably jupyter_server at that point) with
Gateway server functionality - so an NB server could be a Gateway
client or a server depending on launch settings.
Add code to _replace_ the channels handler rather than rely on its position
within the handlers list.
Updated mock-gateway to return the appropriate form of results.
Updated the session manager tests to use a sync ioloop to call the
now async manager methods.
Created a singleton class `Gateway` to store all configuration options
for a Gateway. This class also holds some helper methods to make it easier
to use the options and determine if the gateway option is enabled.
Updated the NotebookTestBase class to allow subclasses to influence
the patched environment as well as command line options via argv.
Added a test to ensure various gateway configuration items can be
set via the environment or command-line.
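A stripped-down sketch of the singleton pattern described (the class name and env-var name are placeholders; the real class is traitlets-based with many more options):

```python
import os

class GatewaySketch:
    """Hypothetical stand-in for the singleton configuration class."""
    _instance = None

    @classmethod
    def instance(cls):
        # Singleton access: every caller sees the same configuration object.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        # Options fall back to environment variables (env-var name assumed).
        self.url = os.environ.get("JUPYTER_GATEWAY_URL")

    @property
    def gateway_enabled(self):
        """Helper: the gateway is considered enabled when a URL is set."""
        return bool(self.url)
```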
This change alleviates a significant pain-point for consumers of Jupyter
Kernel and Enterprise Gateway projects by embedding the few classes defined
in the NB2KG server extension directly into the Notebook server. All code
resides in a separate gateway directory and the 'extension' is enabled
via a new configuration option `--gateway-url`.
Renamed classes from those used in standard NB2KG code so that Notebook
servers using the existing NB2KG extension will still work.
Added test_gateway.py to exercise overridden methods. It does this by
mocking the call that issues requests to the gateway server.
Updated the _Running a notebook server_ topic to include a description
of this feature.
- separates nbserver_extension config loading into new
init_server_extension_config method
- adds init_server_extension_config to the initialize function before
the init_webapp call
- adds nbserver_extension configuration found in config.d files (by the
BaseJSONConfigLoader) to both the underlying NotebookApp config object
and the self.nbserver_extensions value
- makes self.nbserver_extensions the canonical location for identifying
all nbserver_extensions rather than a temporary extensions variable
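The merge semantics described above can be sketched as follows (function and extension names assumed; config.d-discovered extensions are merged in, but explicitly configured values win):

```python
def merge_extension_config(nbserver_extensions, configd_extensions):
    """Merge config.d-discovered extensions under explicit settings (sketch)."""
    merged = dict(configd_extensions)
    merged.update(nbserver_extensions)  # explicit user settings take priority
    return merged

merged = merge_extension_config(
    {"myext": False},                   # explicitly disabled by the user
    {"myext": True, "otherext": True},  # enabled via config.d files
)
```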
This avoids putting the authentication token into a command-line
argument to launch the browser, where it's visible to other users.
Filesystem permissions should ensure that only the user who started the
notebook can use this route to authenticate.
Thanks to Dr Owain Kenway for suggesting this technique.
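The technique can be sketched like this: write the tokenized URL into a file with owner-only permissions and point the browser at that file, instead of passing the URL on the command line (the helper name and redirect markup here are illustrative):

```python
import os
import stat
import tempfile

def write_browser_open_file(url_with_token):
    """Write a small redirect page holding the token (illustrative helper)."""
    fd, path = tempfile.mkstemp(suffix=".html")
    # Owner-only permissions: other users on the machine can't read the token.
    os.chmod(path, stat.S_IRUSR | stat.S_IWUSR)
    with os.fdopen(fd, "w") as f:
        f.write('<meta http-equiv="refresh" content="0;url=%s"/>' % url_with_token)
    return path

path = write_browser_open_file("http://127.0.0.1:8888/?token=secret")
```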
For files below 25MB there was no visual feedback to the user when
uploading a file. This leads to confusion when uploading files that are
big but not huge over a slow network connection.
When kernels are culled, the kernel is terminated in the background,
unbeknownst to the session management. As a result, invalid sessions
can be produced that appear to exist, yet cannot produce a model from
the persisted row due to the associated kernel no longer being active.
Prior to this change, these sessions, when encountered via a subsequent
call to `get_session()`, would be deleted and a KeyError would be raised.
This change updates the existence check to tolerate those kinds of sessions.
It removes such sessions (as would happen previously), but rather than
raise a KeyError when attempting to convert the row to a dictionary,
it logs a warning and returns None, which then allows `session_exists()`
to return False since the session was removed (as was ultimately the
case previously).
Calls to `get_session()` remain just as before and have the potential
to raise `KeyError` in such cases. The difference now being that the
`KeyError` is accompanied by a message indicating the cause.
Fixes #4209
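A self-contained sketch of the tolerant check (the in-memory data model and helper names here are invented for illustration; the real code works against the session database):

```python
import logging

log = logging.getLogger("sessions-sketch")

# Invented in-memory stand-in for the sessions database.
sessions = {"s1": {"id": "s1", "kernel_id": "k-culled"}}
live_kernels = set()  # the culler already terminated "k-culled"

def row_to_model(row, tolerate_culled=False):
    """Convert a session row to a model; the kernel may have been culled."""
    if row["kernel_id"] not in live_kernels:
        # Remove the stale session in either case, as before.
        sessions.pop(row["id"], None)
        msg = "Kernel %r no longer active" % row["kernel_id"]
        if tolerate_culled:
            log.warning("%s; session removed", msg)
            return None
        raise KeyError(msg)
    return dict(row)

def session_exists(session_id):
    row = sessions.get(session_id)
    return row is not None and row_to_model(row, tolerate_culled=True) is not None
```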
As per issue #3335, we want all js tests migrated to selenium. This change migrates the test of buffered execution requests.
Test Plan:
py.test -v notebook/tests/selenium/test_buffering.py
Attempts to fix flakiness in `test_display_isolation`. We now ensure the iframe has been added to the DOM before calling the selector. To make this work, we clean up the iframe cells (and all other cells) at the end of each test. I'm not 100% positive this fixes the issue, since I haven't been able to reproduce the failure, but the hope is that it resolves the intermittent failures seen in https://github.com/jupyter/notebook/pull/4182.
- use %r instead of %s to handle quoting more succinctly
- add a finally block to ensure browser state is transitioned from iframe back to default content
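For reference, `%r` formats via `repr()` and so includes the quotes itself:

```python
name = "isolated"
with_repr = "cell %r" % name  # %r -> repr(): quotes included
with_str = "cell %s" % name   # %s -> str(): no quotes
print(with_repr)  # cell 'isolated'
print(with_str)   # cell isolated
```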
As per issue #3335, we want all js tests migrated to selenium. This change migrates and extends the svg isolation test (extended to include slightly more thorough validation of expected isolation behavior).
Test Plan:
py.test -v notebook/tests/selenium/test_display_isolation.py
Migrates a single js test (testing image display functionality) to selenium as per issue #3335.
Test Plan:
py.test -v notebook/tests/selenium/test_display_image.py
Currently the default URL message given on the console on startup is:
---
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://(myip.com or 127.0.0.1):8888/?token=8fdc8 ...
---
This will always need editing before use. Replace it with one host-IP option
(e.g. 'myip.com') and one local-IP option (127.0.0.1) to make it
copy/pastable again.
Currently the default URL message given on the console on startup is:
---
Copy/paste this URL into your browser when you connect for the first time,
to login with a token:
http://(myip.com or 127.0.0.1):8888/?token=8fdc8 ...
---
This will always need editing before use. Replace it with the host IP
(e.g. 'myip.com') to make it copy/pastable again.
This lets slower contents managers avoid blocking the event loop by allowing
more of their API to return futures.
Other usages of contents manager functions are already wrapped in maybe_future,
including a use of `file_exists` in contents/handlers.py
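A minimal sketch of why that wrapping helps: the caller can await the result whether the manager's `file_exists` is sync or async (toy managers below, not the real contents-manager API):

```python
import asyncio

class SyncCM:
    """Toy synchronous contents manager."""
    def file_exists(self, path):
        return path == "notebook.ipynb"

class AsyncCM:
    """Toy asynchronous contents manager returning an awaitable."""
    async def file_exists(self, path):
        await asyncio.sleep(0)
        return path == "notebook.ipynb"

async def exists(cm, path):
    # Equivalent in spirit to wrapping with maybe_future: await only if needed.
    result = cm.file_exists(path)
    if asyncio.iscoroutine(result):
        result = await result
    return result
```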