Jupyterhub: allow users created before Cowbird was enabled to spawn jupyterlab #480
base: master
Conversation
@tlvu please test this in your staging environment to ensure that this works for all of your users who were created before Cowbird was enabled by default (version 2.0).
@mishaschwartz great! Will test this next week. By the way, do you plan to roll all the changes that make Cowbird compatible with existing Magpie users and the poor-man sharing into one PR? Basically all the work-arounds found in #425, whenever they make sense and are possible, of course.
The workaround described here (#425 (comment)) can be enabled just by updating your env.local file. I don't think there are any additional code changes needed.
Looks good. Thanks for the fix!
I'll let @tlvu do a more in-depth test to validate.
Check out #481, which adds better documentation to avoid this for other users in the future.
Right, sorry, I forgot all the work-arounds are just configs in env.local.
@mishaschwartz Then from

So with this very minimal

Then I enabled a config that is more similar to our production setup.

With this I am unable to login to Jupyter anymore. Here are the logs from

Direct login to https://HOST/magpie with my user and passwd works, so no problem on the Magpie side. I removed the other components. This is a bit weird: I do not see how the other components can affect Cowbird. I'll continue investigating another time.
If you are unable to login, that is a different issue than the one addressed here. This fixes the issue that users were not able to spawn a jupyterlab container after they had logged in.

I'm also confused by your examples: in the first

In your second example, can you please try with
This is the Ouranos stack pre-2.0.0; jupyterhub, magpie and others are enabled by default 😄
Oh right! Never thought about this one, but very true that the ordering could matter. I have something to do today; will retry this investigation probably Thursday.
right, right.. forgot about that sorry
sounds good
I re-added the other components. So I continued my testing: I created a new user in Magpie to see how the Cowbird trigger works, and I noticed it creates the following:

```
$ ls -l /data/user_workspaces/testcowbird01
total 4
lrwxrwxrwx. 1 root root 40 Dec 19 20:58 notebooks -> /data/jupyterhub_user_data/testcowbird01
drwxrwxrwx+ 2 root root 6 Dec 19 20:58 shapefile_datastore
```

So maybe in this PR we might want to replicate the same behavior as the real Cowbird instead?

Then I was curious about this new `shapefile_datastore` directory:

```
$ ack shapefile_datastore

docs/components.rst
38:    /user_workspaces/<user_name>/shapefile_datastore  # Managed by the `GeoServer` handler

cowbird/handlers/impl/geoserver.py
72:DEFAULT_DATASTORE_DIR_NAME = "shapefile_datastore"
686:        return f"shapefile_datastore_{workspace_name}"
```

Notice there is also a `shapefile_datastore_{workspace_name}` naming variant. This leads me to think maybe we should not try to replicate Cowbird's behavior here manually, but instead trigger Cowbird's new-user creation hook again, so any naming change is transparent for us. That assumes we can call the same Magpie new-user trigger from Jupyterhub.
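For reference, replicating the layout shown in the `ls` output above would amount to something like the following. This is only a sketch: the function name `ensure_user_workspace` and the path arguments are mine for illustration, not actual Cowbird or birdhouse code.

```python
import os

def ensure_user_workspace(workspace_root: str, jupyterhub_data_root: str, user_name: str) -> None:
    """Recreate the layout Cowbird produces for a new user: a ``notebooks``
    symlink pointing at the user's Jupyterhub data directory, plus an empty
    ``shapefile_datastore`` directory."""
    workspace = os.path.join(workspace_root, user_name)
    os.makedirs(workspace, exist_ok=True)

    # notebooks -> /data/jupyterhub_user_data/<user_name>
    notebooks_link = os.path.join(workspace, "notebooks")
    target = os.path.join(jupyterhub_data_root, user_name)
    if not os.path.islink(notebooks_link):
        os.symlink(target, notebooks_link)

    # shapefile_datastore/ (managed by the GeoServer handler)
    os.makedirs(os.path.join(workspace, "shapefile_datastore"), exist_ok=True)
```

As the comment thread concludes below, hard-coding names like `shapefile_datastore` here is fragile, which is an argument for triggering Cowbird's own hook instead.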
Trying to trigger the hook manually:
Got back this error and I am unable to understand why it fails:
I did turn on DEBUG logging and exposed the port this way:

```diff
$ git diff
diff --git a/birdhouse/components/cowbird/config/cowbird/cowbird.ini.template b/birdhouse/components/cowbird/config/cowbird/cowbird.ini.template
index 3aa33da2..b4355c15 100644
--- a/birdhouse/components/cowbird/config/cowbird/cowbird.ini.template
+++ b/birdhouse/components/cowbird/config/cowbird/cowbird.ini.template
@@ -75,7 +75,7 @@ keys = console
 keys = generic

 [logger_root]
-level = INFO
+level = DEBUG
 handlers = console
 formatter = generic

diff --git a/birdhouse/components/cowbird/default.env b/birdhouse/components/cowbird/default.env
index 0d160735..54a17a4d 100644
--- a/birdhouse/components/cowbird/default.env
+++ b/birdhouse/components/cowbird/default.env
@@ -45,7 +45,7 @@ export COWBIRD_MONGODB_PORT=27017
 # DEBUG: logs detailed information about operations/settings (not for production, could leak sensitive data)
 # INFO: reports useful information, not leaking details about settings
 # WARN: only potential problems/unexpected results reported
-export COWBIRD_LOG_LEVEL=INFO
+export COWBIRD_LOG_LEVEL=DEBUG

 # Subdirectory of DATA_PERSIST_SHARED_ROOT containing the user workspaces used by Cowbird
 export USER_WORKSPACES="user_workspaces"

diff --git a/birdhouse/components/cowbird/docker-compose-extra.yml b/birdhouse/components/cowbird/docker-compose-extra.yml
index 5ad76749..d7a59413 100644
--- a/birdhouse/components/cowbird/docker-compose-extra.yml
+++ b/birdhouse/components/cowbird/docker-compose-extra.yml
@@ -11,6 +11,8 @@ services:
   cowbird:
     image: pavics/cowbird:${COWBIRD_VERSION}-webservice
     container_name: cowbird
+    ports:
+      - 7000:7000
     environment:
       HOSTNAME: $HOSTNAME
       FORWARDED_ALLOW_IPS: "*"
```

I followed the documentation to craft my request. How do we debug this kind of error? Is there a way to also turn on DEBUG logging for the
It's a good idea but I think that this needs to be tackled in a different PR. The issue you're describing is much bigger and needs careful consideration to figure out how to implement properly. Consider all of these scenarios that have to be handled:
The PR here is really just supposed to fix the immediate issue that some users can't spawn jupyterlab containers.
It's telling you that you need to specify a callback URL.
OMG! I cannot read JavaScript responses. Now that you tell me, it looks clear, but I didn't "catch" it yesterday. So I did this

And it actually works: the folder structures are created on disk. But the returned message is so misleading, with a whole bunch of

I am guessing this is the reason for your fix in #488.

So given that this works, how about we call this hook from JupyterHub if the folder structure is missing on disk?
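Calling that hook from JupyterHub could look roughly like the sketch below. Heavy caveat: the `/webhooks/users` path, the payload field names, and the "created" event value are my assumptions reconstructed from this thread (in particular the required `callback_url` mentioned above), not a verified Cowbird API; check the Cowbird documentation before relying on them.

```python
import json
import urllib.request

def build_user_created_webhook(cowbird_url: str, user_name: str, callback_url: str):
    """Build the URL and payload for Cowbird's user-creation webhook.
    Endpoint path and field names are illustrative assumptions."""
    payload = {
        "event": "created",
        "user_name": user_name,
        # Omitting this produced the "you need to specify a callback URL" error above.
        "callback_url": callback_url,
    }
    return f"{cowbird_url}/webhooks/users", payload

def trigger_user_created(cowbird_url: str, user_name: str, callback_url: str) -> None:
    """POST the webhook, e.g. from a JupyterHub pre-spawn hook when the
    workspace folder structure is found to be missing on disk."""
    url, payload = build_user_created_webhook(cowbird_url, user_name, callback_url)
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    urllib.request.urlopen(req)  # raises on HTTP errors
```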
So if
@tlvu For example, when a new user is created, each handler runs its own `user_created` logic. In the case of `Geoserver`:

```python
class Geoserver(Handler, FSMonitor):
    # [...]
    def user_created(self, user_name: str) -> None:
        self._create_datastore_dir(user_name)
        res = chain(create_workspace.si(user_name), create_datastore.si(user_name))
        res.delay()
        LOGGER.info("Start monitoring datastore of created user [%s]", user_name)
        Monitoring().register(self._shapefile_folder_dir(user_name), True, Geoserver)
```
Overview
Users created before Cowbird was enabled will not have a "workspace directory" created. A workspace directory is a symlink to the directory that contains their Jupyterhub data.
When Cowbird is enabled, Jupyterhub checks if the workspace directory exists and raises an error if it doesn't.
This change allows Jupyterhub to create the symlink if it doesn't exist instead of raising an error.
This means that users without a "workspace directory" can continue using Jupyterhub as they did before, without manual intervention by a system administrator, who would otherwise need to create the symlink for them.
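Conceptually, the change behaves like the following sketch (illustrative only; the function and argument names are mine, not the actual code of this PR):

```python
import os

def ensure_workspace_symlink(workspace_dir: str, jupyterhub_user_dir: str) -> None:
    """If the user's workspace symlink is missing (user created before
    Cowbird was enabled), create it instead of raising an error at spawn."""
    if os.path.exists(workspace_dir) or os.path.islink(workspace_dir):
        return  # already set up, by Cowbird or a previous spawn
    os.makedirs(os.path.dirname(workspace_dir), exist_ok=True)
    os.symlink(jupyterhub_user_dir, workspace_dir)
```

A check like this would run during spawn, replacing the previous "raise if the workspace directory does not exist" behavior.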
Changes
Non-breaking changes
Breaking changes
None
Related Issue / Discussion
Additional Information
CI Operations
birdhouse_daccs_configs_branch: master
birdhouse_skip_ci: false