
[Design] How the crc-cloud tool should look (as part of the UX side) #21

praveenkumar opened this issue Jan 13, 2023 · 0 comments

This issue tracks the UX side of the CLI so that, for any refactor or switch to a different tech stack, we know what we want to achieve and the best way to do it.

Current UX

$ ./crc-cloud.sh -h

Cluster Creation :

crc-cloud.sh -C -p pull secret path [-i cloud provider] [-d developer user password] [-k kubeadmin user password] [-r redhat user password] [-a AMI ID] [-t Instance type]
where:
    -C  Cluster Creation mode
    -i  Cloud/Infra provider (optional, default: aws)
    -p  pull secret file path (download from https://console.redhat.com/openshift/create/local) 
    -d  developer user password (optional, default: developer)
    -k  kubeadmin user password (optional, default: kubeadmin)
    -r  redhat    user password (optional, default: redhat)
    -a  Image ID (Cloud provider Machine Image) from which the VM will be Instantiated (optional, default: ami-0569ce8a44f2351be)
    -t  Cloud provider Instance Type (optional, default; c6in.2xlarge)
    -h  show this help text

Cluster Teardown:

crc-cloud.sh -T [-i cloud provider] [-v run id]
    -T  Cluster Teardown mode
    -i  Cloud/Infra provider (optional, default: aws)
    -v  The Id of the run that is gonna be destroyed, corresponds with the numeric name of the folders created in workdir (optional, default: latest)
    -h  show this help text 

In the current scenario, to create a cluster (assuming the image is already part of the user's project/account):

./crc-cloud.sh -C -p <pull_secret_path> -i aws -a <image_id> -t <type>

Also, everything currently revolves around AWS (e.g. the default image_id and instance_type), so these options should become part of the provider-specific options.

I think we need to reorganize the code so that cloud-specific options live under the cloud provider, and create only takes the provider name as a subcommand, something like the following:

$ crc-cloud.sh create <cloud_provider> -h
-a Image ID (default to one of aws image_id)
-t Instance Type (default to one we tested)
- ... other cloud-provider specific options

[Global options]
-p  pull secret file path (download from https://console.redhat.com/openshift/create/local) 
-d  developer user password (optional, default: developer)
-k  kubeadmin user password (optional, default: kubeadmin)

$ crc-cloud.sh delete <cloud_provider> -h
-m provider metadata directory which has info about all the created resources
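A minimal sketch of how such a dispatcher could be structured in bash: the subcommand comes first, then the cloud provider, then provider-specific flags. The function name, messages, and flags here are illustrative assumptions, not the actual crc-cloud interface.

```shell
#!/usr/bin/env bash
# Hypothetical dispatch sketch for the proposed UX (illustrative only):
#   crc-cloud.sh {create|delete|status} <cloud_provider> [options]
crc_cloud() {
  local cmd="${1:-}" provider="${2:-}"
  case "$cmd" in
    create|delete)
      # create/delete require a provider; everything after it is
      # passed through to the provider-specific option parser
      [ -n "$provider" ] || { echo "error: missing <cloud_provider>" >&2; return 1; }
      shift 2
      echo "$cmd on $provider with options: $*"
      ;;
    status)
      # status needs no provider; it reports across all of them
      echo "status across all providers"
      ;;
    *)
      echo "usage: crc-cloud.sh {create|delete|status} <cloud_provider> [options]" >&2
      return 1
      ;;
  esac
}
```

This keeps the global options (pull secret, user passwords) parseable at the top level while each provider owns its image/instance flags.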


$ crc-cloud.sh status 
<cloud-provider>  <instance_name>  <apiserver_access>  <external_ip>
aws               myfirst_tet      yes                 35.32.123.123

The status command should look into the metadata folder and get info about the cluster/provider. Also, since we are not publicly releasing the images for cloud providers right now, it is safe to expect the user to choose which bundle version image they want to use for dev/test/play purposes.
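As a sketch of the metadata lookup, assuming one directory per run under the workdir with a whitespace-separated metadata file (this layout is an assumption for illustration, not the actual on-disk format):

```shell
#!/usr/bin/env bash
# Hypothetical status sketch: scan per-run metadata directories and print
# one row per cluster. The metadata file format is an assumed example:
#   <provider> <instance_name> <apiserver_access> <external_ip>
crc_cloud_status() {
  local workdir="$1" meta
  printf '%-16s %-16s %-18s %s\n' PROVIDER INSTANCE APISERVER_ACCESS EXTERNAL_IP
  for meta in "$workdir"/*/metadata; do
    [ -f "$meta" ] || continue
    # word-split the four fields into the four printf columns
    printf '%-16s %-16s %-18s %s\n' $(cat "$meta")
  done
}
```

The same metadata directory would then be what delete consumes via -m, so create is the only command that needs provider credentials up front.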

Even if in the future we want to make the images public (which can be done for OKD), we should name them crc-<bundle_version> and keep a list in this repo's README so users know what is available.

As of now I am not thinking about plugins, but since we have spike issues for Terraform/Pulumi, please update those issues with some kind of design so that we have an overview of how this looks for those enablers. If you think bash should stay the entrypoint, then we just add another option such as plugin or deployer to specify during the create/delete/status commands.

Let's collaborate on this issue around how we can move forward with it.
