From b469c3841235228eb901aa473e8a6e81762c62bd Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Carles=20S=2E=20Soriano=20P=C3=A9rez?=
Date: Thu, 9 Jan 2025 13:01:48 +0100
Subject: [PATCH] docs: move dev installation and docker sections to wiki
 (#678)

* docs: Moved `docker_and_cloud` to wiki `https://github.com/Deltares/ra2ce/wiki/Docker-and-Cloud`

* docs: Updated reference to removed page

* docs: updated installation sphinx page
---
 docs/docker_and_cloud/index.rst               |  26 ---
 .../argo_deployment.rst                       |  68 ------
 .../setting_up_infrastructure/index.rst       |  24 ---
 .../kubernetes_deployment.rst                 | 143 ------------
 .../using_cloud_services/deltares_harbor.rst  |  37 ----
 .../deltares_hpc_user_guide.rst               |  64 ------
 .../docker_user_guide.rst                     | 203 ------------------
 .../hackathon_user_guide.rst                  | 199 -----------------
 .../using_cloud_services/index.rst            |  25 ---
 docs/installation/installation.rst            |  14 +-
 docs/overview.rst                             |   1 -
 11 files changed, 2 insertions(+), 802 deletions(-)
 delete mode 100644 docs/docker_and_cloud/index.rst
 delete mode 100644 docs/docker_and_cloud/setting_up_infrastructure/argo_deployment.rst
 delete mode 100644 docs/docker_and_cloud/setting_up_infrastructure/index.rst
 delete mode 100644 docs/docker_and_cloud/setting_up_infrastructure/kubernetes_deployment.rst
 delete mode 100644 docs/docker_and_cloud/using_cloud_services/deltares_harbor.rst
 delete mode 100644 docs/docker_and_cloud/using_cloud_services/deltares_hpc_user_guide.rst
 delete mode 100644 docs/docker_and_cloud/using_cloud_services/docker_user_guide.rst
 delete mode 100644 docs/docker_and_cloud/using_cloud_services/hackathon_user_guide.rst
 delete mode 100644 docs/docker_and_cloud/using_cloud_services/index.rst

diff --git a/docs/docker_and_cloud/index.rst b/docs/docker_and_cloud/index.rst
deleted file mode 100644
index 719f1d5e0..000000000
--- a/docs/docker_and_cloud/index.rst
+++ /dev/null
@@ -1,26 +0,0 @@
-.. _docker_and_cloud:
-
-Docker and Cloud guide
-=========================
-
-In this section we explore the multiple possibilities regarding 'cloud running'. For this purpose we provide documentation covering two different concepts:
-
-- Setting up infrastructure.
-  - Building a docker container.
-  - Installation and deployment of cloud services.
-- Using "cloud services" for specific purposes.
-  - Using existing docker images (locally / cloud).
-  - Running ``ra2ce`` on different cloud services.
-  - Guidelines to set up and run a cloud service from scratch:
-    - known 'use cases', such as setting up and using cloud services for a hackathon,
-    - building a docker container,
-    - installation and deployment of cloud services,
-    - running ``ra2ce`` on different cloud services.
-
-
-.. toctree::
-   :caption: Table of Contents
-   :maxdepth: 1
-
-   setting_up_infrastructure/index
-   using_cloud_services/index
\ No newline at end of file
diff --git a/docs/docker_and_cloud/setting_up_infrastructure/argo_deployment.rst b/docs/docker_and_cloud/setting_up_infrastructure/argo_deployment.rst
deleted file mode 100644
index 3d5078b0f..000000000
--- a/docs/docker_and_cloud/setting_up_infrastructure/argo_deployment.rst
+++ /dev/null
@@ -1,68 +0,0 @@
-.. _argo_deployment:
-
-Deploying Argo Workflows on Amazon EKS
-================================================
-
-This guide explains how to deploy Argo Workflows on an Amazon EKS cluster. Argo Workflows is an open-source container-native workflow engine for orchestrating parallel jobs on Kubernetes.
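-
-A minimal sketch of what such a workflow manifest can look like (the image,
-command, and names below are illustrative placeholders, not part of the
-ra2ce setup):
-
-.. code-block:: yaml
-
-   apiVersion: argoproj.io/v1alpha1
-   kind: Workflow
-   metadata:
-     generateName: hello-ra2ce-   # Argo appends a random suffix
-     namespace: argo
-   spec:
-     entrypoint: main
-     templates:
-       - name: main
-         container:
-           image: busybox:latest   # placeholder image
-           command: [echo]
-           args: ["hello from argo"]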
-
-Prerequisites
--------------
-
-Before deploying Argo Workflows, ensure you have the following prerequisites:
-
-- An Amazon EKS cluster. Refer to ``kubernetes_deployment.rst`` in the project directory for instructions on deploying an EKS cluster with Terraform.
-- ``kubectl`` configured to interact with the deployed EKS cluster.
-
-.. _argo_local_installation:
-
-Local installation
-------------------
-
-1. Download the argo CLI from the official website `<https://github.com/argoproj/argo-workflows/releases>`_.
-2. Move the ``argo.exe`` to your directory of preference; here we will use ``C:\\cloud\\argo``.
-3. Add said location to your ``PATH`` environment variable.
-
-Deployment Steps
-----------------
-
-Follow these steps to deploy Argo Workflows on the Amazon EKS cluster:
-
-1. **Create Argo namespace:**
-
-   Create a namespace for Argo to run in:
-
-   .. code-block:: bash
-
-      kubectl create namespace argo
-
-2. **Install Argo Workflows:**
-
-   Apply the official Argo Workflows installation manifest to the ``argo`` namespace:
-
-   .. code-block:: bash
-
-      kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.5.5/install.yaml
-
-3. **Access Argo UI:**
-
-   Once the installation is complete, you can access the Argo UI by port-forwarding to the Argo server service:
-
-   .. code-block:: bash
-
-      kubectl -n argo port-forward service/argo-server 2746:2746
-
-   Open your web browser and navigate to `<https://localhost:2746>`_ to access the Argo UI.
-
-Clean Up
---------
-
-To uninstall Argo Workflows from the EKS cluster:
-
-1. **Uninstall Argo Workflows:**
-
-   .. code-block:: bash
-
-      kubectl delete deployment argo -n argo
-
-   This command removes the Argo Workflows deployment from the cluster.
-
diff --git a/docs/docker_and_cloud/setting_up_infrastructure/index.rst b/docs/docker_and_cloud/setting_up_infrastructure/index.rst
deleted file mode 100644
index e9e4e9a5f..000000000
--- a/docs/docker_and_cloud/setting_up_infrastructure/index.rst
+++ /dev/null
@@ -1,24 +0,0 @@
-.. _setting_up_infrastructure:
-
-Setting up infrastructure
-=========================
-
-At the moment, the ``ra2ce`` "cloud" infrastructure consists of three main components:
-
-- Amazon web services `s3 <https://aws.amazon.com/s3/>`_.
-  - Stores data.
-  - Runs docker components through Kubernetes.
-- Kubernetes.
-  - Creates and runs the ``ra2ce`` docker images in containers.
-  - Runs custom scripts in the related containers.
-- Argo.
-  - "Orchestrates" how a workflow is run in AWS using Kubernetes.
-  - Workflows are ``*.yml`` files describing the node types and resources to use at each step of a cloud run.
-
-
-.. toctree::
-   :caption: Table of Contents
-   :maxdepth: 1
-
-   kubernetes_deployment
-   argo_deployment
\ No newline at end of file
diff --git a/docs/docker_and_cloud/setting_up_infrastructure/kubernetes_deployment.rst b/docs/docker_and_cloud/setting_up_infrastructure/kubernetes_deployment.rst
deleted file mode 100644
index 8ac8110a9..000000000
--- a/docs/docker_and_cloud/setting_up_infrastructure/kubernetes_deployment.rst
+++ /dev/null
@@ -1,143 +0,0 @@
-.. _kubernetes_deployment:
-
-Deploying Kubernetes on Amazon EKS Cluster with Terraform
-=========================================================
-
-This guide outlines the steps to deploy an Amazon EKS cluster using Terraform. The Terraform configuration provided here automates the setup process, making it easier to create and manage an EKS cluster on AWS.
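-
-As a quick sanity check before following the steps below, you can verify the
-required tooling is in place (a sketch; any reasonably recent versions should
-do):
-
-.. code-block:: bash
-
-   terraform -version            # Terraform installed?
-   aws sts get-caller-identity   # AWS CLI configured with valid credentials?
-   kubectl version --client      # kubectl available for the later steps?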
-
-Prerequisites
--------------
-
-Before deploying the EKS cluster, ensure you have the following prerequisites:
-
-- An AWS account with appropriate permissions to create resources like VPCs, subnets, EKS clusters, and EC2 instances.
-- Terraform installed on your local machine. You can download it from the `terraform official website <https://www.terraform.io/>`_ and follow the installation instructions.
-- AWS CLI configured with appropriate credentials. You can install and configure the AWS CLI by following `the official user guidelines <https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html>`_.
-
-Deployment Steps
-----------------
-
-Follow these steps to deploy the Amazon EKS cluster:
-
-1. **Clone the Repository:**
-
-   .. code-block:: bash
-
-      git clone <repository_url>
-
-2. **Navigate to the Project Directory:**
-
-   .. code-block:: bash
-
-      cd <project_directory>
-
-3. **Update Terraform Backend Configuration:**
-
-   Edit the ``backend.tf`` file and replace ``your-bucket-name`` with your S3 bucket name. Ensure that the bucket is already created, and a DynamoDB table is set up for state locking.
-
-4. **Modify Terraform Configuration:**
-
-   Update the ``main.tf`` file with your desired configurations:
-
-   - Replace ``region`` with your preferred AWS region.
-   - Modify ``cluster_name`` with your desired EKS cluster name.
-   - Update ``subnets`` with the IDs of your desired subnets.
-   - Adjust ``instance_type`` and ``key_name`` in the ``node_groups`` section with your preferred EC2 instance type and key pair name.
-
-5. **Initialize Terraform:**
-
-   .. code-block:: bash
-
-      terraform init
-
-6. **Deploy the EKS Cluster:**
-
-   .. code-block:: bash
-
-      terraform apply
-
-7. **Accessing the Cluster:**
-
-   After the deployment completes, you will get the following outputs:
-
-   - ``cluster_endpoint``: The endpoint URL of the EKS cluster.
-   - ``kubeconfig``: The generated kubeconfig file to authenticate with the EKS cluster using ``kubectl``.
-   - ``config_map_aws_auth``: The ConfigMap YAML used to configure AWS authentication for the EKS cluster.
-
-   Use the provided kubeconfig file and ``kubectl`` to interact with the deployed EKS cluster:
-
-   .. code-block:: bash
-
-      export KUBECONFIG=$(pwd)/kubeconfig_<cluster_name>
-
-      aws eks --region eu-west-1 update-kubeconfig --name ra2ce-cluster
-
-      kubectl get svc
-
-Clean Up
---------
-
-To avoid incurring unnecessary costs, remember to clean up the resources once you're done using the EKS cluster:
-
-1. **Destroy Resources:**
-
-   .. code-block:: bash
-
-      terraform destroy
-
-2. **Manual Clean Up:**
-
-   Ensure all resources associated with the EKS cluster are deleted from the AWS Management Console, including the EKS cluster, EC2 instances, security groups, etc.
-
-Further Customization
-----------------------
-
-You can either configure the terraform template to add other node groups, or use ``eksctl`` as described in the following sections.
-
-Nodegroups
-----------
-
-The nodegroups that are currently available within AWS EKS are:
-
-+---------------+----------------+----------+----------+------------------+---------------+------+--------+
-| CLUSTER       | NODEGROUP NAME | MIN SIZE | MAX SIZE | DESIRED CAPACITY | INSTANCE TYPE | vCPU | MEMORY |
-+===============+================+==========+==========+==================+===============+======+========+
-| ra2ce-cluster | argo-main      | 1        | 25       | 1                | t3.small      | 2    | 2 GiB  |
-+---------------+----------------+----------+----------+------------------+---------------+------+--------+
-
-Adjusting the nodegroups
--------------------------
-
-The size of the nodegroup is adjustable by using eksctl (`<https://eksctl.io/>`_). Eksctl does not work well with AWS SSO, unfortunately.
-You will need to configure your credentials manually.
-
-To increase the current number of nodes (and "override" the Kubernetes behavior):
-
-.. code-block:: bash
-
-   eksctl scale nodegroup --cluster=ra2ce-cluster --nodes=1 --region=eu-west-1 argo-main
-
-This can be done before running a big job where you know you will need a certain number of nodes. This way the Argo workflow does not have to wait until the needed nodes are available. Kubernetes will still remove nodes if they are not used within a certain time window.
-
-To increase/decrease the minimum number of nodes of a nodegroup:
-
-.. code-block:: bash
-
-   eksctl scale nodegroup --cluster=ra2ce-cluster --nodes-min=0 --region=eu-west-1 argo-main
-
-To increase/decrease the maximum number of nodes of a nodegroup:
-
-.. code-block:: bash
-
-   eksctl scale nodegroup --cluster=ra2ce-cluster --nodes-max=25 --region=eu-west-1 argo-main
-
-Adding Node Groups
--------------------
-
-To add a new node group to your existing EKS cluster, you can use the following command:
-
-.. code-block:: bash
-
-   eksctl create nodegroup --cluster=ra2ce-cluster --region=eu-west-1 --name=newNodeGroup --node-type=t3.medium --nodes=3 --nodes-min=1 --nodes-max=5
-
-This command creates a new node group named "newNodeGroup" with instance type t3.medium and an initial 3 nodes. You can adjust the ``--nodes-min`` and ``--nodes-max`` parameters as needed.
diff --git a/docs/docker_and_cloud/using_cloud_services/deltares_harbor.rst b/docs/docker_and_cloud/using_cloud_services/deltares_harbor.rst
deleted file mode 100644
index 11f24aed0..000000000
--- a/docs/docker_and_cloud/using_cloud_services/deltares_harbor.rst
+++ /dev/null
@@ -1,37 +0,0 @@
-.. _deltares_harbor:
-
-Deltares Docker Harbor
-======================
-
-Deltares provides us with the possibility to publish our docker images to an internal repository, which lets us store our images remotely and run them in different cloud systems.
-
-This repository is located at `https://containers.deltares.nl/ra2ce/ <https://containers.deltares.nl/ra2ce/>`_, and the images get automatically pushed from TeamCity when all the tests run correctly, following this format:
-
-- (Merge) commits to ``master`` produce a new image ``ra2ce:latest``.
-- Pushed tags (format ``v.MAJOR.Minor.patch``) produce a new image ``ra2ce:v_MAJOR_Minor_patch``.
-- Hackathon branches (``hackathon/branch_name``) produce a new image ``ra2ce_hackathon_branch_name:latest``.
-
-They can subsequently be retrieved as ``containers.deltares.nl/ra2ce/ra2ce:desired_tag``.
-
-In addition, other branches can also be manually triggered from TeamCity. These images can be retrieved as ``containers.deltares.nl/ra2ce/ra2ce_name_of_the_branch:latest``.
-
-.. _deltares_harbor_access_rights:
-
-Access permissions
-------------------
-
-In principle anyone should have ``pull`` rights (allowing you to download the docker images). If ``push`` rights are required, please contact our project administrator (`Carles Soriano Pérez `_) or, if they are not reachable, any of the ra2ce team members.
-
-Once ``push`` permissions are granted, your local docker installation needs to log in to the registry. You can do so by simply running the following command:
-
-.. code-block:: bash
-
-   docker login containers.deltares.nl -u <your_username> -p <your_cli_secret>
-
-.. note::
-   To retrieve your ``cli_secret``, go to `<https://containers.deltares.nl/>`_, then click on your user on the top right side of the window; a menu will emerge. Select ``User Profile`` and then copy the ``CLI secret`` with the copy functionality.
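-
-Putting this together, logging in and then tagging and pushing a locally built image could look like this (a sketch; the user name, secret, and tag are placeholders):
-
-.. code-block:: bash
-
-   docker login containers.deltares.nl -u your_username -p your_cli_secret
-   docker tag ra2ce:latest containers.deltares.nl/ra2ce/ra2ce:my_custom_tag
-   docker push containers.deltares.nl/ra2ce/ra2ce:my_custom_tag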
-
-As noted, when pushing your image just run your usual ``docker push`` command, but do not forget to correctly tag your image with the ra2ce repository prefix ``containers.deltares.nl/ra2ce/name_of_your_image:desired_tag``.
-
-.. note::
-   In order to make sure your custom branch is being picked up, we recommend bumping its version manually. If you have installed the ``conda`` environment, you should be able to run the `commitizen <https://commitizen-tools.github.io/commitizen/>`_ bump command ``cz bump --devrelease your_custom_release_number``.
\ No newline at end of file
diff --git a/docs/docker_and_cloud/using_cloud_services/deltares_hpc_user_guide.rst b/docs/docker_and_cloud/using_cloud_services/deltares_hpc_user_guide.rst
deleted file mode 100644
index 3b5605653..000000000
--- a/docs/docker_and_cloud/using_cloud_services/deltares_hpc_user_guide.rst
+++ /dev/null
@@ -1,64 +0,0 @@
-.. _deltares_hpc_user_guide:
-
-Deltares HPC User Guide
-=======================
-
-Introduction
----------------------------------
-This user guide introduces how to run Ra2ce in an HPC environment.
-The HPC environment will need to support Docker containers. Not every
-HPC environment does, but the Deltares H7 HPC does.
-
-Running Ra2ce Docker container on H7
--------------------------------------
-
-Access the head node of the H7. Detailed instructions can be found on our public Wiki: https://publicwiki.deltares.nl/display/Deltareken/Access
-
-The Wiki is the place to see the latest information on how to use the H7, so if anything in this
-user guide does not work as expected, use the Wiki to see if anything has changed.
-
-In short, you can access the H7 using SSH or RDP.
-
-.. code-block:: bash
-
-   ssh h7.directory.intra
-
-Move all the input data to a p drive of your project. In the Ra2ce repository there is a
-``slurm_job.sh`` file that also needs to be moved to an accessible location (the same p drive, for example).
-
-Navigate to the p drive on the head node:
-
-.. code-block:: bash
-
-   cd /p/<your_project>
-
-Now you can schedule the job using Slurm. Slurm is a job orchestrator used with many HPC environments.
-
-In the slurm job there are resource requests set up. Adjust your ``slurm_job.sh`` file where needed for your use case.
-
-See all options on the Wiki.
-
-In the slurm job for Ra2ce we run just one command:
-
-.. code-block:: bash
-
-   docker run --mount src=${PWD},target=/data,type=bind containers.deltares.nl/ra2ce/ra2ce:latest python /data/run_race.py
-
-+------------------------------------------------+---------------------------------------------------------------+
-| Command                                        | Description                                                   |
-+================================================+===============================================================+
-| docker run                                     | The command to run a docker container.                        |
-+------------------------------------------------+---------------------------------------------------------------+
-| --mount src=,target=,type=bind                 | Mounts the current folder to a target location in the         |
-|                                                | Docker container.                                             |
-+------------------------------------------------+---------------------------------------------------------------+
-| containers.deltares.nl/ra2ce/ra2ce:latest      | The container image and version to run. If you want an older  |
-|                                                | version you can change the ``latest`` tag.                    |
-+------------------------------------------------+---------------------------------------------------------------+
-| python /data/run_race.py                       | The command to run in the container.                          |
-+------------------------------------------------+---------------------------------------------------------------+
-
-Make sure your ``run_race.py`` writes to the mounted drive or you will lose your output once the job is finished.
-
-Submit the job with:
-
-.. code-block:: bash
-
-   sbatch slurm_job.sh
-
-See the progress of your job here: https://hpcjobs.directory.intra/
diff --git a/docs/docker_and_cloud/using_cloud_services/docker_user_guide.rst b/docs/docker_and_cloud/using_cloud_services/docker_user_guide.rst
deleted file mode 100644
index a8e924422..000000000
--- a/docs/docker_and_cloud/using_cloud_services/docker_user_guide.rst
+++ /dev/null
@@ -1,203 +0,0 @@
-.. _docker_user_guide:
-
-=================
-Docker User Guide
-=================
-
-This user guide introduces how to run Ra2ce in a public cloud environment.
-
-.. _docker_user_guide_installation:
-
-
-Installation
-------------
-
-We assume the reader has a Windows machine; otherwise, please find the corresponding binaries for your own operating system.
-
-
-- Docker:
-   Install docker from the original source `<https://www.docker.com/>`_.
-
-- Kubernetes (kubectl):
-   Install the kubernetes CLI from the original website `<https://kubernetes.io/docs/tasks/tools/>`_.
-
-.. note::
-   It is recommended to have ``docker`` and ``kubectl`` in your ``PATH`` environment variable.
-   Make sure your command line recognizes these commands in order to follow this user guide.
-
-
-How to build the cloud docker image
------------------------------------
-
-Assuming access to a Linux box with Docker installed, or a Docker Desktop installation on Windows, you can do the
-following:
-
-   .. code-block:: bash
-
-      git clone git@github.com:Deltares/ra2ce.git
-      cd ra2ce
-      docker build -t ra2ce:latest -f Dockerfile .
-
-These instructions will build a docker image. After a good while, you should end up with:
-
-   .. code-block:: bash
-
-      $ docker images
-      REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
-      ra2ce        latest   616f672677f2   19 hours ago   1.01GB
-
-Note that this is a local image only.
-
-
-Running Ra2ce Docker container locally
-======================================
-
-To run the Docker container locally you can execute the following command:
-
-.. code-block:: bash
-
-   docker run -it ra2ce bash
-
-This will create a Docker container from the specified image and start an interactive shell in which you can enter commands.
-
-In this cloud version of the Docker container there is no Jupyter notebook available to interact with the Ra2ce package.
-Instead, you can start Ra2ce from the command line or write an input script (Python) similar to a Jupyter notebook.
-
-If you wish to include files (i.e. input files and run files) you can mount folders into your Docker container at run time, for example:
-
-.. code-block:: bash
-
-   docker run -it -v $PWD/workflow:/scripts -v $PWD/tests/test_data/acceptance_test_data/:/data ra2ce bash
-
-After that you can call ``run_race.py``. Output files will be available in the ``acceptance_test_data`` folder.
-
-
-Running Ra2ce container in a cloud environment
-==============================================
-
-In the local example you are running a Docker container entirely locally and thus can mount your own hard drive with input data.
-
-In a cloud environment this is not possible, so we have additional requirements:
-
-- A compute environment that is responsible for running a Docker container on a remote server.
-- A data storage environment that can be mounted to the container, mimicking the way we run a container locally.
-- A workflow orchestrator that can manage mounting the aforementioned data storage layer and manage a workflow of multiple container instances.
-
-Currently we have tested Ra2ce using the following tech stack:
-
-- Kubernetes (AWS EKS) as compute environment.
-  If there is no environment currently available, check ``/infra/README.md`` for how to set it up.
-- AWS S3 as data storage layer.
-- Argo Workflow as workflow orchestrator. If there is no Argo deployment currently available, see ``/infra/workflow/README.md`` for how to deploy Argo.
-
-
-Running a single Ra2ce container in Kubernetes
-------------------------------------------------
-
-In Kubernetes, you can deploy Docker containers stored in container registries such as Docker Hub or any other container registry provider. This guide illustrates how to run a Docker container from an existing container registry using ``kubectl``.
-
-
-Prerequisites
-^^^^^^^^^^^^^
-
-Before following this guide, ensure you have the following:
-
-- A Kubernetes cluster set up.
-- ``kubectl`` installed and configured to connect to your Kubernetes cluster.
-- A Docker container image pushed to a container registry accessible to your Kubernetes cluster.
-
-
-Steps
-^^^^^
-
-1. **List Available Images**: First, list the available Docker container images in your container registry. You will need the full image name for the subsequent steps.
-
-2. **Create Deployment YAML**: Create a YAML file specifying the details of the container you want to run. An example YAML file is available in ``/infra/workflow/pod.yaml``:
-
-   Replace ``<image>:<tag>`` with the full image name and tag of your Docker container image, and ``<port>`` with the port your container listens on.
-
-3. **Apply Deployment**: Apply the deployment YAML using ``kubectl``:
-
-   .. code-block:: bash
-
-      kubectl apply -f pod.yaml
-
-   Replace ``pod.yaml`` with the filename of your deployment YAML file.
-
-4. **Verify Deployment**: Check if the deployment was successful:
-
-   .. code-block:: bash
-
-      kubectl get pods
-
-   You should see your deployment listed with 1 desired replica and 1 current replica.
-
-5. **Access the Running Container**: You can access the logs of the running container or execute commands within the container using ``kubectl``. For example:
-
-   - To view container logs:
-
-     .. code-block:: bash
-
-        kubectl logs <pod_name>
-
-     Replace ``<pod_name>`` with the name of your pod.
-
-   - To execute a command in the container:
-
-     .. code-block:: bash
-
-        kubectl exec -it <pod_name> -- <command>
-
-     Replace ``<command>`` with the command you want to execute in the container.
-
-
-Running a Ra2ce workflow in Argo
---------------------------------
-
-Argo Workflows is an open-source workflow engine optimized for Kubernetes. This guide demonstrates how to run a simple Argo workflow on your Kubernetes cluster.
-
-
-Prerequisites
-^^^^^^^^^^^^^
-
-Before following this guide, ensure you have the following:
-
-- A Kubernetes cluster set up.
-- ``kubectl`` installed and configured to connect to your Kubernetes cluster.
-- Argo Workflows installed in your Kubernetes cluster. You can install Argo Workflows by following the official documentation: `<https://argoproj.github.io/argo-workflows/>`_
-
-
-Steps
-^^^^^
-
-1. **Create Workflow YAML**: Create a workflow YAML file specifying the steps of your workflow. An example YAML file is available in ``/infra/workflow/pod.yaml``:
-
-   Replace ``<image>:<tag>`` with the Docker container image you want to use in your workflow.
-
-2. **Submit Workflow**: Submit the workflow YAML using ``kubectl``:
-
-   .. code-block:: bash
-
-      kubectl apply -f workflow.yaml
-
-   Replace ``workflow.yaml`` with the filename of your workflow YAML file.
-
-3. **Check Workflow Status**: Monitor the status of your workflow using Argo CLI or Argo UI.
-   To use Argo CLI:
-
-   - Install Argo CLI by following the official documentation: `<https://github.com/argoproj/argo-workflows/releases>`_
-
-   - Check the status of your workflow:
-
-     .. code-block:: bash
-
-        argo list
-
-     This command lists all workflows, including the one you just submitted.
-
-   - To view detailed information about your workflow:
-
-     .. code-block:: bash
-
-        argo get <workflow_name>
-
-     Replace ``<workflow_name>`` with the name of your workflow.
\ No newline at end of file
diff --git a/docs/docker_and_cloud/using_cloud_services/hackathon_user_guide.rst b/docs/docker_and_cloud/using_cloud_services/hackathon_user_guide.rst
deleted file mode 100644
index b5dbd162b..000000000
--- a/docs/docker_and_cloud/using_cloud_services/hackathon_user_guide.rst
+++ /dev/null
@@ -1,199 +0,0 @@
-.. _hackathon_user_guide:
-
-Hackathon User Guide
-====================
-
-This chapter explains how to set up your local machine to run ``ra2ce`` in a typical "hackathon" case.
-Our hackathons usually consist of workflows such as:
-
-- Data collection.
-- Overlaying hazard(s) based on the data collection.
-- Running ``ra2ce`` analysis based on the "n" overlaid hazard network(s).
-- Collecting results (post-processing).
-
-Some examples can be found in the :ref:`hackathon_sessions` documentation.
-
-Based on the personal notes of `Matthias Hauth `_, we will present here how to:
-
-- Build and run a docker image locally.
-- Push said image to the Deltares Harbor (:ref:`deltares_harbor`).
-- Use argo (:ref:`argo_deployment`) to run workflows in AWS (:ref:`kubernetes_deployment`).
-
-Keep in mind this documentation could contain information already present in the :ref:`setting_up_infrastructure` subsection.
-
-Build and run a docker image
----------------------------------
-
-**Prerequisites**:
-
-- Have Docker Desktop installed and the application open (check the introduction of :ref:`docker_user_guide_installation`).
-- Have the ``ra2ce`` repository checked out on your machine (check how to install ``ra2ce`` in :ref:`install_ra2ce_devmode`).
-- Run a command line using said check-out as your working directory.
-
-First, let's bump the local ``ra2ce`` version (assume we work based on a ``v0.9.2`` ``ra2ce`` checkout) so we can track whether our image has been correctly built later on.
-
-   .. code-block:: bash
-
-      $ cz bump --devrelease 0 --increment patch
-      bump: version 0.9.2 → 0.9.3.dev0
-      tag to create: v0.9.3
-      increment detected: PATCH
-
-   .. warning::
-      This creates a local tag, which you don't need to push (and it's best not to).
-
-
-We can build a docker image based on the Dockerfile located in the repo:
-
-   .. code-block:: bash
-
-      cd ra2ce
-      docker build -t ra2ce:latest .
-      docker images                          # returns a list of all available images
-      docker run -it ra2ce:latest /bin/bash  # adapt the image name/tag if needed
-
-
-The command line looks like: ``(ra2ce_env) 4780e47b2a88:/ra2ce_src~$``, and you can navigate it by using ``cd`` and ``ls`` (it's a Linux container).
-``ra2ce`` should be installed in the image and ready to be used as a "package". You can verify its installation by simply running:
-
-   .. code-block:: bash
-
-      $ docker run -it ra2ce:latest python -c "import ra2ce; print(ra2ce.__version__)"
-      0.9.3.dev0
-
-
-Push a docker image
------------------------
-
-**Prerequisites**:
-
-- Have rights to publish on the registry (check :ref:`deltares_harbor_access_rights`).
-
-We (re)build the image with the correct registry prefix.
-
-   .. code-block:: bash
-
-      cd ra2ce
-      docker build -t containers.deltares.nl/ra2ce/ra2ce:user_test .
-
note:: - registry_name/project_name/container_name:tag_name - - -You can check again whether the image is correctly built with any of the following commands: - - .. code-block:: bash - - docker run -it containers.deltares.nl/ra2ce/ra2ce:user_test - docker run -it containers.deltares.nl/ra2ce/ra2ce:user_test bash - docker run -it containers.deltares.nl/ra2ce/ra2ce:user_test python -c "import ra2ce; print(ra2ce.__version__)" - - -Then push to the online registry: - - .. code-block:: bash - - docker push containers.deltares.nl/ra2ce/ra2ce:user_test - - -Use argo workflows ----------------------- - -**Prerequisites**: - -- Have kubectl installed (:ref:`docker_user_guide_installation`) -- Have argo installed (:ref:`argo_local_installation`) -- Have aws installed (You can install and configure AWS CLI by following `the official user guidelines `_) - - -1. In ``C:\Users\{you_username}\.aws``, modify ``config``, so that: - - .. code-block:: ini - - [default] - region=eu-west-1 - - -2. Go to ``_ : - - You will see the ``RA2CE`` aws project, click on it. - - Select now ``Access keys``, a pop-up will show - - Copy the content of option 2, the ``Copy`` button will do it for you. It should be something like: - - .. code-block:: ini - - [{a_series_of_numbers}_AWSPowerUserAccess] - aws_access_key_id={an_access_key_id} - aws_secret_access_key={a_secret_access_key} - aws_session_token={a_session_token} - - -3. Now, go again to ``C:\Users\{you_username}\.aws``, - - replace the ``credentials`` content with that of step 2, - - replace the header so it only containts ``default``, - - the final content of ``credentials`` should be something as: - - .. code-block:: ini - - [default] - aws_access_key_id={an_access_key_id} - aws_secret_access_key={a_secret_access_key} - aws_session_token={a_session_token} - - .. warning:: - These credentials need to be refreshed EVERY 4 hours! - - -4. We will now modify ``C:\Users\{you_username}\.kube\config`` - - .. code-block:: bash - - aws eks --region eu-west-1 update-kubeconfig --name ra2ce-cluster - - .. note:: - ``aws eks update-kubeconfig --region {region-code} --name {my-cluster}`` - - .. warning:: - This step has not been entirely verified as for now we were not able to generate the required data in a 'clean' machine. Instead we copy & pasted the data in a machine where it was already properly configured. - -5. Now we forward the kubernetes queue status to our local argo: - - .. code-block:: bash - - kubectl -n argo port-forward service/argo-server 2746:2746 - -6. It should now be possible to access your local argo in ``_ - - An authentication token will be required, you can request it via command line: - - .. code-block:: bash - - argo auth token - - Copy and paste it. - - .. note:: - This authentication code expires within 15 minutes, you will have to refresh it multiple times. - If you don't want to do this you can always get the current status with: - - .. code-block:: bash - - kubectl get pods -n argo - - .. note:: - ``-n argo`` means namespace argo. - -7. Submit a workflow - - Navigate to the location of your ``.yml`` (or ``.yaml``) workflow. - - Ensure the workflow's namespace is set to ``argo``, the ``.yml`` should start with something like: - - .. code-block:: yaml - - apiVersion: argoproj.io/v1alpha1 - kind: Workflow - metadata: - namespace: argo - - - Execute the following command - - .. code-block:: bash - - kubectl create -f {your_workflow}.yml - - - You can track the submitted workflow as described in steps 5 and 6. 
\ No newline at end of file
diff --git a/docs/docker_and_cloud/using_cloud_services/index.rst b/docs/docker_and_cloud/using_cloud_services/index.rst
deleted file mode 100644
index b2f7d509e..000000000
--- a/docs/docker_and_cloud/using_cloud_services/index.rst
+++ /dev/null
@@ -1,25 +0,0 @@
-.. _using_cloud_services:
-
-Using cloud services
-====================
-
-A variety of different cloud services are available to run ra2ce "on the cloud", both internal and external to Deltares facilities. Here we list the ones we have tested and can therefore provide (some) support for.
-
-Available cloud services:
-
-- Ra2ce S3 (Amazon web services, aws).
-  - Allows running docker containers by using Kubernetes.
-  - Allows storing data to use in the containers.
-- Deltares docker harbor.
-  - Allows pushing and pulling ra2ce docker images.
-
-In this subsection you can find user guidelines on how to use all of them.
-
-.. toctree::
-   :caption: Table of Contents
-   :maxdepth: 1
-
-   docker_user_guide
-   deltares_harbor
-   deltares_hpc_user_guide
-   hackathon_user_guide
\ No newline at end of file
diff --git a/docs/installation/installation.rst b/docs/installation/installation.rst
index e7c340cc3..7b59786c0 100644
--- a/docs/installation/installation.rst
+++ b/docs/installation/installation.rst
@@ -29,25 +29,15 @@ Alternatively you can install the latest version available on GitHub or a specif
     pip install git+https://github.com/Deltares/ra2ce.git
     pip install git+https://github.com/Deltares/ra2ce.git@v0.3.1
 
-
-.. _install_ra2ce_devmode:
-
 Development mode
 +++++++++++++++++++++++++++
-When running a development environment with Anaconda, the user may follow these steps in command line:
-
-  .. code-block:: bash
-
-    cd <your_ra2ce_directory>
-    conda env create -f .config\environment.yml
-    conda activate ra2ce_env
-    poetry install
+Please refer to our `installation for contributors wiki page `_.
 
 Docker and cloud
 +++++++++++++++++++++++++++
 You may install ra2ce using `Docker` and running with different cloud services.
-Please refer to our :ref:`docker_and_cloud`.
+Please refer to our `docker and cloud wiki page <https://github.com/Deltares/ra2ce/wiki/Docker-and-Cloud>`_.
 
 
 Binder environment
diff --git a/docs/overview.rst b/docs/overview.rst
index 82376d764..6966e5ff1 100644
--- a/docs/overview.rst
+++ b/docs/overview.rst
@@ -97,7 +97,6 @@ Overview
    examples/index.rst
    network_module/network_module.rst
    analysis_module/analysis_module.rst
-   docker_and_cloud/index
    contributing/index.rst
    technical_documentation/technical_documentation.rst
    faq/faq.rst