diff --git a/.gitmodules b/.gitmodules
new file mode 100644
index 0000000000000000000000000000000000000000..ff159c35a78ff5d64a30c1e45e609f217962848c
--- /dev/null
+++ b/.gitmodules
@@ -0,0 +1,6 @@
+[submodule "front"]
+	path = front
+	url = git@gitlab.viarezo.fr:viarezo/vroum/vroum-front
+[submodule "back"]
+	path = back
+	url = git@gitlab.viarezo.fr:viarezo/vroum/vroum-back
diff --git a/README.md b/README.md
index 2a9f8fe48c03848bc1c449ad852a5ee924315c8a..ad6f86baedd42cca9b1115c08586cfbcbc987ba5 100644
--- a/README.md
+++ b/README.md

# The ultimate Kubernetes guide at ViaRézo

The goal is to deploy a non-trivial website in small groups of 2 or 3, and to learn as much as possible about how we do Kubernetes at ViaRézo.

## 0. Install the prerequisites

You will need a few basic tools to follow this training:

- [`docker`](https://docs.docker.com/get-docker/)
- [`kubectl`](https://kubernetes.io/docs/tasks/tools/#kubectl)
- [`helm`](https://helm.sh/docs/intro/install/)

To configure access to the ViaRézo test cluster, fetch the kubeconfig file from it:

```bash
ssh 138.195.139.40 "sudo cp ~root/.kube/config ."
scp 138.195.139.40:config ~/.kube/config
```

Check that everything works:

```bash
kubectl get nodes
```

A tip for OhMyZsh users: enable the plugins for these tools.

```bash
omz plugin enable docker
omz plugin enable kubectl
omz plugin enable helm
```

They give you auto-completion and, for the more adventurous, some nice aliases.

A tip for those who do not use OhMyZsh: **install OhMyZsh**.

## 1. Create a namespace for your group

A namespace is a way to isolate resources from one another. We will therefore create a namespace for the group; that is where you will work.

Only one member of the group should run this:

```bash
kubectl create namespace <the name of my awesome team>
```

Then set this namespace as the default one:

```bash
kubectl config set-context --current --namespace <the name of my awesome team>
```

This means that if you do not explicitly specify a namespace when you create resources, they will be created in this namespace.

## 2. Time to build the application itself

And that application is VRoum.

For this first part, the goal is to build the images that will run our site. There are two images to split between you.

### The front

The source code of the front is in the `front` folder.
Your goal is to write a Dockerfile and run the VRoum front locally.
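If you are stuck, here is a minimal two-stage sketch, assuming the front is a standard npm project whose `npm run build` produces a static `build/` folder (check the `front` folder and adjust the Node version, the copied files and the paths accordingly):

```Dockerfile
# Build stage: install the dependencies and produce the static bundle
FROM node:16 AS build
WORKDIR /front
COPY package.json package-lock.json ./
RUN npm install
COPY . .
RUN npm run build

# Runtime stage: serve the static files with nginx
FROM nginx
COPY --from=build /front/build/ /usr/share/nginx/html/
```

With this Dockerfile inside the `front` folder, `docker build -t vroum-front ./front` then `docker run --publish 8080:80 vroum-front` should serve the front on http://localhost:8080 (the tag and published port are arbitrary).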
### The back

The source code of the back is in the `back` folder.
Your goal is the same: write a Dockerfile and run the VRoum back locally.

### Checks (before you go on)

- [ ] I can run a simple command with all the tools listed above (`git --version`, `kubectl --help`, etc.)
- [ ] I can run a container: `docker run hello-world`
- [ ] I can run a simple `kubectl` query: `kubectl get nodes`
- [ ] I can contact my cluster through http/https: `curl <my-cluster-addr>` returns a 404

## 1. (Optional) Build and launch the app locally

> This task is optional, don't lose time on it right now!

### Why

If you truly want to immerse yourself in the life of a developer, you will need to be able to iterate quickly on the app locally.

### What

Be creative, try to modify a simple thing in the app.

### How

For this you simply need the `go` CLI installed and some knowledge of the language.

When you are happy with the result, you can launch the app with `go run main.go`, or build a binary with `go build`.

### Checks

- [ ] I can run the app locally, and see the web UI.
- [ ] I have implemented a small change in the application and it still runs

## 2. Build a container image (Docker)

### Why

While you build and iterate on your app locally, you also need to be able to deploy it to a real production environment.

Since you don't know where it will run (which virtual machine, which packages are installed), you want to ensure the reproducibility and isolation of the application. That is what containers, which `docker` helps build and run, are made for!

Containers are a standard way to build and ship applications across diverse workloads. Whatever server it runs on, your _image_ should always construct the same isolated environment.

Moreover, a container is far less expensive in resources (CPU, RAM) than a virtual machine, which achieves isolation by running a whole OS.

### What

We need to _build_ a container _image_ from the code in this repository. For this, the command `docker build -t <image-name>:<version> .` builds an image from a local _recipe_, the `Dockerfile`.

For example, for a simple Python application, it could be:

```Dockerfile
FROM python:3.8

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY main.py main.py

CMD ["python", "main.py"]
```

You can find the complete [Dockerfile reference here](https://docs.docker.com/engine/reference/builder/).

Here we have a _webservice_ written in _golang_, running an HTTP server on port `3000`.
It serves some static files (stored in `/public`) for the UI. You will mainly access it through `GET /` for the UI, but there are other routes to manage the state of the app.
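To make that concrete, here is a sketch of a first, naive Dockerfile for such a Go webservice. It only assumes the layout described above (Go modules, a `public/` folder, port `3000`); the step-by-step guide below builds the same thing properly, and the optional multi-stage improvement comes at the end:

```Dockerfile
# Naive single-stage build: easy to start with, but the image stays large
FROM golang:1.19

WORKDIR /app

# Download the dependencies first so this layer is cached between builds
COPY go.mod go.sum ./
RUN go mod download

# Copy the sources (including the public/ folder) and build the binary
COPY . .
RUN go build -o guestbook

EXPOSE 3000
CMD ["./guestbook"]
```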
### How

You can follow [this tutorial](https://docs.docker.com/language/golang/build-images/).

1. Write a `Dockerfile`. You need to start from a _base image_, ideally with golang already installed.
2. In the `Dockerfile`, download the microservice's dependencies. With recent golang versions, you only need `go.mod` and `go.sum` for this task.
3. In the `Dockerfile`, build the microservice. You need the command `go build` for this.
4. In the `Dockerfile`, add the `public` folder inside the container, in the same `public` folder.

   ```Dockerfile
   COPY ./public public
   ```

5. When the container starts, run the microservice.
6. Build a container image: `docker build -t guestbook:v0.1.0 .`
7. Run the container.

   You need to _expose_ the port of your application, which runs on `3000`.
   For this, you just need to add the `--publish <external-port>:<internal-port>` flag to the `docker run` command.

   ```bash
   docker run --publish 3000:3000 guestbook:v0.1.0
   ```

8. Check that the microservice responds to requests on
   http://localhost:3000. You should see the guestbook UI.

9. **Optional**: Implement some best practices, such as "multi-stage builds". They help reduce the size of your images and increase security.

   The image you built so far is pretty large because it contains the entire Go
   toolchain. It's time to make it smaller. Much smaller. Here is what you need to
   do:

   1. Check to see how big your container image is.
   2. Change the `go build` command to make the binary statically linked (if you
      don't know what that means, just ask!).
   3. In your `Dockerfile`, create a second stage that starts from `scratch`.
   4. Copy the binary from the first stage to the second.
   5. In the second stage, run the microservice.
   6. Build your container image again.
   7. Check to see how big the image is now.

### Checks

- [ ] I can build an image locally
- [ ] I can run the container locally
- [ ] I can access the web interface locally

<details>
<summary><em>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</em></summary>

You can find the complete solution [here](https://github.com/padok-team/dojo-guestbook/blob/feat/solution/Dockerfile). Don't spoil yourself too much!

</details>

## 3. Run it locally with docker-compose

### Why

You have a working local environment. However, you already need to chain a few commands, and as your app grows more complex, the setup will become harder to maintain.

Instead of having to type an _imperative_ chain of commands, you can have a _declarative_ description of your local _docker/container_ application. That is what `docker-compose` is made for: it reads this config and runs the right `docker` commands for you.

### What

We need to be able to launch the current container with only the `docker-compose up` command.

The `docker-compose.yaml` file will contain everything needed:

- how to build the image
- how to run the container, including the port configuration
- how to link it to another container
- how to set up persistent storage

### How

There is a [_get started_](https://docs.docker.com/compose/gettingstarted/) article, as well as the [complete specification](https://docs.docker.com/compose/compose-file/).

- define your guestbook service
- you can use the image you built, but you can also specify how to rebuild it!
- don't forget to expose the port needed for your application

### Checks

- [ ] I can launch the application locally with `docker-compose up`
- [ ] I can see the UI in my browser at `http://localhost:3000`

<details>
<summary>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</summary>

You should have something like:

```yaml
version: '3'
services:
  guestbook:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 3000:3000
```

</details>

## 4. Add a database to your service

### Why

If you test your app, you can see a big **⚠️ No database connection... ⚠️**. Furthermore, when you try to add something to the guestbook, it hangs (⌛) without saving it (try to refresh the page).

The application is actually stateless, and needs a Redis backend to save its state.
To avoid interfering with your local installation, we will run it in a container, using once again `docker` and `docker-compose`.

### What

We simply need to add a new service in our docker-compose file, and have a way for the app to use it.

### How

1. Add a `redis` service in your app. Don't build redis locally, but use the public `redis:6` image.
2. Expose its redis port, `6379`.
3. Make the guestbook app use it:

   The guestbook app uses _environment variables_ for its configuration. Here you need to set the `REDIS_HOST` variable to the hostname of your redis server. In a docker-compose environment, each service can be reached by its name (see the sketch after this list).
4. Try to run it: does the application store the state?
5. (Optional) Make it persistent!

   Currently, if you save some sentences in the app, then run `docker-compose down` and `docker-compose up` again, you'll see that you lose all your data! 😢

   You can manage volumes in docker-compose, which are persisted, and mount these volumes in your app. If you prefer, you can also link a local folder to a container, which can be useful for live reloading.
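Here is a sketch of what the compose file could look like once Redis is added. Treat it as one possible starting point rather than the official solution; the service name, variable value and volume are choices you can adapt:

```yaml
version: '3'
services:
  guestbook:
    build:
      context: ./
      dockerfile: Dockerfile
    ports:
      - 3000:3000
    environment:
      # the hostname is simply the name of the redis service below
      - REDIS_HOST=redis
    depends_on:
      - redis
  redis:
    image: redis:6
    ports:
      - 6379:6379
    volumes:
      # (optional) keep the data across `docker-compose down` / `up`
      - redis-data:/data

volumes:
  redis-data:
```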
### Check

- [ ] The application actually saves messages
- [ ] (Optional) If you run `docker-compose down`, you don't lose data when you relaunch the app.

<details>
<summary><em>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</em></summary>

You can find the complete solution [here](https://github.com/padok-team/dojo-guestbook/blob/feat/solution/docker-compose.yml). Don't spoil yourself too much!

</details>

## 5. Deploy your app on Kubernetes: the Pod

> If you are here, ask for a quick introduction to Kubernetes. We will do a quick overview for everyone!

### Why

Now that we can run our application locally, we want to deploy it to Kubernetes, which is a container orchestrator.

### What

We will start with the basics: a Pod. It is the basic unit to run something on Kubernetes. It is composed of one or several containers, running together.

Here is an example of a Pod manifest:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
  namespace: my-namespace
  labels:
    foo: bar
spec:
  containers:
    - name: my-container
      image: myapp:v1.0.0
      command: ['/bin/my-app']
      args: ['--migrate-db', '--db-host=12.34.56.78']
```

You can save this kind of _manifest_ into a file, for example `manifests/pod.yaml`, and then _deploy it_ to Kubernetes with `kubectl apply -f manifests/pod.yaml`. If you have several files, you can also apply the whole folder.

You also have some basic Kubernetes commands to get information about your pod.

```bash
kubectl get pods
kubectl describe pod <my-pod>
kubectl logs <my-pod>
```

Take some time to [learn a bit about pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/).

### How

1. Write a `pod.yaml` file (the VSCode extension can help you with that)
2. At minimum, you need a name and a first container definition, with its name and image. For the image, you can push the image to a public registry, or for kind add it to the cluster with `kind load docker-image "${IMAGE}" --name padok-training`. You can also use the following: `dixneuf19/guestbook:v0.1.0`. (See the example after this list.)
3. Try to deploy it, and run the commands above
4. If you need to delete it, use `kubectl delete -f manifests/`
5. Take some time to play around with this object: what happens if you give a non-existing image?
6. Try to access your application with `kubectl port-forward <my-pod> 3000:3000`
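For reference, a typical loop for this step might look like the commands below. The registry name is only a placeholder for wherever your team pushes images (skip the push if you use `kind load docker-image` or the prebuilt image), and the pod is assumed to be named `guestbook`:

```bash
# Make the image available to the cluster (placeholder registry name)
docker tag guestbook:v0.1.0 registry.example.com/my-team/guestbook:v0.1.0
docker push registry.example.com/my-team/guestbook:v0.1.0

# Deploy the pod and inspect it
kubectl apply -f manifests/pod.yaml
kubectl get pods
kubectl describe pod guestbook
kubectl logs guestbook

# Access the web UI on http://localhost:3000
kubectl port-forward guestbook 3000:3000
```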
### Checks

- [ ] My pod is running: I can see its state and follow its logs
- [ ] I have access to the Web UI with the port-forward

<details>
<summary>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</summary>

You should have something like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: guestbook
  labels:
    app: guestbook
    project: dojo
spec:
  containers:
    - name: guestbook
      image: dixneuf19/guestbook:v0.1.0
      ports:
        - containerPort: 3000
          name: http
```
</details>

## 6. Manage replication and rolling updates: Deployments

### Why

One pod is cool, but what if you want to deploy several instances of the same app, to avoid any downtime if a node fails?

That is the function of deployments: you declare a _Pod template_, along with a number of replicas. A deployment also helps you manage updates of your application without any downtime.

### What

Same as before, everything in Kubernetes is declarative. You can create a file, write a manifest into it and apply!

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      foo: bar
  template:
    metadata:
      labels:
        foo: bar
    spec:
      containers:
        - name: my-container
          image: myapp:v1.0.0
          ports:
            - containerPort: 3000
```

As with all Kubernetes resources, here are some generic useful commands:

```bash
kubectl get deployment
kubectl describe deployment <my-dep>
```

### How

1. Transform your current pod into a deployment. You just need to put everything from `Pod.spec` into `Deployment.spec.template.spec`.
2. What are these "selectors"? Can you modify them?
3. Play around with the replicas. Try to delete some pods.
4. Modify something in your template, and watch closely the way your pods are replaced. Is there any _downtime_? (The commands after this list can help you observe it.)
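A few commands to watch what the deployment does while you experiment (assuming you named your deployment `guestbook`):

```bash
# Scale the deployment up or down
kubectl scale deployment guestbook --replicas 5

# Watch pods being created, deleted and replaced in real time
kubectl get pods --watch

# Follow a rolling update after you modify the template and re-apply it
kubectl rollout status deployment guestbook
kubectl rollout history deployment guestbook
```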
### Checks

- [ ] I can still access one of my replicas with port-forward
- [ ] I have listed or described my deployment

<details>
<summary>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</summary>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      project: dojo
  template:
    metadata:
      labels:
        app: guestbook
        project: dojo
    spec:
      containers:
        - name: guestbook
          image: dixneuf19/guestbook:v0.1.0
          ports:
            - containerPort: 3000
              name: http
```
</details>

## 7. Expose your app internally

### Why

While you can access your app with port-forwarding, it is not very practical. Moreover, since the app is _stateless_, we want to be able to reach any pod.

For a start, internal access is good enough. That is the job of *Services*: they provide internal load balancing inside the cluster.

### What

You start to know the drill: create a manifest and apply it.

Note that for services, you need to _select_ your pods using their **labels**. The easy thing to do: just reuse the labels from your deployment to find its pods.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    foo: bar
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
```

In the cluster, other pods will be able to call one of the pods behind the service, just with

```bash
curl http://my-service # request one of the pods selected by the service
# if your pod runs in a different namespace, you need to specify it
curl http://my-service.my-ns
```

Here is the [official documentation](https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service) and some useful commands.

```bash
kubectl get services
kubectl describe service <my-svc>
kubectl port-forward svc/<my-svc> 3000:80
# let's have a look at http://localhost:3000
```

### How

1. Create the service manifest, set the correct labels and port, and apply it!
2. You are free to use the external port you want
3. You can test if the service is functional with `kubectl port-forward svc/<my-svc> <local-port>:<svc-port>`
4. Try to break your service: what happens if you set wrong labels? Can you have a service pointing to multiple deployments? (The commands after this list can help you investigate.)
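A handy way to debug a service is to compare its selector with the labels actually carried by your pods; the `endpoints` object lists the pods the service currently targets (replace `my-service` with the name you chose):

```bash
# Which pods does the service currently select?
kubectl get endpoints my-service
kubectl describe service my-service

# Compare with the labels set on your pods
kubectl get pods --show-labels
```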
### Checks

- [ ] I can access the UI using port-forwarding on the service

<details>
<summary><em>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</em></summary>

You can find the complete solution [here](https://github.com/padok-team/dojo-guestbook/blob/feat/solution/manifests/service.yaml). Don't spoil yourself too much!

</details>

## 8. Show it to the world: Ingress

### Why

Now that you have an internal load balancer, you want to expose your app to your friends.
Thankfully, an **Ingress Controller** and its DNS are already set up for you: all traffic for `*.vcap.me` goes to your cluster.

However, you need to tell the Ingress Controller where to route the requests it receives, depending on their _hostname_ or _path_. That is the job of the **Ingress**: it defines a route to the service you deployed before.

### What

Create the manifest for an ingress and deploy it!

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
    - host: www.padok.fr
      http:
        paths:
          - path: /blog
            pathType: Prefix
            backend:
              service:
                name: my-service
                port:
                  number: 80
```

Here is the [usual documentation](https://kubernetes.io/docs/concepts/services-networking/ingress/) and commands:

```bash
kubectl get ingress
kubectl describe ingress <my-ingress>
# visit https://guestbook.vcap.me/
```

### How

1. Write a manifest and apply it. Choose a hostname specific to your app and your namespace, since you share the cluster
2. Try to access your app: do you have HTTPS?
3. Try to deploy your app on a _subpath_ using the `nginx.ingress.kubernetes.io/rewrite-target: /` annotation, or on a subdomain by modifying the `path` and host.
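To check the routing from a terminal (assuming you chose `guestbook.vcap.me` as the hostname in your Ingress; replace it with your own):

```bash
# Show the status code and headers returned through the Ingress Controller
curl -iL http://guestbook.vcap.me/

# If the DNS name does not resolve from your machine, you can still reach the
# controller directly and force the Host header
curl -i http://<my-cluster-addr> -H "Host: guestbook.vcap.me"
```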
### Checks

- [ ] I can access the app from my browser without any port forwarding

<details>
<summary><em>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</em></summary>

You can find the complete solution [here](https://github.com/padok-team/dojo-guestbook/blob/feat/solution/manifests/ingress.yaml). Don't spoil yourself too much!

</details>

## 9. Make it fail: probes

### Why

Our app is deployed, but it is not very functional: we are missing the Redis used for storage! However, before deploying it, let's make it explicit that the app does not work. When Redis is not reachable, the app should be marked as failing. That way, someone can get the alert and fix the issue.

That is the job of *Kubernetes probes*: usually through an HTTP request, they continuously ask the application whether it is still running.

### What

This time we need to modify the manifest of our deployment! Read [this article from the Padok blog](https://www.padok.fr/en/blog/kubernetes-probes) or the [documentation](https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/) to learn how to set them.

Our app exposes its status at `/healthz`; if the application is not functional, this route returns a 5XX error.

### How

1. Modify your deployment and add probes to your main container. Which type of probes do you need?
2. Apply it. Is your application still available on the URL? It should not be, but rolling updates protect you. Ask a teacher about it.
3. Remove the "zombie" pods. You can delete and re-apply the deployment, but a more elegant solution is to _scale down_ the replica set under the deployment (`kubectl scale replicaset <my-rs> --replicas 0`). You don't know what a replica set is? Ask!
4. Is your website still available? Does the [HTTP error code](https://developer.mozilla.org/en-US/docs/Web/HTTP/Status) make sense? (The commands after this list show where to look.)
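Once the probes are in place and Redis is still missing, these commands show how the failure surfaces (the pod name is whatever `kubectl get pods` gives you):

```bash
# READY should drop to 0/1 once the readiness probe starts failing
kubectl get pods

# Look at the events for "Readiness probe failed" / "Liveness probe failed"
kubectl describe pod <one-of-your-pods>

# The health route itself answers with a 5XX while Redis is missing
kubectl port-forward <one-of-your-pods> 3000:3000
curl -i http://localhost:3000/healthz
```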
### Checks

- [ ] All my pods are "notReady" or "Failing"
- [ ] The website is down

<details>
<summary>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</summary>

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: guestbook
spec:
  replicas: 3
  selector:
    matchLabels:
      app: guestbook
      project: dojo
  template:
    metadata:
      labels:
        app: guestbook
        project: dojo
    spec:
      containers:
        - name: guestbook
          image: dixneuf19/guestbook:v0.1.0
          ports:
            - containerPort: 3000
              name: http
          readinessProbe:
            httpGet:
              path: "/healthz"
              port: http
          livenessProbe:
            httpGet:
              path: "/healthz"
              port: http
```
</details>

## 10. Install a complex Redis app with Helm

### Why

We want to fix our app and give it some persistent storage. However, Redis is a stateful application, a bit more complex than our simple webservice. We could write our own manifests for its deployment, but we would certainly make some mistakes. Let's use what the community offers us!

Helm is a tool which helps us:

- Generate manifests from YAML templates. You can reduce the boilerplate of your code, reduce repetition, etc.
- Manage our deployments as "packages", and distribute or use remote packages made by the community.

### What

The [Helm documentation](https://helm.sh/docs/intro/quickstart/) is quite good, but don't lose too much time on it unless you have time to spare.

We will only need one command, which installs or upgrades a _release_ (i.e. an installed instance of a package). We will use the _redis_ chart from the _bitnami_ repository, identified by its URL. Lastly, we will set one specific option, using a `values.yaml` file.

```bash
helm upgrade --install <release-name> <chart-name> --repo <repo-url> -f <path-of-values-file>
```

### How

1. We will use the _Bitnami_ Redis chart, you can find its [source code here](https://github.com/bitnami/charts/tree/master/bitnami/redis).
2. Create your `values.yaml` file. You only need to set `architecture: standalone`, but you can explore other options in the `values.yaml` of the repository.
3. Deploy your release with the `helm` command:
   - You can name your release as you want, but if you give it the same name as the chart, the names of the resources will be shorter.
   - The chart you want to use is called `redis`
   - The Helm repository URL is https://charts.bitnami.com/bitnami
   - Don't forget to set your values file
4. Explore what has been created: pods, deployments (why is there none?), services, etc.

### Checks

- [ ] I have 1 redis pod running
- [ ] I have one helm release deployed: `helm ls`

<details>
<summary><em>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</em></summary>

Simply run:

```bash
helm upgrade --install redis redis --repo https://charts.bitnami.com/bitnami --set architecture=standalone
```

</details>

## 11. Connect the app to Redis

### Why

Well, you absolutely want a working guestbook for yourself, no?

### What

Your application uses an environment variable to set the host of the Redis server, as you did previously in the docker-compose file.

The [official documentation](https://kubernetes.io/docs/tasks/inject-data-application/define-environment-variable-container/) is very clear! You just need to find the hostname of your Redis. Since it is an internal call, you need to use the *Service* created by the Helm chart.

Once it is set correctly, your app should be _Ready_ and you should be able to access it from its public URL.

### How

1. Find the name of your Redis service. How should it be called from the pod?
2. Update your Deployment manifest and apply it (a sketch follows this list).
3. Enjoy your application!
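As a sketch, the change boils down to adding an environment variable to the container in your Deployment. The value depends on your Helm release: with a release named `redis` in standalone mode, the Bitnami chart usually creates a `redis-master` service, but check the real name with `kubectl get services` first. Excerpt of the container spec:

```yaml
      containers:
        - name: guestbook
          image: dixneuf19/guestbook:v0.1.0
          ports:
            - containerPort: 3000
              name: http
          env:
            # hostname of the Redis service created by the Helm chart
            - name: REDIS_HOST
              value: redis-master
```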
### Checks

- [ ] My pods are up and running
- [ ] I can actually use the guestbook from its public URL

<details>
<summary><em>Compare your work to the solution before moving on. Are there differences? Is your approach better or worse? Why?</em></summary>

You can find the complete solution [here](https://github.com/padok-team/dojo-guestbook/blob/feat/solution/manifests/deployment.yaml). Don't spoil yourself too much!

</details>

## 12. To go further

This dojo is already quite long, but here are some ideas to continue the exercise and learn more about Kubernetes! Ask your teacher for more details on where to start, what to look for, etc.

- Make your app resilient with [Pod Anti Affinities](https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity)
- Scale your app with a [HorizontalPodAutoscaler](https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/). You will need to generate some load on the application (adding a route could work).
- Easily deploy your own Kubernetes cluster with [`k0s`](https://k0sproject.io/) and [`k0sctl`](https://github.com/k0sproject/k0sctl)
- Automate your deployment with [ArgoCD](https://www.padok.fr/en/blog/kubernetes-cluster-gitops), the GitOps way
- Deploy a monitoring solution with [Prometheus and Grafana](https://github.com/prometheus-community/helm-charts/tree/main/charts/kube-prometheus-stack)
- _Advanced_: [Make an app scale depending on a Redis queue](https://github.com/padok-team/dojo-kubernetes-prometheus)

## Cleanup

Stop the local kind cluster:

```bash
./scripts/teardown.sh
```

Once you are done with this exercise, be sure to delete the containers you
created:

```bash
docker ps --quiet | xargs docker stop
docker ps --quiet --all | xargs docker rm
```

I hope you had fun and learned something!
diff --git a/back b/back
new file mode 160000
index 0000000000000000000000000000000000000000..d5c39ec0552de2ce825c5f19e6ea18958f295dd1
--- /dev/null
+++ b/back
@@ -0,0 +1 @@
+Subproject commit d5c39ec0552de2ce825c5f19e6ea18958f295dd1
diff --git a/front b/front
new file mode 160000
index 0000000000000000000000000000000000000000..4641534153a3ecab06de6e26cbb7ef261afa3c0d
--- /dev/null
+++ b/front
@@ -0,0 +1 @@
+Subproject commit 4641534153a3ecab06de6e26cbb7ef261afa3c0d
diff --git a/solutions/dockerfiles/Dockerfile.front b/solutions/dockerfiles/Dockerfile.front
new file mode 100644
index 0000000000000000000000000000000000000000..134518e0799a9c7ebf6a7b0f08b755df938a1377
--- /dev/null
+++ b/solutions/dockerfiles/Dockerfile.front
@@ -0,0 +1,15 @@
+FROM node:16
+
+WORKDIR /front/
+
+COPY package.json package-lock.json /front/
+RUN npm install
+
+COPY src/ /front/src/
+COPY public/ /front/public/
+COPY .env /front/
+RUN npm run build
+
+FROM nginx
+
+COPY --from=0 /front/build/ /usr/share/nginx/html/
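A note on `solutions/dockerfiles/Dockerfile.front`: the paths it copies are relative to the build context, so build it with the `front` submodule checked out and used as the context (the image tag and published port below are arbitrary):

```bash
docker build -f solutions/dockerfiles/Dockerfile.front -t vroum-front ./front
docker run --publish 8080:80 vroum-front
```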