# Freifunk Leipzig - Erstaufnahme Einrichtungen

This repo contains the config and documentation for our installations at

* `Am Deutschen Platz`
* `Arno-Nitzsche-Straße`

---

**this is a work in progress**

* this repo was created for `Am Deutschen Platz` and was then reused for `Arno-Nitzsche-Straße`
* therefore the ansible stuff is a bit smelly
* there is a lot of documentation missing for `Arno-Nitzsche-Straße`
* ...

---

## Quick Links

* [Documentation](documentation/MAIN.md)
* [Incidents](documentation/INCIDENTS.md)
* [Todo](documentation/TODO.md)

## Usage

### Requirements

* `pass` (password manager)
* `pandoc` (offline documentation generation)
* `python3` (ansible)
* `python3-venv` (ansible)
* `rsync` (ansible)

### Initial Setup

0. install requirements
1. clone repo and change directory: `git clone --recurse-submodules https://git.sr.ht/~hirnpfirsich/ffl-eae-adp && cd ffl-eae-adp`
2. create python3 virtual environment: `python3 -m venv ansible-environment`
3. enter python3 virtual environment: `. ansible-environment/bin/activate`
4. install ansible and dependencies: `pip3 install -r ansible-environment.txt`
5. import all gpg keys for `pass`: `gpg --import files/gpg/*`
6. trust all imported gpg keys: run `gpg --edit-key <key-id>` with `trust` and `5` for every key
7. create `ssh_config` with all hosts: `ansible-playbook playbook_create_ssh_config.yml` (use `-e jumphost=eae-adp-jump01` to configure ssh to use `eae-adp-jump01` as the jump host)
8. leave python3 virtual environment: `deactivate`

### Daily Usage

Before doing anything you need to enter the environment: `. environment`

After running `playbook_create_ssh_config.yml` you can call `ssh` with just the name of the machine (e.g. `ssh gw-core01`). The `ssh_config` file is generated from the ansible inventory. Should something in the inventory change, or should you want to use or change the jumphost, simply re-run the playbook.

Passwords are managed using `pass`. Simply call `pass` after sourcing the environment.
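For illustration, the generated `.ssh/ffl_eae_adp_config` could contain entries roughly like the following sketch. The hostnames come from this README, but the addresses, user, and exact layout are assumptions — the real file is produced by `playbook_create_ssh_config.yml` from the inventory:

```
# sketch of a generated ssh_config entry set (addresses and user are hypothetical)
Host eae-adp-jump01
    HostName 10.84.254.0
    User root

Host gw-core01
    # internal machine, reached via the jump host when -e jumphost=... was set
    HostName 10.84.1.1
    User root
    ProxyJump eae-adp-jump01
```

With such entries in place, `ssh gw-core01` transparently hops through `eae-adp-jump01`.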
### Monitoring

Initially we deployed the monitoring on `monitoring01` (which lives on `hyper01` in `Am Deutschen Platz`). After deploying the second camp we decided to move the monitoring into the `cloud`. The new monitoring stack runs on `eae-adp-jump01`. Unfortunately `prometheus` crashes every few hours on `openbsd`, so there is a cronjob restarting `prometheus` every 2 hours on `eae-adp-jump01`. As soon as someone finds the time we will move the monitoring stack onto a normal linux machine.

* old monitoring: `monitoring01 - 10.84.1.51`
  * is not getting new configs via ansible
  * rocks an old version of the grafana dashboard
  * the facility management still has a link to this instance
* new monitoring: `eae-adp-jump01 - 10.84.254.0`

Both stacks offer the following services:

* `prometheus`: `tcp/9090`
* `alertmanager`: `tcp/9093`
* `grafana`: `tcp/3000`

Use `ssh -D 8888 eae-adp-jump01` and configure this socks proxy in your favorite browser to visit the webguis.

### Descriptions

* `environment`: configures the environment (path to the `pass` store, http(s) socks proxy and python venv for ansible)
* `playbook_create_ssh_config.yml`: playbook to create an additional `ssh_config` file (`.ssh/ffl_eae_adp_config`) that gets included in the default `ssh_config`
* `playbook_distribute_authorized_keys.yml`: deploy `files/authorized_keys` on all machines
* `playbook_provision_accesspoints.yml`: configure accesspoints
* `playbook_provision_backbone.yml`: configure the wg tunnel and ospf link between `gw-core01` and `eae-adp-jump01`
* `playbook_provision_eap-adp-jump01.yml`: general system configuration for `eae-adp-jump01` (monitoring, routing, ...)
* `playbook_provision_hyper01.yml`: general system configuration for `hyper01` and creation of vms/containers
* `playbook_provision_monitoring.yml`: configure and install prometheus and grafana on `monitoring01`
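The prometheus restart cronjob mentioned in the monitoring section isn't checked into this repo; on OpenBSD a root crontab entry achieving "restart every 2 hours" could look like the sketch below. The use of `rcctl` and the exact schedule are assumptions:

```
# hypothetical root crontab entry on eae-adp-jump01 (OpenBSD):
# work around the prometheus crashes by restarting it every 2 hours
0 */2 * * * /usr/sbin/rcctl restart prometheus
```

Edit it with `crontab -e` as root; check the actual crontab on `eae-adp-jump01` for the real entry.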