Compare commits

129 Commits: `prettify_j...master`

Author | SHA1 | Date |
---|---|---|
Gregor Michels | eadcf6f296 | |
Gregor Michels | 7de03e6cd6 | |
Gregor Michels | 4904933475 | |
Gregor Michels | 2299e3aff1 | |
Gregor Michels | d1c1f34bf8 | |
Gregor Michels | 14df3e24df | |
Gregor Michels | d7206111fa | |
Gregor Michels | e5a0e2352d | |
Gregor Michels | 509f56e167 | |
Gregor Michels | 1171e76cd7 | |
Gregor Michels | a236643969 | |
Gregor Michels | 191b7f2a77 | |
Gregor Michels | b621e8dd48 | |
Gregor Michels | 01c9fa2317 | |
Gregor Michels | 23dba0c340 | |
Gregor Michels | eaeb360e6c | |
Gregor Michels | 72df3338d3 | |
Gregor Michels | 8fa87485ff | |
Gregor Michels | 220bb149c8 | |
Gregor Michels | 7b452966d2 | |
Gregor Michels | 68ee430145 | |
Gregor Michels | 3e7178b5ec | |
Gregor Michels | 473d7aa05a | |
Gregor Michels | c7989547aa | |
Gregor Michels | 767f76e13e | |
Gregor Michels | fe220194f9 | |
Gregor Michels | 2962a08be7 | |
Gregor Michels | 77454046b8 | |
Gregor Michels | a837a2b916 | |
Gregor Michels | e3793d07a8 | |
Gregor Michels | fe8d3b5dec | |
Gregor Michels | 3ec08cb017 | |
Gregor Michels | 67db4a7521 | |
Gregor Michels | bdc70d629b | |
Gregor Michels | a23c6dc488 | |
Gregor Michels | e750db6783 | |
Gregor Michels | 2d74d25dfc | |
Gregor Michels | 718bdb4594 | |
Gregor Michels | 0edf72cb66 | |
Gregor Michels | c40e49d645 | |
Gregor Michels | 6524149a48 | |
Gregor Michels | ea1cf9dc43 | |
Gregor Michels | f832189278 | |
Gregor Michels | 325e06cdc0 | |
Gregor Michels | f806e83705 | |
Gregor Michels | 1a834de455 | |
Gregor Michels | 2d85ba5226 | |
Gregor Michels | d5617ce1e9 | |
Gregor Michels | dce505c762 | |
Gregor Michels | 26884f6d8d | |
Gregor Michels | ff623aec65 | |
Gregor Michels | bd2dd8795e | |
Gregor Michels | 032937c7ea | |
Gregor Michels | b1a9e763ec | |
Gregor Michels | e79bc30351 | |
Gregor Michels | 44a1e9613a | |
Gregor Michels | cdac15e466 | |
Gregor Michels | e3d8369768 | |
Gregor Michels | 9afebe8438 | |
Gregor Michels | d808775f39 | |
Gregor Michels | 0db1eb2c6a | |
Gregor Michels | 51a8de4299 | |
Gregor Michels | 1ea236b206 | |
Gregor Michels | a1870e78ba | |
Gregor Michels | 0bf94d10a2 | |
Gregor Michels | ec0cfc908a | |
Gregor Michels | fb901524ca | |
Gregor Michels | 9506e94dad | |
Gregor Michels | 3e2fc42c19 | |
Gregor Michels | 6d30cf07da | |
Gregor Michels | 34e4fbf000 | |
Gregor Michels | 35f48f1bad | |
Gregor Michels | 090b8b4709 | |
Gregor Michels | 91918091ec | |
Gregor Michels | 03e2543f95 | |
Gregor Michels | 0475923590 | |
Gregor Michels | 69834a8d2b | |
Gregor Michels | f6ba9f5aa6 | |
Gregor Michels | c0f8ec9b6e | |
Gregor Michels | 64721148d8 | |
Gregor Michels | e3b111f2c7 | |
Gregor Michels | 5fa5b13da7 | |
Gregor Michels | 5017cb5dfb | |
Gregor Michels | d58b09272e | |
Gregor Michels | 9cfee1f384 | |
Gregor Michels | dca1261f07 | |
Gregor Michels | ffb7617db8 | |
Gregor Michels | 8389a18488 | |
Gregor Michels | 258355170b | |
Gregor Michels | 74075f307f | |
Gregor Michels | d4b0e622ef | |
Gregor Michels | 2a781ae751 | |
Gregor Michels | c058853f73 | |
Gregor Michels | b5fefed0be | |
Gregor Michels | f791ad76ab | |
Gregor Michels | 329b09bd9a | |
Gregor Michels | bf1c7bd3ab | |
Gregor Michels | 1e82bcc6b9 | |
Gregor Michels | ce15f497e7 | |
Gregor Michels | a4718616a9 | |
Gregor Michels | 5337a22df1 | |
Gregor Michels | 8370f150a6 | |
Gregor Michels | e110320999 | |
Gregor Michels | 7244b53d6d | |
Gregor Michels | ab2ab6601e | |
Gregor Michels | 5f4430e4b8 | |
Gregor Michels | d780bdd4fb | |
Gregor Michels | 82a50739b1 | |
Gregor Michels | 3c69441681 | |
Gregor Michels | 8d4fc76a81 | |
Gregor Michels | e9e0b07230 | |
Gregor Michels | 4afda5bdd9 | |
Gregor Michels | 1579bbdd47 | |
Gregor Michels | 02115216d6 | |
Gregor Michels | 2cc3c9457a | |
Gregor Michels | 61c1255e64 | |
Gregor Michels | e2be3c1c2d | |
Gregor Michels | 13ea6beabc | |
Gregor Michels | ad46726773 | |
Gregor Michels | d732b5c1bd | |
Gregor Michels | 3a03ff7cdd | |
Gregor Michels | f7827b6fd9 | |
Gregor Michels | a1a92d66cc | |
Gregor Michels | 421cb9ab18 | |
Gregor Michels | 0e3ff8b22f | |
Gregor Michels | a038b5e5ff | |
Gregor Michels | 166a2d33b8 | |
Gregor Michels | 4a784df86c | |
Gregor Michels | aa8e746faf |

```diff
@@ -1,2 +1,3 @@
 ansible-facts.json/
 switch-configs-stock/
+*.html
```

```diff
@@ -0,0 +1,3 @@
+[submodule "roles/gekmihesg.openwrt"]
+	path = roles/gekmihesg.openwrt
+	url = https://github.com/gekmihesg/ansible-openwrt.git
```

README.md (37 changed lines)

```diff
@@ -1,11 +1,18 @@
-# Freifunk Leipzig - Erstaufnahme Einrichtung - Am Deutschen Platz
+# Freifunk Leipzig - Erstaufnahme Einrichtungen
 
-This repo contains the config and documentation for our installation at the "Erstaufnahme Einrichtung - Am Deutschen Platz"
+This repo contains the config and documentation for our installations at
+* `Am Deutschen Platz`
+* `Arno-Nitzsche-Straße`
 
 ---
 
 **this is a work in progress**
 
+* this repo was created for `Am Deutschen Platz` and was then reused for `Arno-Nitzsche-Straße`
+* therefore the ansible stuff is a bit smelly
+* there is a lot of documentation missing for the `Arno-Nitzsche-Straße`
+* ...
+
 ---
 
 ## Quick Links
```

```diff
@@ -27,7 +34,7 @@ This repo contains the config and documentation for our installation at the "Ers
 ### Initial Setup
 
 0. install requirements
-1. clone repo and change directory: `git clone https://git.sr.ht/~hirnpfirsich/ffl-eae-adp && cd ffl-aea-adp`
+1. clone repo and change directory: `git clone --recurse-submodules https://git.sr.ht/~hirnpfirsich/ffl-eae-adp && cd ffl-aea-adp`
 2. create python3 virtual environment: `python3 -m venv ansible-environment`
 3. enter python3 virtual environment: `. ansible-environment/bin/activate`
 4. install ansible and dependencies: `pip3 install -r ansible-environment.txt`
```

```diff
@@ -46,6 +53,30 @@ Should something in the inventory change or you want to use/change the jumphost
 
 Passwords are managed using `pass`. Simply call `pass` after sourcing the environment.
 
+### Monitoring
+
+Initially we've deployed the monitoring on `monitoring01` (that lives on `hyper01` in `Am Deutschen Platz`).
+
+After deploying the second camp we've decided to move the monitoring into the `cloud`.
+The new monitoring stack runs on `eae-adp-jump01`.
+Unfortunately `prometheus` crashes every few hours on `openbsd`.
+So there is a cronjob restarting `prometheus` every 2 hours on `eae-adp-jump01`.
+
+As soon as someone finds the time we will move the monitoring stack onto a normal linux machine.
+
+* old monitoring: `monitoring01 - 10.84.1.51`
+  * is not getting new configs via ansible
+  * rocks an old version of the grafana dashboard
+  * the facility management still has a link to this instance
+* new monitoring: `eae-adp-jump01 - 10.84.254.0`
+
+Both stacks offer the following services:
+* `prometheus`: `tcp/9090`
+* `alertmanager`: `tcp/9093`
+* `grafana`: `tcp/3000`
+
+Use `ssh -D 8888 eae-adp-jump01` and configure this SOCKS proxy in your favorite browser to visit the web GUIs.
+
 ### Descriptions
 
 * `environment`: configure environment (path to `pass` store, http(s) socks proxy and python venv for ansible)
```
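The `ssh -D` SOCKS proxy mentioned in the Monitoring section can also be made persistent in `~/.ssh/config` -- a sketch, assuming the host alias matches an existing ssh config entry:

```
Host eae-adp-jump01
    # open a SOCKS5 proxy on localhost:8888 whenever this host is connected
    DynamicForward 8888
```

With this in place a plain `ssh eae-adp-jump01` is enough; the browser proxy settings stay `localhost:8888`.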

```diff
@@ -1,28 +1,94 @@
 [accesspoints]
-ap-c5d1 ip=10.84.1.33 location=office-facility channel_2g=1 channel_5g=36 txpower_2g=12 txpower_5g=13
-ap-ac7c ip=10.84.1.31 location=office-social channel_2g=11 channel_5g=161 txpower_2g=12 txpower_5g=13
+ap-c5d1 ip=10.84.1.33 location=office-social2 channel_2g=1 channel_5g=36 txpower_2g=12 txpower_5g=13
+ap-ac7c ip=10.84.1.31 location=office-social1 channel_2g=11 channel_5g=161 txpower_2g=12 txpower_5g=13
 ap-8f42 ip=10.84.1.36 location=tent-1 channel_2g=6 channel_5g=40
 ap-0b99 ip=10.84.1.32 location=tent-2 channel_2g=11 channel_5g=44
 ap-c495 ip=10.84.1.34 location=tent-3 channel_2g=1 channel_5g=48
 ap-2bbf ip=10.84.1.30 location=tent-4 channel_2g=11 channel_5g=149
 ap-1a38 ip=10.84.1.35 location=tent-5 channel_2g=6 channel_5g=153
 ap-8f39 ip=10.84.1.37 location=tent-5 channel_2g=1 channel_5g=157
+ap-1293 ip=10.84.1.38 location=office-facility channel_2g=1 channel_5g=100 txpower_2g=6 txpower_5g=7
+
+ap-b62f ip=10.85.1.31 location=tent-1 channel_2g=1 channel_5g=36 txpower_2g=15 txpower_5g=20
+ap-b656 ip=10.85.1.35 location=tent-1 channel_2g=6 channel_5g=140 txpower_2g=15 txpower_5g=20
+ap-b6ee ip=10.85.1.32 location=office-security channel_2g=1 channel_5g=48 txpower_2g=12 txpower_5g=13
+ap-b5df ip=10.85.1.38 location=office-social channel_2g=11 channel_5g=153 txpower_2g=12 txpower_5g=13
+ap-b6cb ip=10.85.1.33 location=office-facility channel_2g=6 channel_5g=60 txpower_2g=12 txpower_5g=13
+ap-b641 ip=10.85.1.30 location=tent-2 channel_2g=1 channel_5g=136 txpower_2g=15 txpower_5g=20
+ap-b6d7 ip=10.85.1.34 location=tent-2 channel_2g=6 channel_5g=104 txpower_2g=15 txpower_5g=20
+ap-b644 ip=10.85.1.36 location=tent-2 channel_2g=11 channel_5g=124 txpower_2g=15 txpower_5g=20
+ap-b634 ip=10.85.1.37 location=tent-3 channel_2g=1 channel_5g=116 txpower_2g=15 txpower_5g=20
+ap-b6cc ip=10.85.1.39 location=tent-3 channel_2g=6 channel_5g=40 txpower_2g=15 txpower_5g=20
+ap-b682 ip=10.85.1.40 location=tent-3 channel_2g=11 channel_5g=64 txpower_2g=15 txpower_5g=20
+
+ap-116e ip=10.86.1.31 location=p203 disable_2g=1 channel_5g=48 txpower_2g=17 txpower_5g=20
+ap-11c4 ip=10.86.1.32 location=office-security channel_2g=1 channel_5g=36 txpower_2g=17 txpower_5g=20
+ap-1202 ip=10.86.1.33 location=p201 disable_2g=1 channel_5g=153 txpower_2g=17 txpower_5g=20
+ap-12a8 ip=10.86.1.34 location=p104 channel_2g=11 channel_5g=60 txpower_2g=17 txpower_5g=20
+ap-13ac ip=10.86.1.35 location=p106 disable_2g=1 channel_5g=116 txpower_2g=17 txpower_5g=20
+ap-144c ip=10.86.1.36 location=p108 channel_2g=1 channel_5g=140 txpower_2g=17 txpower_5g=20
+ap-12c2 ip=10.86.1.37 location=p207 disable_2g=1 channel_5g=128 txpower_2g=17 txpower_5g=20
+ap-16bc ip=10.86.1.38 location=p205 channel_2g=6 channel_5g=104 txpower_2g=17 txpower_5g=20
+ap-1374 ip=10.86.1.39 location=kitchen-og disable_2g=1 channel_5g=153 txpower_2g=17 txpower_5g=20
 
 [accesspoints:vars]
 ansible_remote_tmp=/tmp
-garet_profile=aruba-ap-105_21.02
-garet_release=845a6ba
+garet_profile=aruba-ap-105_22.03
+garet_release=9974455
+
+[aptype_aruba_ap_303]
+ap-11c4
+ap-116e
+ap-1202
+ap-12a8
+ap-13ac
+ap-144c
+ap-12c2
+ap-16bc
+ap-1374
+
+[aptype_aruba_ap_105]
+ap-c5d1
+ap-ac7c
+ap-8f42
+ap-0b99
+ap-c495
+ap-2bbf
+ap-1a38
+ap-8f39
+ap-1293
+ap-b62f
+ap-b656
+ap-b6ee
+ap-b5df
+ap-b6cb
+ap-b641
+ap-b6d7
+ap-b644
+ap-b634
+ap-b6cc
+ap-b682
 
 [switches]
-sw-access01 ip=10.84.1.11
-sw-access02 ip=10.84.1.12
+sw-access01 ip=10.84.1.11 base_mac=bc:cf:4f:e3:bb:8d location=office-social2
+sw-access02 ip=10.84.1.12 base_mac=bc:cf:4f:e3:ac:39 location=tent-5
+sw-access04 ip=10.84.1.14 base_mac=5c:e2:8c:6a:7f:cc location=tent-2
+
+[switches_stock]
+ffl-ans-sw-distribution01 ip=10.85.1.11 base_mac=5c:e2:8c:60:82:fb sw_type=gs1900-10hp location=office-facility
+ffl-ans-sw-access01 ip=10.85.1.12 base_mac=04:bf:6d:15:c6:b3 sw_type=gs1900-10hp location=tent-1
+ffl-ans-sw-access02 ip=10.85.1.13 base_mac=04:bf:6d:15:c6:92 sw_type=gs1900-10hp location=tent-2
+sax-rgs-sw-access01 ip=10.86.1.11 sw_type=s2800s-8t2f-p location=p104
+sax-rgs-sw-access02 ip=10.86.1.12 sw_type=s2800s-8t2f-p location=p204
 
 [gateways]
 gw-core01 ip=10.84.1.1
+ffl-ans-gw-core01 ip=10.85.1.1
+sax-rgs-gw-core01 ip=10.86.1.1 garet_profile=sophos-sg-xxx_22.03 garet_release=601bc29
 
 [gateways:vars]
 ansible_remote_tmp=/tmp
-garet_profile=sophos-sg-125r2-22.03
+garet_profile=sophos-sg-125r2_22.03
 garet_release=89cbd27
 
 [server]
```

```diff
@@ -38,3 +104,84 @@ mon-e2e-wan01 ip=192.168.0.3 cpus=1 disk=10 memory=256 net='{"net0":"name=e
 
 [container:vars]
 ostemplate=local:vztmpl/debian-11-standard_11.3-1_amd64.tar.zst
+
+[openwrt:children]
+switches
+
+[site_adp]
+ap-c5d1
+ap-ac7c
+ap-8f42
+ap-0b99
+ap-c495
+ap-2bbf
+ap-1a38
+ap-8f39
+ap-1293
+sw-access01
+sw-access02
+sw-access04
+gw-core01
+hyper01
+monitoring01
+mon-e2e-clients01
+mon-e2e-wan01
+
+[site_adp:vars]
+wifi_ssid="GU Deutscher Platz"
+wifi_encryption=none
+backoffice_wifi_ssid="GU Deutscher Platz Backoffice"
+backoffice_wifi_encryption=psk2
+backoffice_wifi_psk="{{ lookup('passwordstore', 'wifi/GU_Deutscher_Platz_Backoffice') }}"
+site=adp
+
+[site_ans]
+ap-b641
+ap-b62f
+ap-b6ee
+ap-b6cb
+ap-b6d7
+ap-b656
+ap-b644
+ap-b634
+ap-b5df
+ap-b682
+ap-b6cc
+ffl-ans-gw-core01
+ffl-ans-sw-distribution01
+ffl-ans-sw-access01
+ffl-ans-sw-access02
+
+[site_ans:vars]
+wifi_ssid="GU Arno-Nitzsche-Strasse"
+wifi_encryption=none
+wifi_disabled=0
+backoffice_wifi_ssid="GU Arno-Nitzsche-Strasse BO"
+backoffice_wifi_encryption=psk2
+backoffice_wifi_psk="{{ lookup('passwordstore', 'wifi/GU_Arno-Nitzsche-Straße_Backoffice') }}"
+mgmt_gateway=10.85.1.1
+site=ans
+
+[site_rgs]
+sax-rgs-sw-access01
+sax-rgs-sw-access02
+sax-rgs-gw-core01
+ap-11c4
+ap-116e
+ap-1202
+ap-12a8
+ap-13ac
+ap-144c
+ap-12c2
+ap-16bc
+ap-1374
+
+[site_rgs:vars]
+wifi_ssid="{{ lookup('passwordstore', 'wifi/site_rgs_ssid') }}"
+wifi_encryption=none
+wifi_disabled=0
+backoffice_wifi_ssid="{{ lookup('passwordstore', 'wifi/site_rgs_backoffice_ssid') }}"
+backoffice_wifi_encryption=psk2
+backoffice_wifi_psk="{{ lookup('passwordstore', 'wifi/site_rgs_backoffice') }}"
+mgmt_gateway=10.86.1.1
+site=rgs
```
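For quick checks, the `name key=value` host lines in the inventory above can be sliced with standard tools. A throwaway helper sketch (not part of the repo; the sample lines are taken from the inventory):

```shell
# Print name, ip and location for each accesspoint host line.
awk '/^ap-/ {
    name = $1; ip = ""; loc = ""
    for (i = 2; i <= NF; i++) {
        split($i, kv, "=")
        if (kv[1] == "ip")       ip  = kv[2]
        if (kv[1] == "location") loc = kv[2]
    }
    print name, ip, loc
}' <<'EOF'
ap-c5d1 ip=10.84.1.33 location=office-social2 channel_2g=1 channel_5g=36
ap-b62f ip=10.85.1.31 location=tent-1 channel_2g=1 channel_5g=36
EOF
# prints:
#   ap-c5d1 10.84.1.33 office-social2
#   ap-b62f 10.85.1.31 tent-1
```

Point the same awk program at the real `hosts` file to get the full list.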

```diff
@@ -4,3 +4,4 @@ interpreter_python=/usr/bin/python3
 gathering=smart
 fact_caching=jsonfile
 fact_caching_connection=ansible-facts.json
+callbacks_enabled = ansible.posix.profile_tasks
```

```diff
@@ -0,0 +1 @@
```
(new file: a diagrams.net / draw.io network diagram; the compressed XML blob is omitted here)

Binary file not shown. (After: Size: 64 KiB)

```diff
@@ -1118,3 +1118,501 @@ all updates were done using the new "idempotent" `playbook_sysupgrade` (since
```

* 2022.10.24 01:44 - 01:46: `gw-core01`

=> downtime of the accesspoints in the specified timeframe
=> downtime of `gw-core01` in the specified timeframe


025 2022.11.19 04:00 (ANS) | (maintenance) (try to) steer clients into 5 GHz band
---------------------------------------------------------------------------------

---

_this log entry was added way after doing the actual work.
Please read it with a grain of salt_

---

**problem**:
* (if I remember correctly) way more clients in the 2.4 GHz band than in the 5 GHz band (3/4 to 1/4)

**solution**:
* halved the transmit power in the 2.4 GHz band
* increased the transmit power in the 5 GHz band by 1 dBm
* implemented by `5017cb5`

**impact**:
This restarted wifi on all APs at the same time.
Downtime for all clients for a few seconds at 04:00 in the morning.

**validation**:
One day afterwards it seemed like there were more clients in the 5 GHz band (50/50), but the datarates dropped for most of them.

**criticism**:
* placement, transmit power and supported bands of the clients impact 5 GHz utilization
* unsure what the problem actually is
* also did not validate correctly for a few days


026 2022.11.20 15:30 (ANS) | (maintenance) replace SFP modules
--------------------------------------------------------------

---

_this log entry was added way after doing the actual work.
Please read it with a grain of salt_

---

**intro**:
The needed SFP modules for `ans` did not arrive in time for the installation.
Therefore we've installed super old and shitty transceivers (> 10 years old, > 70°C, ...) to get a working network.

**impact**:
* L2 interruption (<= 10 seconds) for all tents


027 2022.11.21 02:00 | (maintenance) attach volume to `eae-adp-jump01` for prometheus
-------------------------------------------------------------------------------------

---

_this log entry was added way after doing the actual work.
Please read it with a grain of salt_

---

**problem**:
After installing a prometheus stack onto `eae-adp-jump01` (`8389a18`) the `/var/` partition filled up after a few days.
Limiting the size of the TSDB did not resolve this issue (maybe I've misconfigured the limit).

**solution**:
* `sysupgrade` to `OpenBSD 7.2`
* attach a 20 GB block device to the vm and mount it as `/var/prometheus`:
```
eae-adp-jump01# rcctl stop prometheus
eae-adp-jump01# rm -r /var/prometheus/*
eae-adp-jump01# sysctl hw.disknames
eae-adp-jump01# fdisk -iy sd1
eae-adp-jump01# disklabel -E sd1
> a a
>
> *
> q
eae-adp-jump01# newfs sd1a
eae-adp-jump01# diff -Naur /etc/fstab.20221121 /etc/fstab
--- /etc/fstab.20221121 Sun Jun 26 23:00:39 2022
+++ /etc/fstab  Mon Nov 21 02:01:03 2022
@@ -8,3 +8,4 @@
 e1c3571d54635852.j /usr/obj ffs rw,nodev,nosuid 1 2
 e1c3571d54635852.i /usr/src ffs rw,nodev,nosuid 1 2
 e1c3571d54635852.e /var ffs rw,nodev,nosuid 1 2
+a0469c9f38992e1d.a /var/prometheus ffs rw,nodev,nosuid 1 2
eae-adp-jump01# mount /var/prometheus
eae-adp-jump01# chown _prometheus:_prometheus /var/prometheus
eae-adp-jump01# rcctl start prometheus
eae-adp-jump01# syspatch
eae-adp-jump01# reboot
```


028 2022.11.29 02:00 | periodically restart prometheus
------------------------------------------------------

---

_this log entry was added way after doing the actual work.
Please read it with a grain of salt_

---

**problem**:
`prometheus` crashed regularly on `eae-adp-jump01`.
It seems like `OpenBSD` is missing some functionality on file handles that lets `prometheus` crash.
Here is a [GitHub issue](https://github.com/prometheus/prometheus/issues/8799) (for an older `OpenBSD` release) that describes the same problems.

**solution**:
until I've got time to install a new linux machine somewhere that does the monitoring: regularly restart `prometheus`:
```
eae-adp-jump01# crontab -e
[...]
0 */2 * * * rcctl restart prometheus
```


029 2022.11.29 03:00 (ANS) | (maintenance) automagically start offloader
------------------------------------------------------------------------

---

_this log entry was added way after doing the actual work.
Please read it with a grain of salt_

---

**problem**:
ANS washes the traffic via a FFLPZ/FFDD offloader vm.
There was only a script that manually started the offloader vm.
On reboots the offloader vm would not automagically start.

**solution**:
implement a service that starts the vm

**impact**:
after validating the script on another openwrt machine I tested the script in production.
This created the following downtimes:
* `offloader` down from 02:50 to 03:05 -- service interruption for the public wifi
* `ffl-ans-gw-core01` down from 02:53 to 02:55 -- service interruption for everybody

**disclaimer**:
The script is manually deployed on `ffl-ans-gw-core01` and therefore not part of this repo at the moment
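Since the start script is only deployed by hand, here is roughly what such a service looks like on OpenWrt -- a hypothetical procd init-script sketch, not the actual deployed code (the path `/usr/local/bin/start_offloader.sh` is an assumed placeholder for the existing manual start script):

```
#!/bin/sh /etc/rc.common
# Hypothetical sketch: start the offloader VM at boot via procd.

START=99        # start late, after networking is up
USE_PROCD=1

start_service() {
    procd_open_instance
    procd_set_param command /usr/local/bin/start_offloader.sh
    procd_set_param respawn    # re-run if the supervising process exits
    procd_close_instance
}
```

Dropped into `/etc/init.d/` and enabled with `/etc/init.d/<name> enable`, procd then takes care of starting it on every boot.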


030 2022.11.30 15:30 (ANS) | (maintenance) replace switches
-----------------------------------------------------------

---

_this log entry was added way after doing the actual work.
Please read it with a grain of salt_

---

**intro**:
The switches installed into `ans` were defective.
Not every boot had working PoE.
Meaning that a power outage could result in no power for the APs.
Fortunately `Zyxel` replaced the devices.

**replacement log**:
* 16:34:30 - 16:34:50: `ffl-ans-sw-distribution01`
  * quickly replaced device and connections
  * => l2 interruption for `ffl-ans-sw-access01` and `ffl-ans-sw-access02`
  * => power cycle of APs in social, security and facility container
* 16:49: `ffl-ans-sw-access01`
  * power up new device alongside
  * bridge old and new device with short patch cable
  * move sfp uplink to new device
  * move first ap to new switch
  * wait till ap was back up and serving clients
  * move second ap
  * teardown old device
  * => minimal l2 downtime
  * => rolling AP downtimes
* 17:09:30 - 17:10:15: `ffl-ans-sw-access02`
  * quickly replaced device and connections
  * => power cycle of all APs in `tent 2&3`


031 2022.12.23 14:00 (ADP) | enable backoffice wifi in tent 1
--------------------------------------------------------------

**intro**:
The facility management moved into the "logistics" container

**problem**:
Because there is no AP inside the container the wifi experience sucks.
The printer is unable to connect to the wifi and the notebook has 0 bars.

**hotfix**:
Also distribute the backoffice wifi from `tent 1` because it is the nearest ap.
Committed through `d808775`.
Rolled out at around 14:00.

**impact**:
* quick wifi outage for tent 1 (a few seconds)

**validation**:
* the wifi experience still sucks
* scanning sometimes works
* I see the backoffice devices from the logistics container associating and reassociating multiple times per minute

**longterm**:
* install an ap inside the logistics container
* ETA for installation: `29.12.2022`


032 2022.12.29 14:00 - 16:00 (ADP) | add ap to new facility management container
--------------------------------------------------------------------------------

We installed an accesspoint into the new facility management container (`ap-1293`).
Afterwards we disabled the temporary backoffice wifi in tent 1 (`e3d8369`).

The new ap is connected via a `CAT.7 outdoor 30m` cable and is plugged into `sw-access01 port 6`.

This closes incident `031`.


033 2023.01.02 17:45 (ANS) | network core rebooted
--------------------------------------------------

someone accidentally unplugged the power for the network core in the facility management container

**impact**:
* downtime from 17:40 - 17:45
* public wifi was down 5 minutes longer because `batman` needed to converge on the `offloader`


034 2023.01.08 04:00 - 05:30 | (maintenance) bring accesspoints onto OpenWrt 22.03
----------------------------------------------------------------------------------

**target version**:
```
-----------------------------------------------------
https://git.sr.ht/~hirnpfirsich/garet
garet 9974455, aruba-ap-105_22.03
-----------------------------------------------------
```

**changelog**:
* `OpenWrt 22.03.2`
* support for `lldp` -- needs to be configured

**canary test**:
* `ap-c5d1` (adp)
* `ap-b6ee` (ans)
* down from 04:00 - 04:05
* `playbook_provision_accesspoints` restarted wifi on `ap-c5d1` 04:21 one more time

**Deutscher Platz**:
* down from 05:00 - 05:06
* `playbook_provision_accesspoints` restarted wifi on the following aps again at around 05:10
  * `ap-8f39`
  * `ap-c495`
  * `ap-ac7c`
  * `ap-0b99`

**Arno-Nitzsche-Str**:
* down from 05:21 - 05:27
* `playbook_provision_accesspoints` did not restart the wifi again

**summarized impact**:
* `Arno-Nitzsche-Str`: wifi down from 05:21 - 05:27
  * `ap-b6ee` from 04:00 - 04:05
* `Deutscher Platz`: wifi down from 05:00 - 05:06
  * `ap-c5d1` from 04:00 - 04:05
  * `ap-8f39` additional wifi restart at 05:10
  * `ap-c495` additional wifi restart at 05:10
  * `ap-ac7c` additional wifi restart at 05:10
  * `ap-0b99` additional wifi restart at 05:10


035 2023.01.16 04:30 - 12:15 (ANS) | uplink broken
--------------------------------------------------

facility management rebooted the gigacube


036 2023.01.24 02:15 (RGS) | (maintenance) increase tx power of aps
-------------------------------------------------------------------

```
RUNNING HANDLER [reload wireless] *********************************************************************************************************************
Tuesday 24 January 2023  02:16:45 +0100 (0:00:31.967)       0:02:06.789 *******
```

see `191b7f2` for details


037 2023.01.29 23:10 - 2023.01.30 17:00 (ADP) | unstable ethernet link to tent-3
--------------------------------------------------------------------------------

**impact**:
very unstable uplink for the ap in `tent-3`

**hotfix**:
shut down the ap via poe to move clients onto another accesspoint (there really is no other ap in this tent though :()

**problem**:
someone butchered the ethernet cables (from the network core) by squeezing and bending them through cable guides.

**fix**:
Tried "unbending" them and the link came back!


038 2023.02.01 04:00 (ADP) | move to different mullvad account
--------------------------------------------------------------

old pubkey: `Sqz0LEJVmgNlq6ZgmR9YqUu3EcJzFw0bJNixGUV9Nl8=`

```
RUNNING HANDLER [reload network] **********************************************************************************************************************
Wednesday 01 February 2023  03:59:45 +0100 (0:00:03.789)       0:01:22.895 ****
changed: [gw-core01]
```

see commit `68ee430` for details


039 2023.02.07 (ADP) | unstable ethernet link in tent-3 (again)
---------------------------------------------------------------

**introduction**:
the uplink for `tent-3` went flaky again

**problem**:
the cables took irreparable damage from mishandling (see `incident 037` for details)

**fix**:
* install new access switch into `tent-2` (`sw-access04`: `220bb14`)
* migrate the uplink for `tent-3` from the `core` onto `sw-access04`
040 2023.02.28 08:00 (ADP) | dns issues
---------------------------------------

**introduction**:
Someone on site called and notified me that "the internet is not working".

**problem**:
`gw-core01` stopped serving dns queries:
```
root@gw-core01:~# logread | grep max
Tue Feb 28 08:44:16 2023 daemon.warn dnsmasq[1]: Maximum number of concurrent DNS queries reached (max: 150)
```

**fix**:
* increased `maxdnsqueries`
* increased `dnscache`
* changed upstream dns to `9.9.9.9` (quad9) and `1.1.1.1` (cloudflare)

see `a236643` for details

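On OpenWrt the fix above maps onto dnsmasq's UCI options roughly like this (a hedged sketch: the option names come from the stock dnsmasq UCI schema, but the values here are illustrative, not the ones actually deployed; those live in `a236643`):

```shell
# sketch only: raise dnsmasq limits and switch upstream resolvers on OpenWrt
uci set dhcp.@dnsmasq[0].dnsforwardmax='500'   # dnsmasq's dns-forward-max, default 150 (matches the log line above)
uci set dhcp.@dnsmasq[0].cachesize='1000'      # dnsmasq's cache-size
uci -q delete dhcp.@dnsmasq[0].server
uci add_list dhcp.@dnsmasq[0].server='9.9.9.9'
uci add_list dhcp.@dnsmasq[0].server='1.1.1.1'
uci commit dhcp
/etc/init.d/dnsmasq restart
```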
041 2023.03.11 19:20 - 2023.03.13 20:30 (ADP) | broken management vpn tunnel
----------------------------------------------------------------------------

```
root@gw-core01:~# date
Mon Mar 13 19:40:48 2023
root@gw-core01:~# wg
interface: wg0
  public key: 1lYOjFZBY4WbaVmyWFuesVbgfFrfqDTnmAIrXTWLkh4=
  private key: (hidden)
  listening port: 51820

peer: 9j6aZs+ViG9d9xw8AofRo10FPosW6LpDIv0IHtqP4UM=
  preshared key: (hidden)
  endpoint: 162.55.53.85:51820
  allowed ips: 0.0.0.0/0
  latest handshake: 1 day, 23 hours, 55 minutes, 49 seconds ago
  transfer: 1.17 GiB received, 16.71 GiB sent
  persistent keepalive: every 15 seconds
root@gw-core01:~# ifdown wg0
root@gw-core01:~# ifup wg0
root@gw-core01:~# echo wg0 still not handshaking properly
root@gw-core01:~# uci delete network.wg0.listen_port
root@gw-core01:~# /etc/init.d/network reload
root@gw-core01:~# echo wg0 is up again !
root@gw-core01:~# uci commit network
```

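A stale handshake like the "1 day, 23 hours, ..." above is easy to watch for from a cron job. A hedged sketch (the function name and any threshold are mine) that converts `wg`'s human-readable handshake age into seconds:

```shell
# sketch: turn wg's "latest handshake" phrase into seconds for easy comparison
handshake_age_seconds() {
    # expects e.g. "1 day, 23 hours, 55 minutes, 49 seconds"
    echo "$1" | awk -F', ' '{
        total = 0
        for (i = 1; i <= NF; i++) {
            split($i, p, " ")
            if (p[2] ~ /^day/)    total += p[1] * 86400
            if (p[2] ~ /^hour/)   total += p[1] * 3600
            if (p[2] ~ /^minute/) total += p[1] * 60
            if (p[2] ~ /^second/) total += p[1]
        }
        print total
    }'
}
```

For scripting, `wg show wg0 latest-handshakes` prints the raw epoch timestamp per peer instead, which avoids the parsing entirely.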
042 2023.03.12 18:00 - 2023.03.22 19:30 (RGS) | `ap-1374` (`kitchen-og`) down
-----------------------------------------------------------------------------

`ap-1374` is (mostly) down since 2023.03.12 18:00.
Neither the ethernet link nor the poe is coming up.
```
user@freifunk-admin:~$ date && ssh sax-rgs-sw-access02
Wed 15 Mar 2023 12:07:55 AM CET
[...]
sax-rgs-sw-access02# show logging buffered

Log messages in buffer
[...]
5;Feb 17 2000 05:37:36;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
4;Feb 17 2000 05:37:37;%TRUNK-4-INFO: Power-Over-Ethernet on gi0/7 Powered Down!
4;Feb 17 2000 05:37:48;%TRUNK-4-INFO: Power-Over-Ethernet on gi0/7: Detected Standard PD, Delivering power!
5;Feb 17 2000 05:37:54;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 17 2000 05:38:26;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
5;Feb 17 2000 05:38:28;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 17 2000 05:38:32;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
5;Feb 17 2000 05:38:35;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 17 2000 05:38:38;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
5;Feb 17 2000 05:38:59;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 20 2000 10:02:32;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/6, changed state to down
5;Feb 20 2000 10:02:35;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/6, changed state to up
5;Feb 24 2000 22:50:15;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
5;Feb 24 2000 22:50:15;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 24 2000 22:50:15;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
4;Feb 24 2000 22:50:18;%TRUNK-4-INFO: Power-Over-Ethernet on gi0/7 Powered Down!
5;Feb 25 2000 13:57:06;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/6, changed state to down
5;Feb 25 2000 13:57:09;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/6, changed state to up
4;Feb 26 2000 21:52:17;%TRUNK-4-INFO: Power-Over-Ethernet on gi0/7: Detected Standard PD, Delivering power!
5;Feb 26 2000 21:52:22;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 26 2000 21:52:54;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
5;Feb 26 2000 21:52:57;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 26 2000 21:53:01;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
5;Feb 26 2000 21:53:03;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 26 2000 21:53:06;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
5;Feb 26 2000 21:53:26;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to up
5;Feb 27 2000 00:31:56;%LINEPROTO-5-UPDOWN: Line protocol on GigabitEthernet0/7, changed state to down
4;Feb 27 2000 00:31:57;%TRUNK-4-INFO: Power-Over-Ethernet on gi0/7 Powered Down!
5;Feb 27 2000 05:24:48;%AAA-5-LOGIN: New ssh connection for user admin, source 10.86.254.0 ACCEPTED
6;Feb 27 2000 05:25:00;%AAA-6-INFO: User 'admin' enter privileged mode from ssh with level '15' success
sax-rgs-sw-access02# show clock
2000-02-27 05:43:37 Coordinated(UTC+0)
```

**needed fix**:
* check the keystone modules on site
* also check the module for `0/6` (there are some `ifInErrors`)

**additional work - set correct time on switches (done)**:
```
sax-rgs-sw-access0X> enable
sax-rgs-sw-access0X# configure terminal
sax-rgs-sw-access0X(config)# clock timezone CET +1
sax-rgs-sw-access0X(config)# clock set 00:26:15 mar 15 2023
sax-rgs-sw-access0X(config)# clock source ntp
sax-rgs-sw-access0X(config)# ntp server pool.ntp.org
sax-rgs-sw-access0X(config)# exit
sax-rgs-sw-access0X# write
```

**disable port till fix is there - done 16.03.2023 00:40**:
```
sax-rgs-sw-access02> enable
sax-rgs-sw-access02# configure terminal
sax-rgs-sw-access02(config)# interface GigabitEthernet0/7
sax-rgs-sw-access02(config-if-GigabitEthernet0/7)# no poe enable
sax-rgs-sw-access02(config-if-GigabitEthernet0/7)# exit
sax-rgs-sw-access02(config)# exit
sax-rgs-sw-access02# write
```

**actual fix - done 22.03.2023**:
* reterminate the keystone modules for both links (`GigabitEthernet0/6` and `GigabitEthernet0/7`)
* re-enable poe on `GigabitEthernet0/7`
* test by
  * resetting the link counters on `sax-rgs-sw-access02`
  * running `iperf3` from the ap to the core gateway (bidirectional)
  * looking at the counters again

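The bidirectional `iperf3` test from the last step looks roughly like this (a hedged sketch: the host name follows this log, and it assumes `iperf3 -s` is running on the gateway; `-R` reverses the direction, so both directions get exercised even on builds without `--bidir`):

```shell
# run on the ap, against an iperf3 server on the core gateway
iperf3 -c gw-core01 -t 30        # ap -> gateway
iperf3 -c gw-core01 -t 30 -R     # gateway -> ap (reverse mode)
```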
043 2023.03.20 01:30 | (maintenance) update eae-adp-jump01
----------------------------------------------------------

```
syspatch
pkg_add -uU
reboot
```

044 2023.03.25 23:45 - 2023.03.26 13:00 (ANS) | broken upstream
---------------------------------------------------------------

`ffl-ans-gw-core01` hasn't completed a handshake with `eae-adp-jump01` since 2023.03.25 at around 23:45.
Additionally, the facility management called and said that there was "no internet" on site.

The facility management will drive to the ANS and check in with me to talk about the next steps.

**solution**: after power cycling the gigacube the upstream came back

045 2023.04.01 - 2023.04.02 (ANS) | fibre cut to tent-1
-------------------------------------------------------

**issue**: fibre cut from the `facility management` container to `tent-1`

**solution**: replace the fibre with an outdoor copper cable

**discussion**:
* longterm: replace the copper with fibre

File diff suppressed because one or more lines are too long
Image changed (Before: 117 KiB, After: 126 KiB)

@ -4,7 +4,7 @@ groups:
      # from https://awesome-prometheus-alerts.grep.to/rules.html#rule-prometheus-self-monitoring-1-2
      - alert: PrometheusTargetMissing
        expr: up == 0
-       for: 0m
+       for: 1m
        labels:
          severity: critical
        annotations:

@ -21,10 +21,70 @@ groups:
          description: "The uptime of a node changed in the last two hours. VALUE = {{ $value }}\n LABELS = {{ $labels }}"

      - alert: PublicWifiUpstreamLost
-       expr: sum(probe_success{job="e2e_clients_v4"}) == 0
+       expr: sum(probe_success{job="e2e_adp_clients_v4"}) == 0
        for: 0m
        labels:
          severity: critical
        annotations:
          summary: The public wifi lost its ability to route into the internet
          description: "check the vpn connection"

  - name: ServerSpecific
    rules:
      # https://awesome-prometheus-alerts.grep.to/rules#rule-host-and-hardware-1-7
      #
      # Please add ignored mountpoints in node_exporter parameters like
      # "--collector.filesystem.ignored-mount-points=^/(sys|proc|dev|run)($|/)".
      # Same rule using "node_filesystem_free_bytes" will fire when disk fills for non-root users.
      - alert: HostOutOfDiskSpace
        expr: (node_filesystem_avail_bytes * 100) / node_filesystem_size_bytes < 10 and ON (instance, device, mountpoint) node_filesystem_readonly == 0
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: Host out of disk space (instance {{ $labels.instance }})
          description: "Disk is almost full (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"

      # https://awesome-prometheus-alerts.grep.to/rules#rule-host-and-hardware-1-9
      - alert: HostOutOfInodes
        expr: node_filesystem_files_free / node_filesystem_files * 100 < 10 and ON (instance, device, mountpoint) node_filesystem_readonly == 0
        for: 2m
        labels:
          severity: warning
        annotations:
          summary: Host out of inodes (instance {{ $labels.instance }})
          description: "Disk is almost running out of available inodes (< 10% left)\n VALUE = {{ $value }}\n LABELS = {{ $labels }}"

  - name: Network
    rules:
      - alert: PortChangedState
        expr: changes(ifLastChange[2h]) != 0
        labels:
          severity: warning
        annotations:
          summary: "{{ $labels.ifName }} on {{ $labels.instance }} changed its state {{ $value }}x time(s) in the last 2 hours"
          description: "This alarm will clear in 2 hours"

      - alert: PortIfInErrors
        expr: increase(ifInErrors[2h]) > 0 or increase(node_network_receive_errs_total[2h]) > 0
        labels:
          severity: critical
        annotations:
          summary: "{{ if $labels.ifName }} {{ $labels.ifName }} {{ else }} {{ $labels.device }} {{ end }} on {{ $labels.instance }} has {{ $value }} ifInErrors in the last 2 hours. This alarm will clear automatically in 2 hours"
          description: "For some reason the port is throwing ifInErrors"

      - alert: PortIfOutErrors
        expr: increase(ifOutErrors[2h]) > 0 or increase(node_network_transmit_errs_total[2h]) > 0
        labels:
          severity: critical
        annotations:
          summary: "{{ if $labels.ifName }} {{ $labels.ifName }} {{ else }} {{ $labels.device }} {{ end }} on {{ $labels.instance }} has {{ $value }} ifOutErrors in the last 2 hours"
          description: "For some reason the port is throwing ifOutErrors. This alarm will clear automatically in 2 hours"

      - alert: SNMPNodeRebooted
        expr: (sysUpTime / 100) <= (60 * 60 * 2)
        labels:
          severity: critical
        annotations:
          summary: "{{ $labels.instance }} rebooted at least one time in the last two hours"
          description: "This alarm will clear in 2 hours"

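The `SNMPNodeRebooted` expression works because SNMP's `sysUpTime` is reported in TimeTicks, i.e. hundredths of a second, so dividing by 100 yields seconds of uptime and the rule fires while that is at most two hours. The same arithmetic as a shell sanity check (the helper name is mine):

```shell
# sysUpTime is in 1/100 s; uptime <= 7200 s means "rebooted in the last 2h"
rebooted_recently() {
    ticks=$1
    [ $(( ticks / 100 )) -le $(( 60 * 60 * 2 )) ]
}
```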
@ -3,3 +3,5 @@ ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAykqqvlk2XTSa5xxAtWUA7RpEcI0rPBIAmFmT+zzU2VdU
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDTFLWYfL9LhAj1tTfjdy2b9ncT3IqxDSXrVyG0Anci7H37GbkVGxiQw86HPR5CL2TzIX9jhrWnK8T3f/CQmhEiYhjE6p3kRkZN+krTTfm77sarb3wdg1OHtmlCNm6EmkIOuK7ewIzHgNsHW5jeNg4wl/klmXK4XKMIiJsr7s1gTZ6F7jz3av2p0aaHF6ntAyMmSPJTVhCbvUQaM27tSaPjGUOya2sxXajgIVbVBSMsaSwSGfOCty/Bef4WTM14NNMiSpdYs3uW1BMM39bYy2vgONFPeQLjmWr/X940wZZvYCcEaYSyTAbIXdaVyilxyC69ZDEg/rf3jvyemO0pWQn3 chaosox@molly (Linux)
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDaUUOwqS03cgJmvA5ryagZXKxmU03gLfoYZVQ8zJIDeHzgNyfFX8orXCxL6vZYeMuM5KRFsqXQPXFc5ILjoen5qhkTI+qc70nHr2Zvy3+Zq4zVqcE8Vpcgfy4VbzJPrrcxwxmdeGzf8LlfrWPlX5OG8rZx8kpSqUpDadG3ma1guykyq4FN8oEEuzlo6gBjgSuse9qrC94JwRUbd+jHN/RbaKnbXKwslO73zVNkdbq3BVCHLjLNq40NkxQv49vYVyWIozugFoNTul3hI/M0OPxNfkx+kzkG+2cgE8nxLdjkojqAkVKNM+SC70UCuEVofEYbyyzXDWwZ3JwDO9mUpsH2XKPLsdKCwlIc9dwnKu5Pu2Q9rsvpbgmcTNhsV8EXXglsROO3YXspKwg9oAXAFjmeAzYylrMRjFv/q0RL5d8+YnN6xoRZad7/fcj5OHpaxFTDeyTQ50TSc1uIlWJmnFCE0PzWmuTz/BkR0X52fh7Tu2vE90nOn24lrZxokiLVwDo7WBbBtT7MXTnr0wKf3jMPSk7Mp2e4TaLoole4NyNHa7RwHWhwu4o/y29qwW4SvKoSOiKFnpa7HiRCkEvxpuYgVA/KsmGWySRXNndMn4TOFfRJcMHx1B67XgGLTDTtGMrP+eN2mIerAxdv/4uAIwnQUaWnycPex90BN1uyc6/itw== chaosox@wintermute
ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILWktkX613ZL6iXrSXXFykgXj3XHTGhHAUMXLypKV5Qw chaosox@molly (WSL)
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQCVJAFhvSqCggIxCjxl8ybLUGP/WJJJ67AzipkIVpVsfYUwNGvMUFu13meHBaf34c2sVVSn7dV0qw51Xj3h570KFFuijFwsQbRb7xtyPY6c+Vw7Ehhu9EPcopxGltSk8VmxNdyO5X4DxVrnGN2xZOQq/4aDNnl1aegVtsMEXfy/wUvkMp89gJmn9u2yXhjnbgdYB4VE/Zxtwi1h0JqL6WbGf/wrvwjD6xJBmUe+G/+2tdcyYcEPmyObpNq4RYtu3JhNYD8xXRxEFVy+dNXm2P3/8JspW6N7VHYpLQTvDf3PzxoTlfENap+pgihag1URJzhqhJ4g+OHGAcpk3rKcnJbF rsa-key-20221112
ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCzA1vg4hukp41JZYW7I/TG02s4pBEiuUgfUqmQskbjoKpvcHxhZv82S3QqJQnqBANtOL+QTCrVyMEn6pjQ61BVTgaYNldUnedWorHXS93r32Eqz+NWW7zTqPgylEdWzznY3Sx8mimaHNnGjBjcgAgyVekk+vx4LOaTw8pqu85Da5eKAp97XQTxdJtEsxG1etp7LZrvKCL2+1KGz5J/AoyMHMdEWgMRHZPFzOJJ81uxBwvEt+1dIcO72bl+rNK1AgwEnqyrJav5mA0Z1Dj85cCb6QccUZI2J3c9RDSLMIrab359rdMj7mOVgm5GGaHS2VZb2Zjsxux9JBY7yv/wh5katkSXlpsW9LS5SGN4TPsALVEmDPLkaA0LXDsXyvWs12Sdamw5igdpJhWfnmsqvkiD3eNb0uTf/uhO8qyNx8fzn1WhxkDEphSfmH/0rP3mCvkyTWqBS1Dr15Iq4dHyVc1IfhijErbk0RUF3aedmvj7Jbpg/NfZ3dGrHUWEbRhZXngKgS9WJ3KayF7T6CsH/P2iPufYaa9PcOAp01Ezdk/wvdq1O1hRb5H76+5ilKg1HjtJ/kMgX7U39jchNZ94KGIDDiWKKvq3D+Q9Cxe6qndiGmPAyCszQ8dzEs/p11QxhDlIJHLaUF/nMRuVsFaJljIe5SA2r4tYzWQ5O7BL09e6qQ== mowoe@decima

@ -0,0 +1,13 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----

mDMEY2/88hYJKwYBBAHaRw8BAQdAV9QF5wXsizMDUD2w2GTUurA04t+z3n7SAq4V
blntKKu0Fk1heCA8YWRtaW5AbG9kcmljaC5kZT6ImQQTFgoAQRYhBCLp8m6zG1Mb
22CRck/7U9n7BCTMBQJjb/zyAhsDBQkDwzg+BQsJCAcCAiICBhUKCQgLAgQWAgMB
Ah4HAheAAAoJEE/7U9n7BCTMkIMBAKHQMDe8Rb1bi2mF+caQyYP5sklMVbOTlSY4
f1tbqzG3AQDCZoClNCVF7ppCYjPsEpuhayRmS+mI9YR4JuF73owsDbg4BGNv/PIS
CisGAQQBl1UBBQEBB0CbniuHfjUu/nd6uBDYVkW4MSJo3lpg/Mdt5s64NY4jQwMB
CAeIfgQYFgoAJhYhBCLp8m6zG1Mb22CRck/7U9n7BCTMBQJjb/zyAhsMBQkDwzg+
AAoJEE/7U9n7BCTM/CwBAO+rrWsyE4x0Owx4bggh144JIu5J5DGij1KboGsoxFW0
AP9Xe4aoaYfKNEouckI2G0cmDE/9FtA9v73SkzeXTKQfDw==
=0vzZ
-----END PGP PUBLIC KEY BLOCK-----

@ -0,0 +1,51 @@
-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBGOvMSsBEACf1T6IuReSSg+qo/qZCPvOKAiZhVc230/iM0sPaxhtCOD7mpNA
NX59dLNGfGF3jlA4+cL1pI48klLB98VttVQwxGBTN1RtIjN/WXk5OoNXfGMbj63V
/sn/4vhgWcW1r1sJBE5I5/E5GU7RJfW44IRNbMoGjNoMiAHAX/RS7Sk2JsYM4fd/
RcSIFTicCUuMoyIVUw7PYUPertOf1l/+vS4VI/J9X7ykk+jdWqu5LmdKbFY+1CSf
cmqn5/5oG00wByTEGelzW9f3tAr+WGIclD0VKZeIEwnifxrucI3EWPb/p2yatSLS
1OZj4Ub0N3bHSUwOuBuzwKD9zK4PpDUCD8we9uN8SdvhDNh/zVoQh3RrqaQJBixb
hpTEPwZt17Mxop3KNruBiGHutnOO3OAr8U1iOeIwLjiG/BnuGMm97tniLAZi/7d5
4/AgmF2yYdDvl1TfYAMD80ZA97hWR50anjPQGaCgp7lOruffMPK4kEOv1t2+Tn0P
Msf04xYa+8kn/ck9TUxFcEkhtJ9nhqX8JY+a/HPI3wFY0FOveatWAg2y17FR3yv2
laOEIztRoFmj8foxWv8bUZKAkmXlkSbbQwJyBNQKYFEpag4VR7bCZNyOo9pgLTza
r5pVFuejTKnYfWful+fiHlfyYtaIQEzAKDiJFY/N9vMRBsOO3PPZZ/tXFwARAQAB
tCFtb3dvZS1pbmZyYS1kcGwgPG1vd29lQG1vd29lLmNvbT6JAk4EEwEIADgWIQQx
eV+4PFvI2a2yPLoBNQ7MK4nlcgUCY68xKwIbAwULCQgHAgYVCgkICwIEFgIDAQIe
AQIXgAAKCRABNQ7MK4nlchfID/9XzAxH0CFa0v3skLBcAWbFvy2vWvJ9rPALGnO4
IYznzUy9lt8LT4lK1fqcGMWpe1Ore1e1rdtHLNZuII2VkTJqImsWT4B4JMO2vkfy
rDlOc0cO+/hS1jchs4165YiCvnhoGO9kCGvFNzvoDY7xFHccPO8STRSiRpC7YMXH
JNSAONUHe25zlfRORauAa58QAaxRhhb8E7zcnad2jIbYEBqBh2rFgMmVk99L1SBt
IhrLYX5iRKKwSEiuhjBeTlUbhXPMr2WGE1PFfGKT0HFmDBC1Z8csz4mZSa/wQoH8
40lx+c3wBS/CE2JKRM3LCJX0Tm9pjhPKg5ykV9WT5QHxy4Tbv7FUbA9Fs9VuT1r7
cGprH+aXFWKK3DQJ3CsvhOx9zRSOGk1/RqteJ8LeaFxwocjOfQtjFv3uJ/Mc5A8V
lM8OfB6hLdTX+HP7U9glaT4A7Dmu/q6CGvKN1+kDGf8Ansn2yGrOG1kD9Fd2htEH
xqZK34PZALOZzG644adnHS6yYBCXjEQdKWfg3wOMWDITVFeirFvJ654r7WgrUJ6F
jE4CNNu7R8e1Nm6Z4iVP0yjJjnZBZHSTW3nY2hz8fXEpxjBB8GCSDjVS7fkA5gBe
A9TVT3EWXl+zImRoWqjLGXbFMrM0aAjO+ralQzExATxqXH75l4i2FDX78sXGfsmE
DLVjerkCDQRjrzErARAA9ikeaDPeCGPeFsxSkxTNMtVdguY3WOo/dG/HOIE+DAgK
A0ZrDr6IhrnKPu4tsAjpxY7qgaT1crkXKFkc/eRWFUS5+3x2JkbLD0Qzhm+S76HE
NL+UtiXXNOTGt3yFLZrq6PF8LN00e0ottzcEr52R8UShvKyH3GotQuULdOmOxa7V
0HAdPAkI6waFgZ6c5Oje4R6aCTK5VuVBgZXuh5TRkF/fcvtP5lI94dKVHAIE+OGX
Rh1aKuzxwrVlwgbFKKqySnUdc/RO6xD6Cw2KNjs2HYSNw5oM0oEYJo5IQWTpw2zF
Ut7pOx4Htbtv7DXr1OiPOFjKl/9MgbErmdmw6Ovjw4IT++jVrUWOJy16fiDsulk0
9Z5Lv6PQLB814mXyCCWK9Juhymv4Ii1d8u7f55Di7vVJoyT2dG23OloYitWSfldA
Cp3jVtv6YHxjOR3/LzI6Qdg23vOxFYYesDb8REnGpood0ProNdesfd8TJIuPTJGo
fanWAmk+10mIgm0DuBv9ZAbxFPa/PJBlARCapr3uMmtJ+RwXW8k/MPzoLEsM7VWv
rSvGAjACVLV+FjV+nHzITOOX7xHoT3xl81cXx4NdyCGsHlpoE8Us07g8qMGJX45+
4N6YZi/x0/5M0qwdJTQoMIPqystBCGfijLLFP/+vpjm21WRc9gMrQVlORsJbuLkA
EQEAAYkCNgQYAQgAIBYhBDF5X7g8W8jZrbI8ugE1DswrieVyBQJjrzErAhsMAAoJ
EAE1DswrieVyQCMP/0d9bXYs9yYq1PkopIOOc8BnfNSTMkl8qjZR7Cx5IBH6wHWx
Q4RuETNsMJhAgZyjCKP2A/SS8BmFsc2OcnGVjdYDDovrfZW53Cz0kM0KS1NY0t+S
IdGw64twNoxQxtSvySTC7kofBMJxbjdEAyxnft0qPWDKrWxRiGVepcnIGnxjHOGU
L6GyJfw/0X5lF8yVIsio8A0cvlhgpL5p7blgrYrmyCPV2HIfUgCDAqDnm8Kfsr6e
FqARo3P5SrGCKDXBSG9NSjsbKRATdpg9ZwMqoKNMCNzUl1DbzJ+BzY9PWn31FeNL
BBd7DDp92gH+hgu1O23m/S6GX/ZehnyF8jucwMlY1S5giOOehmvLd1YZKlDTkpV2
9ucFM77IVyQryiix/vC31s0g/4aeKxFmkilKlEXqY2A7zfjhIy08xl9nr2BKd9aL
ZTZLydkDuPWeYrk6yTJS8tSQyN5U4ivAdXVhi2/Da2+OEEsL3CgMp4HLiZDljHco
m0wJkU6O9psJGMuewgStPUhY0TkYOsR5vRu27HAy/c/YPFHYGTj7chSpR8zkBSYc
LZaotTEQWm+RYOtyDumuLXAuZzhYd9fbY3bpRNC1673+Bm9y0uNRtfxBC1j/K/bs
nTTsaduddyYg2mV8VlVt/hTTAreMwfDpejWq7W7xhcwP5MCovdGD+d/hCLk6
=OJvy
-----END PGP PUBLIC KEY BLOCK-----

@ -28,4 +28,10 @@ area 0.0.0.0 {
    interface wg0 {
        type p2p
    }
    interface wg2 {
        type p2p
    }
    interface wg3 {
        type p2p
    }
}

@ -0,0 +1,11 @@
# allow incoming udp packets for wg2
pass in proto udp from any to self port 51822

# allow ospf on wg2
pass on wg2 proto ospf

# allow prometheus on wg2
pass on wg2 proto tcp from any to self port 9100

# allow outgoing snmp on wg2
pass out on wg2 proto udp from self to any port snmp

@ -0,0 +1,11 @@
# allow incoming udp packets for wg3
pass in proto udp from any to self port 51823

# allow ospf on wg3
pass on wg3 proto ospf

# allow prometheus on wg3
pass on wg3 proto tcp from any to self port 9100

# allow outgoing snmp on wg3
pass out on wg3 proto udp from self to any port snmp

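Rule snippets like the two above can be sanity-checked before they go live using pf's dry-run mode (a hedged sketch; it assumes the snippets are included from `/etc/pf.conf` on the OpenBSD host):

```shell
pfctl -nf /etc/pf.conf   # parse the full ruleset without loading it
pfctl -f /etc/pf.conf    # load it for real
pfctl -sr | grep wg3     # confirm the wg3 rules are active
```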
@ -7,6 +7,9 @@ local function scrape()
  local metric_wifi_network_noise = metric("wifi_network_noise_dbm","gauge")
  local metric_wifi_network_signal = metric("wifi_network_signal_dbm","gauge")
  local metric_wifi_clients = metric("wifi_network_clients", "gauge")
  local metric_wifi_airtime_total = metric("wifi_network_airtime_total", "gauge")
  local metric_wifi_airtime_busy = metric("wifi_network_airtime_busy", "gauge")
  local metric_wifi_airtime_utilization = metric("wifi_network_airtime_utilization", "gauge")

  local u = ubus.connect()
  local status = u:call("network.wireless", "status", {})

@ -19,7 +22,7 @@ local function scrape()
      local labels = {
        channel = iw.channel(ifname),
        ssid = iw.ssid(ifname),
-       bssid = iw.bssid(ifname),
+       bssid = string.lower(iw.bssid(ifname)),
        mode = iw.mode(ifname),
        ifname = ifname,
        country = iw.country(ifname),

@ -37,11 +40,16 @@ local function scrape()
      local wifi_clients = 0
      for _ in pairs(iw.assoclist(ifname)) do wifi_clients = wifi_clients + 1 end

      local hostapd_status = u:call("hostapd." .. ifname, "get_status", {})

      metric_wifi_network_quality(labels, quality)
      metric_wifi_network_noise(labels, iw.noise(ifname) or 0)
      metric_wifi_network_bitrate(labels, iw.bitrate(ifname) or 0)
      metric_wifi_network_signal(labels, iw.signal(ifname) or -255)
      metric_wifi_clients(labels, wifi_clients)
      metric_wifi_airtime_total(labels, hostapd_status.airtime.time)
      metric_wifi_airtime_busy(labels, hostapd_status.airtime.time_busy)
      metric_wifi_airtime_utilization(labels, hostapd_status.airtime.utilization)
    end
  end
end

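The new airtime values from hostapd appear to be cumulative, so utilization over a scrape interval is the busy delta divided by the total delta. A hedged awk sketch of that arithmetic (the function name is mine, the units just need to match between samples):

```shell
# busy/total airtime deltas between two scrapes -> utilization fraction
airtime_ratio() {
    # $1=busy@t0 $2=total@t0 $3=busy@t1 $4=total@t1
    awk -v b0="$1" -v t0="$2" -v b1="$3" -v t1="$4" \
        'BEGIN { printf "%.2f\n", (b1 - b0) / (t1 - t0) }'
}
```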
@ -0,0 +1,2 @@
*
!.gitignore

@ -0,0 +1,12 @@
---
radios:
  radio0:
    type: "mac80211"
    path: "pci0000:00/0000:00:11.0"
    band: "2g"
    htmode: "HT20"
  radio1:
    type: "mac80211"
    path: "pci0000:00/0000:00:12.0"
    band: "5g"
    htmode: "HT20"

@ -0,0 +1,12 @@
---
radios:
  radio0:
    type: "mac80211"
    path: "platform/soc/a000000.wifi"
    band: "2g"
    htmode: "HT20"
  radio1:
    type: "mac80211"
    path: "platform/soc/a800000.wifi"
    band: "5g"
    htmode: "VHT20"

@ -1,2 +1,4 @@
EB0D409FD8884BBECC04532AF937CB4882C16136
C2AA3A4266D111B27C3774EB2438B8ADFDF45447
22E9F26EB31B531BDB6091724FFB53D9FB0424CC
31795FB83C5BC8D9ADB23CBA01350ECC2B89E572

Binary file not shown.
Some files were not shown because too many files have changed in this diff