Fixing Proxmox boot ending in grub prompt with ZFS disks

I run a Proxmox server and had replaced a failed HDD, /dev/sdd. The catch with ZFS is that the disk carries several partitions: when you replace an HDD you must recreate the same partitions on the new disk. If you do not, you still have a working ZFS disk, but after the replacement your Proxmox server can stop booting and end up at a grub> prompt.
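Recreating the partitions is usually done by replicating the GPT from a healthy mirror member with sgdisk. The sketch below only prints the commands instead of running them, so nothing destructive happens until you have reviewed them; /dev/sda (healthy source) and /dev/sdd (replacement) match the disks in this post but will differ per system:

```shell
SOURCE=/dev/sda   # healthy ZFS disk whose partition table we want to copy
TARGET=/dev/sdd   # freshly replaced disk
# Print (do not run) the sgdisk commands; drop the echos once verified.
echo "sgdisk --replicate=$TARGET $SOURCE"   # clone SOURCE's GPT onto TARGET
echo "sgdisk --randomize-guids $TARGET"     # give TARGET its own disk/partition GUIDs
```

Double-check with lsblk which disk is which before running anything like this for real.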

Today I fixed a failing Proxmox server like this: put the Proxmox ISO on a USB stick, boot from it, and go into the Proxmox debug mode; continue to the second prompt.

Then do the following:

  1. zpool import -R /mnt rpool
  2. mount -t proc /proc /mnt/proc
  3. mount --rbind /dev /mnt/dev
  4. mount --rbind /sys /mnt/sys
  5. # Enter chroot
  6. chroot /mnt /bin/bash
  7. source /etc/profile
  8. # Reinstall packages
  9. dpkg --configure -a

The next steps need a working network connection. If you have none, you can skip steps 1-4:

  1. apt-get update && apt-get dist-upgrade -y
  2. apt-get install --reinstall grub-pc
  3. apt-get install --reinstall zfs-initramfs
  4. apt-get install --reinstall pve-kernel-4.15.17-3 linux-image-amd64

 

  1. # Reinstall grub
  2. grub-install /dev/sda
  3. grub-install /dev/sdb

As my /dev/sdd is a ZFS-only disk, it cannot be used for grub, so I skipped that one.

  1. update-grub2
  2. for x in $(cat /proc/cmdline); do case $x in root=ZFS=*) BOOT=zfs; ;; esac; done
  3. grub-probe /
  4. update-initramfs -u -k all
  5. # Exit chroot
  6. exit
  7. reboot
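Step 2 in the list above only checks whether the kernel command line declares a ZFS root. Unrolled against a sample command line (the sample string here is made up; a real system reads /proc/cmdline), it works like this:

```shell
# Stand-in for "$(cat /proc/cmdline)"; a ZFS root contains root=ZFS=<pool>/<dataset>
cmdline="initrd=initrd.img root=ZFS=rpool/ROOT/pve-1 ro quiet"

BOOT=other
for x in $cmdline; do
  case $x in
    root=ZFS=*) BOOT=zfs ;;  # matched: booting from a ZFS dataset
  esac
done
echo "BOOT=$BOOT"   # prints BOOT=zfs for the sample above
```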

Initially my kernel gave me a verification error, but selecting an older kernel gave me a working Proxmox again. Reinstalling the failing kernel updated grub correctly, and the machine could boot again.

The next guide: installing NGINX-PROXY-MANAGER without the 'bad gateway' database issues

Before reading: the main reason nginx-proxy-manager was not running in my environment was that I was running my Linux system as an LXC container under Proxmox, not as a VM. After failing a second time with exactly the same config files that had worked before, I noticed that I was using LXC while the Proton machine was actually a VM; by changing to a normal Debian VM I quickly got a working version again.

This is probably also the reason Portainer was not able to start the database. So in the end: use a VM.

 

I was reading another website that explained how to install nginx-proxy-manager, but I failed. I kept getting 'bad gateway', and even after reading the GitHub posts about this issue you will not understand why everything is failing.

So yes, I installed the Proton VM, a virtual machine I did not enjoy, under my Proxmox, since that is what the guy from that other website used. As Docker is available, I had to start it during boot; those guidelines were described fine. But installing my own MySQL or MariaDB failed time after time, especially since MariaDB/MySQL had no root password set. So I failed. But why?

So in the end (luckily I had a snapshot, so I could roll back whenever I messed things up really badly) I restarted the machine and thought about what I had read on another website: nginx-proxy-manager 'now' provides a MySQL instance itself. AHA. So if that is true, I should forget everything about my previously self-installed Docker database containers. So I removed those failures from the system.

I checked the website of nginx-proxy-manager and thought: let's start over.

To make a long story short:

I made sure the server pointed to "host": "127.0.0.1", in config.json.

Make sure there is a config.json and place it where you run 'docker-compose up -d'.
I did it in /home/nginx-proxy-manager/.
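For context, the database settings the container reads come from a config.json along these lines (a sketch based on the nginx-proxy-manager examples of that era; the name/user/password values are placeholders, not the real ones):

```json
{
  "database": {
    "engine": "mysql",
    "host": "127.0.0.1",
    "name": "npm",
    "user": "npm",
    "password": "changeme",
    "port": 3306
  }
}
```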

And here is probably the catch, as the default example says:

 # Make sure this config.json file exists as per instructions above:
      - ./config.json:/app/config/production.json

The /app/config/production.json path is a location where you did not put your own config.json, so this part is totally wrong: the config.json with your database settings can never be found, so you get issues. And the comment 'make sure this config.json exists as per instructions above' gave me no clue, because what exactly is stated above?

So I tried what I did before: in docker-compose.yml I changed the location of my config.json to

- ./config.json:/home/nginx-proxy-manager/config.json

Then I restarted the Docker container, but I made a mistake: I started it with docker-compose up without the -d (detached) flag. So I got output on my screen, and suddenly I saw that there was a connection to the database, but my password was not accepted.

I made sure I shut the Docker container down again, removed the contents from the data directories, and started it once more. YEAH, finally, it was working.

In short, two things to note:

In config.json: change the host part to "host": "127.0.0.1",
In docker-compose.yml: point the config.json mapping to the actual location on your HDD where you put it.

Now start it with e.g. docker-compose up -d
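Pulling the two fixes together, the relevant part of docker-compose.yml could look like this sketch. The image name, ports, and host path are assumptions, and the container-side target follows the stock example; the important part, as the summary above says, is that the host-side path points at the config.json you actually created:

```yaml
version: "3"
services:
  app:
    image: jc21/nginx-proxy-manager:latest
    restart: always
    ports:
      - "80:80"   # public HTTP
      - "443:443" # public HTTPS
      - "81:81"   # admin UI
    volumes:
      # host path (left) must be where you actually saved config.json
      - /home/nginx-proxy-manager/config.json:/app/config/production.json
      - ./data:/app/data
      - ./letsencrypt:/etc/letsencrypt
```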

Have fun!

Moving a VMware VM to Proxmox: the steps to follow

First download the OVFtool from VMware and put the 'bundle' file on your Proxmox host. I used: VMware-ovftool-4.4.0-15722219-lin.x86_64.bundle

Make sure the prerequisites of ovftool are present on the Proxmox host:
apt install libncursesw5

This package could be needed (I saw somewhere that someone hit an error over this missing dependency), so I installed it.

Then make sure the VMware ovftool bundle can be executed, so chmod it to e.g. 755.

Install it with ./VMware-ovftool-xxxx; when the installation is finished it will tell you that it has been installed correctly.

Then follow these steps:

  1. ovftool vi:root@[vmware-machine]/[name-of-vm] .
    This will download the VM onto your Proxmox host.
  2. qm importovf 200 [name-of-vm].ovf local-zfs
    This will convert your VM to Proxmox and put it on (in my case) local-zfs.
    When ready, you need to add a network card to the hardware in Proxmox, as this is not transferred from VMware.
  3. Add the vmxnet3 driver for the network in Proxmox.
    Boot the machine and log in.

Check the ensXX device name, where XX can differ from what VMware used (easy to see with the command 'ip address').

Change it to the correct new name in the file /etc/network/interfaces, then shut the machine down again and reboot.
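As an example: if 'ip address' shows the NIC as ens18 while the old VMware config referenced ens192, the relevant part of /etc/network/interfaces would end up something like this (the interface name and the DHCP choice are assumptions for illustration):

```
auto ens18
iface ens18 inet dhcp
```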

The VM has now been converted from VMware to Proxmox.

NB. I used Proxmox 6.2-10 with their ISO on a HP Gen8 MicroServer (Community Edition)

Message to self: VMware root disabled on web UI and shell

pam_tally2 --user root

In my example there were 25 failed root login attempts:

Login Failures Latest failure From
root 25 01/02/20 10:56:59 unknown

To clear the password lockout use the following command:

pam_tally2 --user root --reset

ALT-F1 brings you to the shell if it is enabled (if it is not, you still get the console, but no username/password can be entered).

ALT-F2 brings you back

No space left on device (VMware)

The upgrade goes wrong:

esxcli software profile update -p ESXi-6.7.0-20190802001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

fails with no space left on device

with error:

[Errno 28] No space left on device
vibs = VMware_locker_tools-light_10.3.10.12406962-14141615
Please refer to the log file for more details.
[root@ezsetupsystemb05ada87ad44:~] cd /tmp
[root@ezsetupsystemb05ada87ad44:/tmp] wget http://hostupdate.vmware.com/software/VUM/PRODUCTION/main/esx/vmw/vib20/tools-light/VMware_locker_tools-light_10.3.10.12406962-14141615.vib
Connecting to hostupdate.vmware.com (92.123.124.29:80)

After downloading the VIB to /tmp, run the update again:

esxcli software profile update -p ESXi-6.7.0-20190802001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml

and now it completes fine.

The one-and-a-half metres: Corona

My weblog would not be my weblog if, in all the years I have kept it, I did not write something about a piece of history in the making.

In this piece I ask myself one thing: how useful, less useful, or pointless is the one-and-a-half-metre distancing rule really?

With one and a half metres of distance, the chance of getting infected is (so it is claimed) lower, because the chance of being infected directly by someone carrying a virus (I deliberately write: a virus) is smaller when you keep your distance. However: there is a chance that someone walks outside, sneezes, a gust of wind carries the sneeze along, and it eventually reaches you through the air.

Now you are in a TV studio, sitting one and a half metres apart. You talk normally, you behave normally, but now your segment is over and you have to move. At that moment someone else sits down in your spot. You yourself walk through the other person's 'air' and take a seat.

Is that not a false sense of security? Of course I understand that distance reduces the chance, but suppose you are in a shop and pass someone shoulder to shoulder because a shopping trolley is in the way; you touch each other and walk on. Five minutes later someone sneezes loudly in the shop, and five seconds after that you walk right through the spot where that person was sneezing. How safe is that?

What is the chance in the first case, and what is the chance in the second?

 

 

Making a timelapse and automatically posting it to YouTube

An apartment complex is being built in front of our house: Het Quadrant in Apeldoorn, on the Laan van Zonnehoeve near Station de Maten. Because, as a nerd and geek, I find it interesting to watch everything that is going on, I mounted a Unifi Flex G3 camera overlooking the construction of this new complex. In the background you can currently see the demolition of the Americahal.

Every x seconds one photo is taken, and once a day these photos are strung together and turned into a video. This way every day is summarized in 3 minutes and 51 seconds. On top of that, every Friday a timelapse is generated that runs from the start of the construction up to the current moment. That video will eventually be about 15 minutes long (depending on how long the construction takes).
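The numbers behind the 3 minutes and 51 seconds work out as follows, assuming a typical 25 fps playback rate (the post only gives the video length, so the fps is my assumption):

```shell
FPS=25                          # assumed playback frame rate
VIDEO_SECONDS=$((3 * 60 + 51))  # the daily video is 3m51s = 231 s
FRAMES=$((VIDEO_SECONDS * FPS)) # 231 * 25 = 5775 frames per day
INTERVAL=$((86400 / FRAMES))    # seconds of real time each frame covers
echo "$FRAMES frames, roughly one snapshot every $INTERVAL s"
```

So "every x seconds" comes down to a snapshot roughly every 14-15 seconds under this assumption.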

Want to watch? See the YouTube channel here.

Everything is set up to run automatically. So as long as everything keeps working, everything happens by itself; I do not have to do a thing.

What exactly is done, then?

  • Every X seconds a JPG image is stored on a virtual Linux server
  • Every day at 15:00 a timelapse is generated from the previous day's photos
  • The video is assembled together with a piece of royalty-free music
  • Around 15:45 every day this video is uploaded to YouTube
  • The 24 hours are summarized in 3 minutes and 51 seconds, so every day you can quickly see what was done on the construction site
  • Every x minutes a second photo is also taken. It is stored in a different location on the virtual Linux server and used to build a second timelapse
  • That is how, every Friday at 08:00, a timelapse is generated that yields a video following the entire construction from start to finish.

What was used to make this possible

  • Unifi Flex G3 camera (IP)
  • Linux server
  • Timelapse script
  • YouTube upload script via the API
  • a NAS to store the photos. NB: the daily photos are deleted after a timelapse has been made; only the photos for the 'weekly' timelapse that follows the entire construction are kept for a longer time
  • Crontab entries to process everything automatically
  • one Sunday afternoon to set it all up
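The schedule above would map onto crontab entries roughly like these (the script names and paths are invented for illustration; only the times come from this post):

```
# daily: build the timelapse at 15:00, upload to YouTube around 15:45
0 15 * * *  /opt/timelapse/make_daily.sh
45 15 * * * /opt/timelapse/upload_youtube.sh
# weekly: full start-of-build-until-now timelapse, Fridays at 08:00
0 8 * * 5   /opt/timelapse/make_weekly.sh
```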

 

Using Mail-in-a-Box with rsync SSH backup on a different port

Mail-in-a-Box has an option to send backups to another system over rsync, by default through port 22. Many users who want to store Mail-in-a-Box backups need a different SSH port than 22; the change can be made as follows.

Go to /mailinabox/management

nano -w backup.py

Find the lines (around 17 to 20) that start with:

rsync_ssh_options = [
"--ssh-options= -i /root/.ssh/id_rsa_miab",
"--rsync-options= -e \"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p 22 -i /root/.ssh/id_rsa_miab\"",
]

In my setup the first ssh-options line must not be active, so put a # in front of it.

Then change the -p 22 rsync option to -p xxxx, where xxxx is the port your rsync/SSH target listens on. Unfortunately this cannot be set through the admin GUI.
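Applied to the snippet above, the edited block in backup.py would look like this (2222 merely stands in for your custom port):

```python
# rsync_ssh_options in /mailinabox/management/backup.py after the edit:
# the ssh-options line is commented out and -p 22 became -p 2222.
rsync_ssh_options = [
    # "--ssh-options= -i /root/.ssh/id_rsa_miab",
    "--rsync-options= -e \"/usr/bin/ssh -oStrictHostKeyChecking=no -oBatchMode=yes -p 2222 -i /root/.ssh/id_rsa_miab\"",
]
```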

Notice: when updating Mail-in-a-Box you first need to do the following:

Go to ./mailinabox

Run git stash so that git accepts your local changes and can overwrite them. After the upgrade of Mail-in-a-Box you have to make this change again.

Video

Timelapse with Unifi G3 Flex camera

I am a timelapse 'lover' and I like to make timelapses. The problem is usually that I do not have the time or patience to set up a camera, put it on a slow-moving rail, and watch things happen. Hell, I do not even have the proper software tools, but I am still a fan in search of ways to make simple timelapses around the house.

Last month I bought a Unifi G3 Flex camera. It is a PoE (Power over Ethernet) camera connected to my wired network. I placed it outside in front of my home, where a construction site will soon appear, and I want to see if I can make a timelapse during the build of the new apartments that will rise there. So the experiments began.

First I searched for a Linux shell script to take images from my Unifi G3 Flex camera. I found one on GitHub, altered it to my needs, and re-published it on my own GitHub. I am still testing several settings, because if you want to capture images of a construction site over a very long period you need fewer images per day than when filming moving clouds, which needs at least one image every 30 seconds.

So check my GitHub if you want to see the script, and see here a YouTube example made with the script and ffmpeg.