Syncthing – cloud synchronization without the cloud

I would never store my important data in the cloud, so please forgive the title, which might be misleading.

I had heard about Syncthing some time ago, and recently I decided to give it a try. Earlier, I used tools like bsync, unison, and others, but I was never truly satisfied with any of them and often found myself returning to rsync. With Syncthing, however, I have now found a tool that I want to roll out across my entire infrastructure.

The tool is available on Linux, Windows, and apparently also on macOS. I couldn’t test the latter two operating systems, but I believe it will work just as well on them.

I use it to synchronize data across different systems. And what can I say, it works wonderfully and very quickly.

In my setup, it was important that no external relay servers are used. This way, I can ensure that synchronization happens autonomously, which in my view also slightly improves security.
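Disabling relays is done in the GUI under Settings → Connections; in the configuration file config.xml it corresponds, as far as I can tell, to this option inside the options block (shown as a fragment, not a complete file):

```xml
<options>
    <!-- do not route traffic through public relay servers -->
    <relaysEnabled>false</relaysEnabled>
</options>
```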

Equally important are the exclusion patterns for file types that should not be synchronized. In my case, these are output files from various compilers like gcc and g++, which usually generate a lot of object files (*.o).
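Such exclusions go into a file named .stignore in the root of the synced folder; a minimal sketch for the compiler output mentioned above:

```
// .stignore – lines starting with // are comments
*.o
```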

Fast transfers with dd and loop-devices

I often tweak various parameters when copying hard drives with dd. The tuning options may not be very prominent, but the transfer speed ultimately depends primarily on the hardware being used.

The following parameters have proven successful for me when working with loopback devices:

dd if=/dev/sdf2 of=/dev/loop0p2 status=progress bs=64K oflag=direct
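For completeness, a sketch of how such a partitioned loop device can be set up, assuming an image file disk.img with a partition table (requires root; device and file names are examples):

```shell
# -P (--partscan) makes the kernel scan the image's partition table,
# creating /dev/loop0p1, /dev/loop0p2, ...
losetup -P /dev/loop0 disk.img

# copy partition 2 of the physical disk into partition 2 of the image
dd if=/dev/sdf2 of=/dev/loop0p2 status=progress bs=64K oflag=direct

# detach the loop device again
losetup -d /dev/loop0
```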

Emacs: Failed to verify signature archive-contents.sig

Today I got the message

Failed to verify signature archive-contents.sig

from my Emacs after typing package-list-packages. The full error is listed below:

Failed to verify signature archive-contents.sig:
No public key for 645357D2883A0966 created at 2024-04-22T11:05:03+0200 using EDDSA
Command output:
gpg: Signature made Mon Apr 22 11:05:03 2024 CEST
gpg: using EDDSA key 0327BE68D64D9A1A66859F15645357D2883A0966
gpg: Can't check signature: No public key

It seems that I was no longer able to install ELPA packages.

After some research, I found the hint to install gnu-elpa-keyring-update via the Emacs package manager to solve this problem.

That sounds easy, but before installing the package I had to temporarily set the following variable to disable the signature check:

(setq package-check-signature nil)

After that, I called package-list-packages again and installed
gnu-elpa-keyring-update.

To be on the safe side, I restarted Emacs, and the error was gone.
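For completeness, a sketch of how to revert the temporary override afterwards; as far as I know, allow-unsigned is Emacs' default value for this variable:

```elisp
;; Restore signature checking after gnu-elpa-keyring-update is installed.
;; allow-unsigned (the default) verifies signatures whenever they exist.
(setq package-check-signature 'allow-unsigned)
```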

Setting a static route on Debian Linux

Simple network routes can be set under Linux using the route command. Here is a simple example:

route add -net 192.168.20.0 netmask 255.255.255.0 gw 192.168.0.1

In order for this change to be active even after a restart, the file /etc/network/interfaces must be changed.

iface ens192 inet static
    address 192.168.0.156/24
    up /bin/ip route add 10.25.20.0/24 via 192.168.0.1
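As a sketch, the same stanza can also carry a matching down rule, so that the route is removed again when the interface goes down:

```
iface ens192 inet static
    address 192.168.0.156/24
    up   /bin/ip route add 10.25.20.0/24 via 192.168.0.1
    down /bin/ip route del 10.25.20.0/24 via 192.168.0.1
```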

How to change the environment of a running (parent) process under Linux?

This is not actually intended and there is no clean way to do it.

However, it is possible to change an environment variable of an existing process using a debugger.

Specifically, the debugger in question is gdb. It can attach to existing processes and carry out various operations, among them changing environment variables.

To connect to a running process you must have gdb installed and know the respective process ID (PID). In the example below, the ID is 2811:

gdb --pid=2811

The debugger then presents its (gdb) prompt, at which the environment variable can be set:

(gdb) call putenv ("MYVAR=1234")

In this case, the variable MYVAR of the attached process is set to the value 1234.

Please note that the process is stopped as soon as gdb is started with the --pid parameter. Once the variable has been set, the debugger can be quit again with the q and Enter keys – the process then continues running.
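If the step is to be scripted, gdb can also run non-interactively; the following one-liner is a sketch with the same PID as above (the (int) cast helps when gdb cannot infer putenv's return type):

```shell
# attach, set the variable, detach again – all in one batch run
gdb --batch --pid=2811 -ex 'call (int) putenv ("MYVAR=1234")'
```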

But the whole thing is just a dirty hack and should not be used in a production environment.

Usenet – still alive

In times of Facebook, Google and YouTube, why should it still be necessary to
rely on outdated technology like Usenet/NNTP? Portals like phpBB also invite you
to exchange ideas with other users.

So why should you use Usenet these days?

From my point of view, the answer is very simple: it is a tried and tested
standard, similar to email/SMTP, that hasn’t really changed for decades. Some
people may interpret this as a disadvantage, but for me it is clearly an
advantage, because I don’t constantly need new programs to write articles and I
don’t always have to worry about entering my login data into the latest
web browser version. There are also many people who still use Usenet on a daily
basis.

Of course, you can’t simply dress up the content with colorful pictures, and you
have to invest noticeably more brain power to put everything into
words. Ultimately, discussions with substance arise here – plain-text-based and
to the point.

I use Usenet in combination with Emacs and Gnus – it also fits seamlessly into
my email communication and allows me to work productively.

apt-file – Find programs in APT/Debian repositories

Recently, I noticed on a freshly installed server that there was no nslookup command. Luckily, Debian-like distributions have the apt-file tool.

This makes it possible to search for applications that are not yet installed but are in principle available in the APT repository. The command

apt-file search nslookup

quickly shed light on the matter and delivered the following output:

bash-completion: /usr/share/bash-completion/completions/nslookup
bind9-doc: /usr/share/doc/bind9-doc/arm/man.nslookup.html
dnsutils: /usr/bin/nslookup
dnsutils: /usr/share/man/man1/nslookup.1.gz
exim4-doc-html: /usr/share/doc/exim4-doc-html/spec_html/ch-the_dnslookup_router.html
fpc-source-3.0.4: /usr/share/fpcsrc/3.0.4/packages/fcl-net/examples/cnslookup.pp
libnet-nslookup-perl: /usr/share/doc/libnet-nslookup-perl/changelog.Debian.gz
libnet-nslookup-perl: /usr/share/doc/libnet-nslookup-perl/copyright
libnet-nslookup-perl: /usr/share/lintian/overrides/libnet-nslookup-perl
libxpa-dev: /usr/share/man/man3/xpanslookup.3.gz

So it was clear that the dnsutils package had to be installed
afterwards. Calling

apt-get install dnsutils

was enough and the nslookup tool was already available on my server.

If apt-file itself is not yet installed, it can be installed beforehand using

apt-get install apt-file

It is important that the cache is set up immediately afterwards. This is done using

apt-file update
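The reverse direction also works: apt-file can list the contents of a package that is not yet installed. For example:

```shell
# show every file shipped by the dnsutils package
apt-file list dnsutils
```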

Preventing the password prompt for SSH access on the client side

Especially with automated scripts that establish SSH connections to external systems, a password prompt inside the script is undesirable and can cause the script to block.

It is possible to simulate such password entries with the expect tool, but this is anything but secure. The best way is public-key authentication (e.g. with RSA keys): the public key must be entered on the target system to enable password-free access.

If you administer many systems and have forgotten to import a public key into a target system, you end up with the blocked-script problem again, because a password prompt is initiated as a fallback.

To prevent this, simply pass the -o BatchMode=yes parameter to the SSH client. It is generally advisable to use this parameter whenever ssh appears in a script. In practice, however, it is often overlooked, and the resulting password prompt keeps many a system from running its automated processes.

A typical command-line will look like this:

ssh -o BatchMode=yes www.my-sample-hostname.com

If you use SSH indirectly via rsync, the parameter can be passed as follows:

rsync -av -e "ssh -o BatchMode=yes" www.mysample.com:/src1 /dst1
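Instead of repeating the option in every script, it can also be placed in the client configuration (~/.ssh/config); the host alias below is a made-up example:

```
# ~/.ssh/config
Host backup
    HostName www.my-sample-hostname.com
    BatchMode yes
```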

Convert images to a single video using ffmpeg

There are several approaches here; the tricky part is getting the order of the images right. The easiest way is:

cat *.jpg | ffmpeg -f image2pipe -r 25 -vcodec mjpeg -i - test.mp4

A warning is issued here because the pipe is closed at the end and ffmpeg does not seem to recognize this cleanly.

Another method would be via image names which contain a sequential counter in the file name. If this is not the case, the file names can be changed as follows:

#!/bin/bash
# Rename all JPEGs in the current directory to a zero-padded
# sequential counter: 0001.jpg, 0002.jpg, ...
COUNTER=1
for i in *.jpg; do
    NEW_FILENAME=$(printf "%04d.jpg" "$COUNTER")
    mv -i -- "$i" "$NEW_FILENAME"
    COUNTER=$((COUNTER + 1))
done

The video can be created using:

ffmpeg -start_number 1 -i %04d.jpg -vcodec mpeg4 test.avi

In this case, the frame numbers are padded to four digits with leading zeros. If the counter needs more digits, the mask %04d must be widened accordingly.
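The padding behaviour of the mask can be checked quickly in the shell, since printf uses the same %04d notation that ffmpeg expands:

```shell
# %04d pads each number to four digits with leading zeros
printf "%04d.jpg\n" 1 25 1234
```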

Browse SMB/CIFS shares under Linux with smbclient

As an admin, you often want to find out whether CIFS shares can be reached from Linux and whether their permissions have been set up correctly.

The Linux program smbclient is suitable for this purpose. It is console-based, so checks can also be carried out from scripts.

A list of the shares offered by a host is requested using the following command:

smbclient -L 192.168.0.1 -U admin

An output, in this case on the host mm with the user mm, looks something like this:

aw@mm:~$ smbclient -L mm -U mm
WARNING: The "syslog" option is deprecated
Enter mm's password:
Domain=[WORKGROUP] OS=[Windows 6.1] Server=[Samba 4.5.16-Debian]

    Sharename       Type      Comment
    ---------       ----      -------
    storage         Disk      Storage
    dokumente       Disk      Dokumente
    mm              Disk      Multimedia
    elearning       Disk      eLearning
    IPC$            IPC       IPC Service (mm server)

If you want to connect to a share, you call the command smbclient as follows:

smbclient //mm/mm -U admin

In this case, the share mm is accessed on the host mm as the user admin. The command does not exit; instead, you find yourself in a special “browsing” mode, indicated by the prompt smb: >.

In “browsing” mode, you use the cd command to change directories and ls to list the contents of the current directory.

With the commands get and put, files can be downloaded or uploaded via CIFS.

Nothing more is needed to determine whether access via CIFS is possible and whether the necessary resources can be reached with the given permissions.
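For scripted checks, smbclient does not have to be used interactively; its -c option runs a command list and exits, as sketched here with the host, share, and user from the example above:

```shell
# non-interactive: list the share's root directory, then exit
smbclient //mm/mm -U admin -c 'ls'
```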