Stretch and dropbear: important upgrade notes

I have often used cryptsetup + dropbear in the initramfs. The integration in Debian was sometimes “strange”, but reading the docs was enough. With stretch I ran into some problems; I found the solutions too, so I hope this will be useful.

authorized_keys path

dropbear in the initramfs will only allow login as root. The keys used to be read from /etc/initramfs-tools/root/.ssh/authorized_keys

That path was weird, and it changed to /etc/dropbear-initramfs/authorized_keys. This is more logical, but AFAIK undocumented and without backward compatibility.
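If you are upgrading, something like this should migrate the keys (run as root; paths as above):

mkdir -p /etc/dropbear-initramfs
cp /etc/initramfs-tools/root/.ssh/authorized_keys /etc/dropbear-initramfs/authorized_keys
update-initramfs -u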

UUID for root partition in fstab is not supported

Yep. You’d better use /dev/mapper/cryptoroot as the device path; it’s just as reliable as the UUID.
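For example, the fstab entry for the root filesystem becomes something like this (ext4 and the mount options here are just an assumption; use whatever you had):

/dev/mapper/cryptoroot  /  ext4  errors=remount-ro  0  1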

use the same mapper name on the install system and the installed system

That’s hard to explain. I was installing Debian from a rescue system using debootstrap, then manually editing the relevant files, then running

chroot /mnt/ update-initramfs -u -k all

Unfortunately, I had mounted /mnt/ using cryptsetup open /dev/sda1 root, but /mnt/etc/crypttab referred to the same device as cryptoroot. update-initramfs doesn’t like this discrepancy.
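The fix is simply to open the device under the same name crypttab expects:

# name the mapping exactly as /mnt/etc/crypttab does
cryptsetup open /dev/sda1 cryptoroot
mount /dev/mapper/cryptoroot /mnt
chroot /mnt/ update-initramfs -u -k all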

ethernet persistent naming

I know, that’s not a bug, and it’s widely known. However, I find some edges confusing. For example, the initramfs called my interface “eth0”, but on the main system it was “enp3s0”. I would also have liked an easy way to know what name my ethernet interface would get with persistent naming enabled.
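As far as I can tell, udev itself can answer that; something like this should print the candidate persistent names for a running interface:

udevadm test-builtin net_id /sys/class/net/eth0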

logging

I’m not totally sure about this, because I was testing and maybe my conclusions are wrong. However, at first boot it did not log anything. When I created /var/log/journal, logs finally appeared. I think this doesn’t match the documentation, which states that creating /var/log/journal is only needed to log in journald format, not to log to /var/log/syslog or /var/log/messages.
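For reference, the workaround boils down to this; the systemd-tmpfiles step is what the journald documentation suggests to fix up ownership and ACLs on the new directory:

mkdir -p /var/log/journal
systemd-tmpfiles --create --prefix /var/log/journal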

What’s in a VPN? Posing!

So you have tincd running. You’d better add

GraphDumpFile = /tmp/mynet.dot

to its configuration.

After that, you can put this script in your user’s crontab, to run every minute:

#!/bin/bash

# render the tinc graph and set it as the desktop background
src=/tmp/mynet.dot
export DISPLAY=:0

[[ -f "$src" ]] || exit

dot -Ksfdp -Gratio=1.7 -Gsize=18,18 -Gbgcolor='#0000aa' -Ncolor=red -Nfontcolor=orange -Ecolor=yellow -Edir=none "$src" -Tpng | feh --bg-center -
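The crontab entry then looks like this (the script path and name are hypothetical):

* * * * * /home/user/bin/tinc-graph.sh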

What do you have now? An ugly desktop background on which you can watch your VPN graph, for VPN’s sake.

Technical cookies

The Garante della privacy (the Italian data protection authority) says (see the ruling of May 8) that there are technical cookies and profiling cookies. And they’re not the same thing at all, oh no.

it is necessary to distinguish them (given that there are no technical characteristics differentiating one from the other) precisely on the basis of the purposes pursued by those who use them [omitted] In this regard, and for the purposes of this provision, two macro-categories are therefore identified: “technical” cookies and “profiling” cookies.

OK, so they distinguish them. And among technical cookies they include the

analytics cookies, assimilated to technical cookies when used directly by the site operator to collect information, in aggregate form, on the number of users and on how they visit the site itself

whereas

profiling cookies are aimed at creating profiles of the user and are used to send advertising messages in line with the preferences expressed by the same user while browsing the web

Archive maildir

How do you handle archiving old mails in your maildir?

#!/usr/bin/env zsh
# default: archive everything under <from>
where=(--where .)
zparseopts -K -E -D -A opts -- '-where:=where' '-days:'
# default age threshold: 30 days
if [[ -z ${opts[--days]} ]]; then
    opts[--days]=30
fi
if [[ $# != 2 ]]; then
    cat <<EOUSAGE >&2
Usage: $0 [options] from to
Moves old emails

Options:
    --where PATH Path to move emails from; that is, move <from>/PATH/x/y/z to <to>/PATH/x/y/z
                 If omitted, assumes '.': every email inside <from> will be moved
    --days N     Only move emails older than N days (default: 30)
EOUSAGE
    exit 1
fi

from=$1
to=$(realpath "$2")

cd "$from" || exit 1
echo find "${where[2]}" -type f -mtime "+${opts[--days]}" -print
find "${where[2]}" -type f -mtime "+${opts[--days]}" -print |
while IFS= read -r f; do
    mkdir -p "$(dirname "$to/$f")"
    mv "$f" "$to/$f"
done

And then I run archive_maildir.sh ~/Mail ~/Mail/Archive --where folder_I_want_to_archive --days 15

NOTE: my setup is that ~/Mail is indexed by notmuch, with ~/Mail/accountname for each account I have. Moving mail to ~/Mail/Archive/ does not cause any problem with notmuch.

today’s buzzwords: gitolite v3, gitweb, nginx, gitolite-shell

NOTE: This article has not been written by me. The author is someone who is able to do all this magic stuff, but still hasn’t managed to create an account on noblogs.org. Go figure.

We will set up gitolite with wild repos, giving a structure like CREATOR/..* : the admin manages the keys, and everyone can have her own repositories and permission config (given the roles supplied by the administrator). The hosting user for gitolite will be `git`, with home directory /srv/vcs/git/, and the one for nginx will be `www-data`, with document root in /srv/www/hostname/ (put www-data in the git group with gpasswd -a www-data git).

We will have key-authenticated repositories over ssh, and anonymous ones over https (if you like, you could switch on basic auth in nginx and get authenticated gitolite access over https out of the box).

To create a new repo a user gives the gitolite admin her ssh keys, and then does

git clone git@hostname:user/repo.git

Now, to give user2 read permission, she does

ssh git@hostname perms user/repo + READERS user2

Finally, to have a repo visible in gitweb with anonymous clone, she does

ssh git@hostname perms user/repo + READERS @PUB

In Debian wheezy there is still no gitolite 3; pull it from source and install it.
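A minimal sketch of the source install, run as the git hosting user (it assumes the upstream repo, that ~/bin is on the user’s PATH, and that admin.pub is the administrator’s public key):

git clone https://github.com/sitaramc/gitolite
gitolite/install -ln ~/bin
gitolite setup -pk admin.pub

Then configure a gitolite-admin conf like this: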

repo gitolite-admin
    RW+     =   admin

@PUB = gitweb daemon

repo CREATOR/..*
    C       =   @all
    RW+D    =   CREATOR
    RW      =   WRITERS
    R       =   READERS

    config  gitweb.url = git clone https://hostname/%GL_REPO.git
    config  gitweb.owner = %GL_CREATOR
    config  receive.denyNonFastforwards = true

In /srv/vcs/git/.gitolite.rc, modify:

UMASK                       =>  0027,
GIT_CONFIG_KEYS             =>  'gitweb\.(owner|description|category|url) receive.denyNonFastforwards receive.denyDeletes',
COMMANDS                    =>
    {
        'help'              =>  1,
        'desc'              =>  1,
        # 'fork'            =>  1,
        'info'              =>  1,
        # 'mirror'          =>  1,
        'perms'             =>  1,
        # 'sskm'            =>  1,
        'writable'          =>  1,
        'D'                 =>  1,
    },

sskm is useful to let users rotate the ssh keys they supplied; activate it if needed.
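If I read the gitolite docs correctly, key rotation with sskm then looks roughly like this (newkey is just a placeholder name):

# propose the new key, authenticating with the old one
cat newkey.pub | ssh git@hostname sskm add newkey
# confirm, authenticating with the NEW key
ssh -i ~/.ssh/newkey git@hostname sskm confirm-add newkey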

Now configure gitweb to use gitolite-shell to manage authorization, and put these files in /srv/www/hostname/ : for example, put there gitweb.conf, site/gitweb.css, site/gitweb.js, and so on.

# ----------------------------------------------------------------------

# Per-repo authorization for gitweb using gitolite v3 access rules
# Read comments, modify code as needed, and include in gitweb.conf

# Please note that the author does not have personal experience with gitweb
# and does not use it.  Some testing may be required.  Patches welcome but
# please make sure they are tested against a "github" version of gitolite and
# not an RPM or a DEB, for obvious reasons.

# ----------------------------------------------------------------------

# First, run 'gitolite query-rc -a' (as the gitolite hosting user) to find the
# values for GL_BINDIR and GL_LIBDIR in your installation.  Then use those
# values in the code below:

BEGIN {
    $ENV{HOME} = "/srv/vcs/git/";   # or whatever is the hosting user's $HOME
    $ENV{GL_BINDIR} = "/usr/share/gitolite";
    $ENV{GL_LIBDIR} = "/usr/share/gitolite/lib";
}

# Pull in gitolite's perl API module.  Among other things, this also sets the
# GL_REPO_BASE environment variable.
use lib $ENV{GL_LIBDIR};
use Gitolite::Easy;
use CGI;
my $cgi = new CGI;

# Set projectroot for gitweb.  If you already set it earlier in gitweb.conf
# you don't need this but please make sure the path you used is the same as
# the value of GL_REPO_BASE in the 'gitolite query-rc -a' output above.
$projectroot = $ENV{GL_REPO_BASE};

# Now get the user name.  Unauthenticated clients will be deemed to be the
# 'gitweb' user so make sure gitolite's conf file does not allow that user to
# see anything sensitive.
$ENV{GL_USER} = $cgi->remote_user || "gitweb";

$export_auth_hook = sub {
    my $repo = shift;
    # gitweb passes us the full repo path; we need to strip the beginning and
    # the end, to get the repo name as it is specified in gitolite conf
    return unless $repo =~ s/^\Q$projectroot\E\/?(.+)\.git$/$1/;

    # call Easy.pm's 'can_read' function
    return can_read($repo);
};

# stylesheet to use
$stylesheet = "gitweb.css";

# javascript code for gitweb
$javascript = "gitweb.js";

# logo to use
$logo = "git-logo.png";

# the 'favicon'
$favicon = "git-favicon.png";

# enable nicer uris
$feature{pathinfo}{default} = [1];
# root link text
$home_link_str = 'hostname';
$site_name = 'hostname';

Now configure nginx to serve gitweb and to use gitolite-shell for anonymous clones; note that the paths to gitweb.cgi and gitolite-shell below are the Debian defaults, so adjust them if yours differ.

server {
    listen   [::]:80;
    server_name hostname;
    rewrite ^ https://$uri permanent;
}
server {
    listen [::]:443 ssl;

    ssl_certificate /etc/ssl/private/nginx/web.pem;
    ssl_certificate_key /etc/ssl/private/nginx/web.key;

    ssl_session_timeout 5m;

    ssl_protocols SSLv3 TLSv1;
    ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP;
    ssl_prefer_server_ciphers on;

    server_name hostname;

    access_log /srv/www/hostname/log/nginx/access.log;
    error_log /srv/www/hostname/log/nginx/error.log;

    root /srv/www/hostname/;

    # static repo files for cloning over https
    # requests that need to go to git-http-backend
    location ~ ^(.*\.git/objects/([0-9a-f]+/[0-9a-f]+|pack/pack-[0-9a-f]+.(pack|idx)))|(^.*\.git/(HEAD|info/refs|objects/info/.*|git-(upload|receive)-pack))$ {
        root /srv/vcs/git/;

        fastcgi_param HTTPS on;
        fastcgi_param SCRIPT_FILENAME   /usr/bin/gitolite-shell;
        fastcgi_param PATH_INFO         $uri;
        fastcgi_param GITOLITE_HTTP_HOME  /srv/vcs/git;
        fastcgi_param GIT_PROJECT_ROOT  /srv/vcs/git/repositories;
        if ($remote_user = "") {
            set $user "gitweb";
        }
        if ($remote_user != "") {
            set $user $remote_user;
        }

        fastcgi_param REMOTE_USER $user;
        include fastcgi_params;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
    }

    location ~ .cgi {
        try_files @gitweb 404.html;
    }
    location ~ ^/site/ {
        try_files $uri 404.html;
    }
    # send anything else to gitweb if it's not a real file
    location /  {
        fastcgi_param HTTPS on;
        fastcgi_param SCRIPT_FILENAME   /usr/share/gitweb/gitweb.cgi;
        fastcgi_param PATH_INFO         $uri;
        fastcgi_param AUTH_USER $remote_user;
        fastcgi_param REMOTE_USER $remote_user;
        fastcgi_param GITWEB_CONFIG     /srv/www/hostname/gitweb.conf;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
   }
}

screenshots: birth of a little script

A buddy of mine posts screenshots all the time, and I’m envious because he’s fast. How cool, I want that too. He made himself a script: press a key, select a rectangle, and off it goes; it uploads the image and puts the URL in his clipboard.

But I can’t just copy his setup, because he uses Dropbox and I don’t. I want something that needs no login.

I find imgurbash, which takes an image, puts it on imgur, and copies the link to the clipboard. Just what I need.
I try it out: it works.
I decide to wrap it in a script that “logs” its output, so that in the future I have the deletion link available.
It stops working. That is, it no longer copies to the clipboard; the rest works.
That script, clearly written with someone’s feet, does not work if you launch it from another script. Brilliant.

Fine, I look for another one. One is written in Haskell, one in Ruby. The others even have directories and a README in their repository; I don’t even want to know how they’re written, they’re obviously doing it wrong. After all, imgurbash was rather nice, even though its xsel call doesn’t work at all.

So, in my script I reimplement the code around xsel, and everything works. That is, my script calls imgurbash, which uploads and then runs an xsel that does nothing, and then it calls xsel again to make it actually work. Elegant, huh?

Now I hook it into xbindkeys. It doesn’t work.

Right, because scrot apparently refuses to work when called from xbindkeys. It’s a known problem.

So let’s switch to import, from the imagemagick suite. This import even seems better: if you click, it grabs the whole window, and if you drag, you get a rectangle. Beautiful.
With scrot, though, pressing Esc makes it quit, while import just won’t die. And no key will save you: Ctrl+C, right clicks, left clicks, whatever. If you press the wrong key in front of confidential stuff, it’s only fair that the whole world gets to know about it.

So let’s go with pkill, who cares: on startup the script checks whether an import is already running and kills it. So to abort a screenshot, you just launch the tool a second time.

Look at the result:

#!/usr/bin/env bash

# an import already running means this is the "abort" invocation: kill it and quit
if pkill -x import --uid $UID; then
    exit 0
fi

filename="/tmp/$(date '+%F-%R:%S').jpg"
# click grabs a whole window, dragging selects a rectangle
import -quality 90 -silent "$filename"
[ -f "$filename" ] || exit 1
url=$(imgurbash "$filename") || exit 1
# imgurbash's own xsel invocation does not work from here, so do it again ourselves
xsel --clear -b
echo -n "$url" | xsel -b
tee -a ~/.imgur.log <<<$url > /dev/null
notify-send -u low "imgur" "$url"
exit 0
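For reference, binding it in ~/.xbindkeysrc takes just two lines (the script path and the key combo are examples):

"$HOME/bin/screenshot-imgur.sh"
  Mod4 + s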

Now, this little script is half crappy, even though it’s cool to have stuff like this.

But it’s the classic evening that starts with a 2-minute one-liner and ends with swearing, discoveries of esoteric commands, and tangles.

Being a maniac is hard work.

route tables: powerful and selective use of VPNs

Background

I regularly use a VPN. Unfortunately there are some sites (e.g. http://grooveshark.com ) that are reachable from my plain connection, but not from the VPN’s (yes, the opposite of the typical case).
I need to handle them.

Rules

Just ip-rule

Routing tables are cool. Here’s how to force every connection to $IP to use 192.168.1.1 as gateway. This will bypass any other gateway (typically, a VPN):

echo 200 forcelocal | sudo tee -a /etc/iproute2/rt_tables
sudo ip route add to default via 192.168.1.1 dev eth0 table forcelocal
sudo ip rule add to $IP table forcelocal

Of course, the last line can be repeated for any IP you want to “enforce”.
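You can verify which route a destination will actually take with:

ip route get $IP

which should now show 192.168.1.1 as the gateway for the enforced IPs.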

More advanced filters

iptables can filter in such a flexible way that we want to use it too. So:

sudo iptables -t mangle -A OUTPUT -m owner --uid-owner 42 -j MARK --set-mark 1
sudo iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source=$YOUR_LOCAL_IP
sudo ip rule add fwmark 1 pri 100 table forcelocal

Here, 42 is the UID of “privoxy”, so that we can run a SOCKS proxy that only has access to the “direct” connection. You can use any user (so you can create a “novpn” user) or any iptables rule. Powerful and simple.
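For example, a sketch of the “novpn” user idea (the user name and the test URL are just examples; the SNAT and ip rule lines above stay the same):

sudo useradd -M -s /usr/sbin/nologin novpn
sudo iptables -t mangle -A OUTPUT -m owner --uid-owner novpn -j MARK --set-mark 1
# anything run as novpn now bypasses the VPN
sudo -u novpn curl https://ifconfig.me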

An opt-out configuration (based on firefox+privoxy)

Usually, I want my traffic to be tunneled through VPN. Sometimes I don’t.

So I set up a local privoxy instance. Since privoxy’s UID is “forcelocal”-ed, its traffic goes out directly.

If I want to force a local connection for a website, I configure FireProxy to use the local privoxy instance as the proxy for that site. This is easy and works well, and I can easily switch behaviour if I need to.

If I want to force a local connection for a different protocol/application, I just write an iptables rule that matches it. An example is bittorrent, for which I still don’t have a solution.

mutt, hooks, boredom

Mutt hooks are great: although very simple, they are flexible enough for my needs. However, I hate writing them. I find it really boring.

So I came up with a “compiler” for mutt hooks, starting from a simple YAML file. You can find it on gist.github

The YAML looks like this:

- domains: [your_university_doma.in, your_company_doma.in]
  from: 'Name Surname <my business email@gmail.com>'
- domains: [gmail.com, hotmail.*]
  from: 'sk4teNB33r <lulz@acideat.er>'
- addresses: [guy1@doma.in, some@one.else, dearly@belov.ed]
  from: 'oh another address <ano@th.er>'
- addresses: [paranoid@foo.zap, crypto@reb.el]
  from: 'secret address <hidden@na.me>'
  # Mails for these people will be encrypted and signed, for maximum security
  gpg: 'crypt sign'

and you can just run

python2 make_hooks.py hooks.yaml > hooks.mutt
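The generated hooks.mutt should then contain plain send-hooks; as a rough guess at the output for the first and last entries above (this is an illustration of the idea, not a verbatim dump):

send-hook '~t your_university_doma\.in' 'set from="Name Surname <my business email@gmail.com>"'
send-hook '~t paranoid@foo\.zap' 'set from="secret address <hidden@na.me>" pgp_autoencrypt pgp_autosign'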

Then in your muttrc you can add

send-hook . 'reset pgp_autoencrypt'
source hooks.mutt

and you’re done.

Deploy nowiki

Deploying nowiki is supposed to be easy; I wrote it myself and still didn’t know how to do it, so now I’m sharing.

Install

apt-get install nginx-full python-virtualenv git uwsgi uwsgi-plugin-python uwsgi-plugin-syslog
cd /var
git clone git://github.com/boyska/nowiki
cd nowiki
sudo -u nowiki virtualenv --no-site-packages venv
source venv/bin/activate
pip install -r requirements.txt
deactivate
cd nowiki
cp nowiki.example.cfg nowiki.cfg
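Note: the steps above assume a nowiki system user already exists; if it doesn’t, something like this should create it (flags per Debian’s adduser):

adduser --system --home /var/nowiki --no-create-home --shell /bin/false nowiki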

Configuration

Configuring nowiki

The nowiki.cfg file lets you change some basic settings.

It’s then important to set permissions carefully, so that uwsgi can only read nowiki’s files, with the exception of /data/, which it must be able to write.

chown nowiki:nogroup /var/nowiki -R
chmod g-w /var/nowiki -R
chmod o-rwx /var/nowiki -R
chown nobody /var/nowiki/nowiki/data -R
chmod ug+rwX /var/nowiki/nowiki/data -R

Configuring uwsgi

In /etc/uwsgi/apps-enabled/uwsgi-nowiki.ini:

[uwsgi]
socket = /run/uwsgi/nowiki.sock
chdir = /var/nowiki/nowiki
virtualenv = /var/nowiki/venv
module = nowiki
callable = app
uid = nobody
gid = nogroup
chmod = 600
chown-socket = www-data
plugins = python,syslog
log-syslog = uwsgi-nowiki

In /etc/uwsgi/emperor.ini it’s enough to put:

[uwsgi]
emperor = /etc/uwsgi/apps-enabled
uid = nobody
gid = nogroup
chown-socket = www-data
master = true

Then systemd must be configured to manage uwsgi’s startup, by putting this in /etc/systemd/system/emperor.uwsgi.service. Note: uwsgi could be started with socket activation (see ListenStream), but I couldn’t figure out whether there’s a decent way to do that without listing every involved socket inside emperor.uwsgi.socket. So I didn’t do it; it didn’t seem clean.

[Unit]
Description=uWSGI Emperor
After=syslog.target

[Service]
ExecStart=/usr/bin/uwsgi --ini /etc/uwsgi/emperor.ini
ExecStartPre=/bin/mkdir -p /run/uwsgi
ExecStartPre=/bin/chown nobody:nogroup /run/uwsgi
Restart=always
Type=notify
StandardError=syslog
NotifyAccess=main

[Install]
WantedBy=multi-user.target
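After that, reload systemd and enable the unit:

systemctl daemon-reload
systemctl enable emperor.uwsgi.service
systemctl start emperor.uwsgi.service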

Configuring nginx

server  {
        listen 80;
        rewrite ^/nowiki$ /nowiki/ permanent;

        location /nowiki {
            # try_files $uri @nowiki; (with a matching "location @nowiki" block) would also work
            include uwsgi_params;
            uwsgi_param SCRIPT_NAME /nowiki;
            uwsgi_modifier1 30;
            uwsgi_pass unix:/run/uwsgi/nowiki.sock;
        }
}