Self-signing Firefox addons

24 October 2016

I’m a heavy user of Pentadactyl – a Firefox addon that lets you use the browser without a mouse. Occasionally I need to compile it from source to make it work with a new Firefox release. In the past the compiled addon could be used right away, but for some time now Mozilla has required all addons to be signed before they can be installed in Firefox. Luckily it’s possible to self-sign an addon. It’s a simple and fast process, but still a bit annoying. Self-signing is done with a tool called jpm, provided by Mozilla. It can be installed with npm:

npm install jpm

Before using jpm to sign an addon you need to get an API key (the JWT issuer) and an API secret from https://addons.mozilla.org/developers/addon/api/key/. The signing command is:

$ ./node_modules/jpm/bin/jpm sign --api-key 'user:<KEY>' --api-secret \
 '<API_SECRET>' --xpi pentadactyl-1.2pre.xpi

During this process you can get several different errors. Some of them are:

Server response: You do not own this addon. (status: 403)
JPM [info] FAIL

Replace em:id="<EMAIL>" in install.rdf inside your addon’s XPI file with the e-mail address you used to register at addons.mozilla.org.
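The edit itself can be scripted; a minimal sketch (the addresses are placeholders, and in the real case install.rdf is first pulled out of the XPI with `unzip addon.xpi install.rdf` and put back afterwards with `zip addon.xpi install.rdf`):

```shell
# Stand-in install.rdf just for the demonstration; a real one comes
# from inside the addon's XPI archive.
printf '<em:id="OLD@example.org"/>\n' > install.rdf
# Point em:id at the e-mail address registered at addons.mozilla.org
# (placeholder address here).
sed -i 's/em:id="[^"]*"/em:id="you@example.com"/' install.rdf
grep em:id install.rdf   # prints <em:id="you@example.com"/>
```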

If you got this error:

Error: Received bad response from the server while requesting https://addons.mozilla.org/api/v3/addons/arkadiusz%40drabczyk.org/versions/1.44pre/

status: 401
response: {"detail":"Unknown JWT iss (issuer)."}

You used an incorrect API key or secret. Double-check both values.

If you got this error:

Error: Received bad response from the server while requesting https://addons.mozilla.org/api/v3/addons/arkadiusz%40drabczyk.org/versions/1.44pre/

status: 401
response: {"detail":"JWT iat (issued at time) is invalid. Make sure your system clock is synchronized with something like TLSdate."}

Every call to the Mozilla API relies on JSON Web Tokens, which have an expiration time set. Make sure your local clock is synchronized with an NTP server, for example:

sudo ntpdate ntp.ubuntu.com

If you got this error:

JPM [info] Signing XPI: /home/ja/dactyl/downloads/pentadactyl-1.2pre.xpi
Server response: Version already exists. (status: 409)
JPM [info] FAIL

Replace em:version=<VERSION> in install.rdf with a new unique version.

After successfully signing your XPI, jpm will automatically download the signed file to your local machine. You can now install it in Firefox without any problems.

Categories: firefox

Lesser known uses of grep

23 October 2016

In this article I am going to present a couple of interesting and lesser known uses of grep. I am not going to list grep options, because that is what the manpage is for, but instead show a couple of neat things that can be achieved with grep that seem useful but are not so straightforward. Let’s get started:

Show colored matches and the entire input

Sometimes it’s handy to see all matches in color together with the entire original input on one screen. For example, we have a long text divided into many sections and we want to read the whole section that contains a given match. We don’t know where in a section a match will occur, and we can’t use the -A or -B options because not all sections are the same length. To achieve what we want we can use grep like this:

$ man grep | grep --color=always -E 'pattern|' | less -R

This command will show the entire grep manpage with all occurrences of `pattern` in color. The trick is the empty alternative after |: every line matches it, so every line is printed, but only the literal matches are colored. The -R option makes less pass the color escape sequences through to the terminal.
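A quick way to see the trick in action on a throwaway file (path and contents are made up for the demonstration):

```shell
# Every line matches the empty branch of 'pattern|', so the entire file
# is printed; only the literal occurrences of "pattern" get colored.
printf 'alpha\nbeta pattern\ngamma\n' > /tmp/sample.txt
grep --color=always -E 'pattern|' /tmp/sample.txt | wc -l   # prints 3
```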

Start grepping from nth line:

Sometimes we want to start grepping a file from the n-th line, for example to omit a pre-defined header we don’t care about. We can achieve that with a little help from `tail`. For example, to start searching for a string from the 15th line:

$ tail -n +15 FILE | grep <PATTERN>
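For example, with a made-up file whose first three lines are a header:

```shell
# Three header lines followed by data; search only from line 4 onwards.
printf 'header1\nheader2\nheader3\nbody foo\nbody bar\n' > /tmp/hdr.txt
tail -n +4 /tmp/hdr.txt | grep foo   # prints "body foo"
```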

Detect line endings in a file:

We can use grep to detect line endings:

$ egrep -l $'\r'\$ *

The above command will print the names of all files that contain carriage returns, the line endings used in the Windows world. Note though that $'\r' is a bash feature and might not work in other shells.
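A self-contained demonstration (file names are arbitrary):

```shell
# One file with Unix line endings, one with DOS (CRLF) endings; -l
# prints only the name of the file whose lines end in a carriage return.
printf 'unix line\n'  > /tmp/unix.txt
printf 'dos line\r\n' > /tmp/dos.txt
grep -l $'\r'\$ /tmp/unix.txt /tmp/dos.txt   # prints /tmp/dos.txt
```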

Categories: cli, grep, linux

New Slackware!

14 July 2016

Finally, a new Slackware! 14.2 has been released. Time to upgrade. Since I started using Slackware in 2010 I’ve always used the official method described in UPGRADE.txt and never had any problems with it. This year, however, I decided to give the slackpkg method a try, especially because I now also use Slackware at work and don’t want any downtime; if I strictly followed the official way I’d have to switch to single-user mode for a longer while. The whole procedure turned out to be very simple and reliable and basically comes down to what is said here. I’ll just leave it here so I won’t have to look for it when the next Slackware is released.

  1. First, replace 14.1 with 14.2 in the mirror you already use in /etc/slackpkg/mirrors. For example, I use this mirror:

    ftp://mirrors.slackware.com:/slackware/slackware64-14.1/

    And now I changed it to:

    ftp://mirrors.slackware.com:/slackware/slackware64-14.2/

    Don’t forget the terminating '/', it must be there.

  2. Update list of available packages for 14.2:
    $ slackpkg update
    
  3. Upgrade slackpkg itself:
    $ slackpkg upgrade slackpkg
    

    When asked what to do with new config files:

    • /etc/slackpkg/mirrors.new – you can either remove it (R), as we already chose a valid mirror for 14.2 in step 1, or overwrite it (O) and select a mirror again before proceeding to the next step.
    • /etc/slackpkg/blacklist.new – we are going to use it later. If you have never modified /etc/slackpkg/blacklist by hand, just tell slackpkg to overwrite it. Otherwise, see what changes have been introduced and decide what’s best for you. Note that /etc/slackpkg/blacklist changed in slackpkg 2.82.1.
  4. After upgrading slackpkg, update list of available packages again:
    $ slackpkg update
    
  5. Now that we have a new slackpkg, a new /etc/slackpkg/blacklist and a fresh package list, we are going to add packages that should be upgraded outside of slackpkg to the blacklist. The most important packages to blacklist are the kernel packages. The point is that we don’t want slackpkg to remove our running kernel and end up with an unbootable system in case the new kernel doesn’t work on our hardware. Of course it’s always possible to let slackpkg replace the kernel and manually restore the last working official kernel later using a Live CD/USB in case of any problems, but that would just be more difficult and time-consuming.

    You should also consider blacklisting all packages that you upgraded outside of the official Slackware channel, because otherwise slackpkg will replace them with whatever it downloads without looking at the version number, so it can effectively downgrade the packages. For example, I use the latest-firefox.sh script to get the newest version of Firefox instead of relying on the ESR version included in stock Slackware, and for this reason I also added mozilla-firefox-* to the list of blacklisted packages.

    At the end of the day my /etc/slackpkg/blacklist is:

    # This is a blacklist file. Any packages listed here won't be
    # upgraded, removed, or installed by slackpkg.
    #
    # The correct syntax is:
    #
    # To blacklist the package xorg-server-1.6.3-x86_64-1 the line will be:
    # xorg-server
    #
    # DON'T put any space(s) before or after the package name or regexp.
    # If you do this, the blacklist will NOT work.
    
    #
    # Automated upgrade of kernel packages aren't a good idea (and you need to
    # run "lilo" after upgrade). If you think the same, uncomment the lines
    # below
    #
    kernel-firmware
    kernel-generic
    kernel-generic-smp
    kernel-headers
    kernel-huge
    kernel-huge-smp
    kernel-modules
    kernel-modules-smp
    kernel-source
    
    #
    # aaa_elflibs should NOT be blacklisted!
    #
    
    # You can blacklist using regular expressions.
    #
    # Don't use *full* regex here, because all of the following
    # will be checked for the regex: series, name, version, arch,
    # build and fullname.
    #
    # This one will blacklist all SBo packages:
    #[0-9]+_SBo
    mozilla-firefox-*
    
  6. Now we’re ready to install all new additions to Slackware and upgrade existing packages. This is the most time-consuming part of the upgrade process; it took more than an hour on my machine. Do:
    $ slackpkg upgrade glibc-solibs
    $ slackpkg install-new
    $ slackpkg upgrade-all
    

    Watch especially for startup scripts that have been modified, such as /etc/inittab or /etc/rc.d/rc.4. All changes can be undone if you have a backup of the old configuration files, but it takes time.

    Now cat /etc/slackware-version should already say:

    Slackware 14.2
    
  7. Now we are going to remove all packages that are no longer part of Slackware base. Before doing that it’s a good idea to blacklist all packages you have installed outside of slackpkg, for example manually or via slackbuilds.org or you will have to skip them manually. For example, I uncommented the following entry in /etc/slackpkg/blacklist to blacklist all packages from slackbuilds.org:
    [0-9]+_SBo
    

    Now do this:

    $ slackpkg clean-system
    

    If you’re not sure whether a given package has been installed by you or came with previous Slackware release check if it’s listed as removed in CHANGES_AND_HINTS.TXT.

  8. Finally, we are going to upgrade the kernel. First, comment out again the 'kernel-*' entries in /etc/slackpkg/blacklist that we had uncommented in step 5, and download the new kernel packages:
    $ slackpkg download kernel
    

    Install them manually:

    $ installpkg /var/cache/packages/slackware64/a/kernel*txz
    

    Now you should have 2 kernels in /boot:

    $ ls -Alhtr /boot/vmlinuz*
    -rw-r--r-- 1 root root 3.3M Feb 14  2014 /boot/vmlinuz-generic-3.10.17
    -rw-r--r-- 1 root root 6.2M Feb 14  2014 /boot/vmlinuz-huge-3.10.17
    -rw-r--r-- 1 root root 4.2M Jun 24 10:31 /boot/vmlinuz-generic-4.4.14
    -rw-r--r-- 1 root root 7.3M Jun 24 10:38 /boot/vmlinuz-huge-4.4.14
    lrwxrwxrwx 1 root root   22 Aug 19 21:18 /boot/vmlinuz-generic -> vmlinuz-generic-4.4.14
    lrwxrwxrwx 1 root root   19 Aug 19 21:18 /boot/vmlinuz-huge -> vmlinuz-huge-4.4.14
    lrwxrwxrwx 1 root root   19 Aug 19 21:18 /boot/vmlinuz -> vmlinuz-huge-4.4.14
    

    Note that /boot/vmlinuz which was previously pointing to vmlinuz-huge-3.10.17 is now pointing to a new kernel.

    In your /etc/lilo.conf you should already have something like this:

    image = /boot/vmlinuz
      root = /dev/sda1
      label = Linux
      read-only
    

    That means that if you rebooted your machine now, lilo would start the new kernel. But as said previously, we want a quick way to fall back to the old kernel in case the new one doesn’t work properly. To do that, we need to temporarily restore the old entry and give it a unique name, for example:

    image = /boot/vmlinuz-huge-3.10.17
      root = /dev/sda1
      label = old
      read-only
    

    Add the above entry to your /etc/lilo.conf, run `lilo' to write the changes, reboot, and when the LILO screen shows up choose `Linux' as usual. If the system booted successfully, `uname' should show that you’re running kernel 4.4.14:

    $ uname -r
    4.4.14
    

    If the system didn’t boot successfully you can reboot, go back to the old kernel by choosing the `old' entry in the LILO menu, and take some time to figure out what doesn’t work with the new kernel.

    When you’re ready you can safely remove the 3.10.17 kernel packages:

    $ find  /var/log/packages/kernel*3*  | rev | cut -d / -f1 | rev | xargs slackpkg remove
    

    And you can also remove the `old' LILO entry from /etc/lilo.conf; we won’t need it any more.
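As an aside, the rev | cut -d / -f1 | rev part of the removal command above just strips the directory prefix, keeping the last path component (the same thing basename does); it can be tried on any path:

```shell
# Reversing the string makes the last '/'-separated field the first
# one, so cut -f1 grabs it; reversing again restores the package name.
printf '/var/log/packages/kernel-huge-3.10.17-x86_64-3\n' \
    | rev | cut -d / -f1 | rev   # prints kernel-huge-3.10.17-x86_64-3
```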

That’s it – you’re now running a new version of Slackware.

Now, you will notice that some packages installed from slackbuilds.org, or generally outside of the Slackware mainline, won’t start anymore, failing with errors such as:

ristretto: error while loading shared libraries: libxfce4util.so.6: cannot open shared object file: No such file or directory

This is because these packages are linked against old libraries that are no longer present on the system. To fix the problem, recompile the affected package on the updated system.

Categories: linux, slackware, upgrade

Fun with monitors

2 May 2016

Recently I got 2 Dell U2415 monitors. They are 16:10, which is rare today, very thin and quite nice overall. They come with DDC support, which allows their settings to be controlled by software, and this is where the fun starts. To use DDC on Linux you need a program called ddccontrol; it’s quite old but still does its job. One caveat for Slackware users: to build it on Slackware 14.1 you need to recompile pciutils with SHARED=yes, otherwise the configure script will complain that it cannot find libpci (on Slackware current the pciutils package is already built with SHARED=yes by default). Now that ddccontrol is installed we can start using it. First, let’s check if ddccontrol can detect our monitors:

$ sudo ddccontrol -p
ddccontrol version 0.4.2
Copyright 2004-2005 Oleg I. Vdovikin (oleg@cs.msu.su)
Copyright 2004-2006 Nicolas Boichat (nicolas@boichat.ch)
This program comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of this program under the terms of the GNU General Public License.

Probing for available monitors.I/O warning : failed to load external entity "/usr/share/ddccontrol-db/monitor/DELA0B9.xml"
Document not parsed successfully.
..I/O warning : failed to load external entity "/usr/share/ddccontrol-db/monitor/DELA0BA.xml"
Document not parsed successfully.
......
Detected monitors :
 - Device: dev:/dev/i2c-7
   DDC/CI supported: Yes
   Monitor Name: VESA standard monitor
   Input type: Digital
  (Automatically selected)
 - Device: dev:/dev/i2c-5
   DDC/CI supported: Yes
   Monitor Name: VESA standard monitor
   Input type: Digital

Here ddccontrol detected 2 monitors, as expected. Using their device identifiers we can list all the controls that can be driven through ddccontrol:

$ sudo ddccontrol -d dev:/dev/i2c-5

Controls (valid/current/max) [Description - Value name]:
Control 0x02: +/2/2 C [???]
Control 0x04: +/0/1 C [Restore Factory Defaults]
Control 0x05: +/0/1 C [Restore Brightness and Contrast]
Control 0x06: +/0/1   [???]
Control 0x08: +/0/1 C [Restore Factory Default Color]
Control 0x0b: +/100/0   [???]
Control 0x0e: +/100/0   [???]
Control 0x10: +/75/100 C [Brightness]
Control 0x12: +/75/100 C [Contrast]
Control 0x14: +/5/12 C [???]
Control 0x16: +/100/100 C [Red maximum level]
Control 0x18: +/100/100 C [Green maximum level]
Control 0x1a: +/100/100 C [Blue maximum level]
Control 0x1e: +/0/1   [???]
Control 0x1f: +/0/1   [???]
Control 0x20: +/0/1   [???]
Control 0x30: +/0/1   [???]
Control 0x3e: +/0/1   [???]
Control 0x52: +/242/255 C [???]
Control 0x60: +/17/18 C [Input Source Select]
Control 0x6c: +/17/18   [???]
Control 0x6e: +/17/18   [???]
Control 0x70: +/17/18   [???]
Control 0xaa: +/1/255 C [OSD Orientation - Landscape]
Control 0xac: +/8564/1 C [???]
Control 0xae: +/6000/65535 C [???]
Control 0xb2: +/1/1 C [???]
Control 0xb6: +/3/5 C [???]
Control 0xc0: +/8/65535   [???]
Control 0xc6: +/17868/65535 C [???]
Control 0xc8: +/4361/17 C [???]
Control 0xc9: +/258/65535 C [???]
Control 0xca: +/1/2   [???]
Control 0xcc: +/2/11   [???]
Control 0xd6: +/1/5 C [DPMS Control - On]
Control 0xdc: +/0/5 C [???]
Control 0xdf: +/513/65535 C [???]
Control 0xe0: +/0/1 C [???]
Control 0xe1: +/0/1 C [Power control - Off]
Control 0xe2: +/0/25 C [???]
Control 0xf0: +/0/255 C [???]
Control 0xf1: +/3/255 C [???]
Control 0xf2: +/0/255 C [???]
Control 0xfc: +/1/1   [???]
Control 0xfd: +/98/255 C [???]

As you can see, some controls are not described, but fortunately some are. Let’s try 0xe1, for example:

$ sudo ddccontrol -r 0xe1 -w 0 dev:/dev/i2c-5

After doing this the monitor went off. We can bring it back by writing 1 instead of 0:

$ sudo ddccontrol -r 0xe1 -w 1 dev:/dev/i2c-5

To set the brightness level to 100:

$ sudo ddccontrol -r 0x10 -w 100 dev:/dev/i2c-5

By the way, you can also change the screen brightness with the well-known xrandr utility, in the following way:

$ xrandr --output HDMI3 --brightness 0.5

However, in contrast to DDC, doing this will not modify your monitor’s internal settings; xrandr only adjusts the video output in software.

Categories: fun, monitors

Spammers everywhere

24 February 2016

Since I started this blog in June 2015 I have received as many as 60 spam comments from various strange people (or rather bots, I guess) who think that this blog is fantastic and that I should consider getting a loan, mattress dealers, SEO specialists, experts who know how to lose 10 pounds in a month, and others. I often wonder who replies to such kinds of posts or e-mails and how profitable this is for spammers. The Wikipedia article on spam says that:

According to the Message Anti-Abuse Working Group, the amount of spam email was between 88-92% of email messages sent in the first half of 2010.

90% of all e-mail is spam! And this article shows that spamming is profitable, because apparently there are still many naive people in this world.

Categories: spammers

Use magit with SOCKS proxy

23 February 2016

Sometimes it is necessary to use git in places where it’s blocked by overzealous firewalls. Instead of a successful push you end up with this:

ssh: connect to host github.com port 22: Connection timed out
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.

A known way to overcome this problem is tunneling the traffic through an external server with the SOCKS protocol over SSH using tsocks, possibly with autossh instead of regular ssh to make the connection persistent, for example:

autossh -M 15000 home -D 12000

In this example home is the name of our external server configured in ~/.ssh/config. We tell autossh to listen on port 15000 and to create a SOCKS proxy on port 12000; these are unprivileged ports, so we don’t need sudo. To actually use the SOCKS proxy we need tsocks. First we have to tell tsocks the address and port of our SOCKS proxy. Add this to /etc/tsocks.conf:

server = 127.0.0.1
server_port = 12000

Using tsocks is very simple – just put it before every command you want to tunnel like this:

tsocks git push

Now, how do we make magit use git in conjunction with tsocks? It’s not correct to do this:

(setq magit-git-executable "tsocks git")

When pushing with magit you would get:

Searching for program: no such file or directory, tsocks git

The problem is that magit will look for a program called "tsocks git", which of course does not exist. What we need to do is wrap git in a shell script that calls tsocks. For example, create ~/bin/tgit with the following contents:

#!/usr/bin/env sh

tsocks git "$@"

Now tell magit to use it:

(setq magit-git-executable "~/bin/tgit")

That’s it – now it’s possible to use magit comfortably as usual.

Categories: git, magit, networking, proxy, socks, ssh

Create VirtualBox virtual machine in command line

30 January 2016

Although I don’t think about it every day, today I realized that I have used quite a few Linux-based and Unix-like systems so far: a short adventure with Fedora (known as Fedora Core 3 back then) around 10 years ago, Slackware at home (and soon also at work), Tizen at work, Ubuntu at work and on my parents’ machine, FreeBSD in VirtualBox, OpenWRT on a router, Android on a phone and a tablet, and OpenELEC on a Raspberry Pi. Each system is different: they differ in package manager, startup style and libc, and some provide GNU coreutils while others don’t. Time to try even more – let’s try new Fedora, Arch, Debian, CentOS, Gentoo, NetBSD, macOS, OpenIndiana, BeOS, Haiku, ReactOS and others! I am especially interested in learning new concepts and in easily testing the shell scripts I write; ideally they should be as portable as possible. VirtualBox is great for testing new systems: it allows running multiple systems at the same time without the constant need to reboot and to tinker with partition layouts and boot order. To make experimenting with new systems easier I wrote a short script that creates a virtual machine automatically. The first two parameters are mandatory: the name of the new virtual machine and the path to an ISO file. The remaining three are optional: RAM size, disk size and system type, defaulting to 256 MB, 20 GB and 64-bit Linux respectively. Here it is:

#!/usr/bin/env sh
# vb.sh: create a new virtualbox machine
#
## (c) 2016, Arkadiusz Drabczyk

if [ "$#" -lt 2 ]
then
    echo Usage: "$0" \<MACHINE-NAME\> \<ISO\> [RAM] [DISK] [TYPE]
    echo Example:
    echo ./vb.sh \"Slackware x64 14.1\" slackware64-14.1-install-dvd.iso
    echo RAM is 256 megabytes if not explicitly requested.
    echo Disk is 20 gigabytes if not explicitly requested.
    echo Type is Linux_64 if not explicitly requested.
    exit 1
fi

ram=${3:-256}
hdd=${4:-20000}
type=${5:-Linux_64}

set -e

vboxmanage createvm --name "$1" \
                    --ostype "$type" \
                    --register

vboxmanage modifyvm "$1" \
                    --memory "$ram" \
                    --usb on

vboxmanage createhd --filename "$PWD/${1}.vdi" \
                    --size "$hdd"

vboxmanage storagectl "$1" \
                    --name "IDE Controller" \
                    --add ide

vboxmanage storageattach "$1" \
                    --storagectl "IDE Controller" \
                    --port 0 \
                    --device 0 \
                    --type hdd \
                    --medium "$PWD/${1}.vdi"

vboxmanage storageattach "$1" \
                    --storagectl "IDE Controller" \
                    --port 1 \
                    --device 0 \
                    --type dvddrive \
                    --medium "$2"

Also available as a GitHub gist: https://gist.github.com/ardrabczyk/65b68d0121f2964cd99e

Categories: shell, virtualbox

Git diff

23 August 2015

Sometimes it’s necessary to apply a single commit from a Git repository to another project, and the output file produced by git format-patch may be unsuitable for this when the other project uses a different VCS. In such a situation we can generate a patch using the git diff command. It will produce a standard patch file in unified format:

git diff 411fde71965dd79900f553b28655f4c751744505^ 411fde71965dd79900f553b28655f4c751744505 > PATCH

We can apply PATCH with a standard patch command:

patch -p1 < PATCH
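The whole round trip can be tried end to end in throwaway directories (repository contents and names below are made up for the demonstration):

```shell
# Build a tiny repo with two commits, export the second commit as a
# plain unified diff, and re-apply it with patch(1) outside of Git.
src=$(mktemp -d); dst=$(mktemp -d)
cd "$src"
git init -q
echo v1 > f
git add f
git -c user.name=demo -c user.email=demo@example.com commit -qm first
echo v2 > f
git -c user.name=demo -c user.email=demo@example.com commit -qam second
git diff HEAD^ HEAD > "$dst"/PATCH

echo v1 > "$dst"/f   # the "other project" still holds the old version
cd "$dst"
patch -p1 < PATCH
cat f                # prints v2
```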

Categories: git, patch

My new e-mail setup

12 August 2015

Managing e-mail is a science. Filtering hundreds of incoming messages while still not missing the important ones, and getting formatting and addressing right, can be problematic. Add a huge number of available tools and e-mail providers to the mix. I needed to rethink my e-mail setup because it just didn’t work. I am subscribed to a number of mailing lists, and because of that my mailbox was full of less important e-mails coming from them. I enjoy reading them, although I am considering unsubscribing from most of them and reading them via the Gmane NNTP interface. It became harder and harder to find a reply to an important e-mail I was expecting. I also maintain several folders for specific purposes, for example a folder named TRAVEL for all e-mails related to traveling or BANKS for all e-mails sent by my bank.

I had a specific set of requirements that a new setup must meet:

1) make it possible to access all e-mails, including sent items, both on the PC and on Android mobile devices
2) keep a local backup of all e-mails, except those sent to mailing lists, on the PC for peace of mind
3) filter incoming e-mails directly on the server, not locally. There is no procmail for Android, and even if there were it would probably drain the battery and require me to maintain procmail filters on several devices

But first things first – I needed to clean up my current inbox. All e-mails sent to mailing lists should go to their respective folders. I already had thousands of them, so moving them manually was out of the question. I have my e-mail account at Hostgator, so I can access it over SSH and run procmail directly inside the e-mail directory. I created a ~/.procmailrc that looks like this:

MAILDIR=$HOME/mail/drabczyk.org/arkadiusz
DEFAULT=$MAILDIR

LOGFILE=$HOME/.procmail.log
# Verbose logging for troubleshooting:
VERBOSE=YES
LOGABSTRACT=YES

:0:
* ^TO_git@vger.kernel.org
.git_list/

:0:
* ^TO_fxos-feedback@lists.mozilla.org
.fx_os_list/

:0:
* ^TO_gdb@sourceware.org
.gdb_list/

(...)

# all other mails will be moved to the inbox
:0:
$DEFAULT/

and a simple CLEANUP.sh script similar to this:

#!/usr/bin/env sh

set -e

# INPUTMAILDIR points at the maildir whose messages should be re-filtered
for dir in cur new; do
    find "$INPUTMAILDIR/$dir" -type f | while IFS= read -r file; do
        procmail < "$file"
        rm "$file"
    done
done

It took some time to process all e-mails but finally my mailbox was clean. The remaining e-mails required manual work; they were either spam sent to a mailing list using Bcc:, or advertisements and special offers from various legitimate companies. The only thing I didn’t predict was that several e-mails, a reddit registration e-mail amongst them, ended up first in the message list in SquirrelMail because they lacked a Date header, but that’s not a big problem.

The next step was to set up procmail to filter all incoming e-mails on the server. It wasn’t as simple as I thought it would be, because Hostgator uses cPanel, cPanel uses Exim, and Exim has its own filtering capabilities. I didn’t feel like rewriting all my procmail rules in Exim’s format and learning another tool that does the same thing. Luckily it’s possible to convince Exim to hand e-mail filtering over to procmail by modifying the /etc/vfilters/<USERNAME> file, for example to this:

# Exim filter

# Auto Generated by cPanel.  Do not manually edit this file as your changes will be overwritten.  If you must edit this filter, edit the corresponding .yaml file as well.

if not first_delivery and error_message then finish endif

#filter
if
 $header_from: matches ".*"
then
 pipe "/usr/bin/procmail /home1/rkumvbrh/.procmailrc"
endif

Note that the /etc/vfilters/ directory itself has no +r for anybody except root, so it’s not possible to list its contents, but it has +x, so it’s possible to modify existing files inside the directory if you know their names.

So far, so good. My mailbox was now clean, and it will stay that way because all e-mails are now filtered on the server. The next step was to synchronize the mailbox state between devices. For example, if I move a given e-mail to the TRAVEL folder I want to see that e-mail in the TRAVEL folder on all devices. Of course IMAP is the way to go here. As I said, I also want to keep a backup of all incoming e-mails except for those sent to mailing lists. I used to use getmail for downloading e-mails, but this time I decided to give OfflineIMAP a try. It turned out to work really well, although some people don’t favor it. It does 2-way synchronization, makes it possible to exclude certain folders from synchronization, and can transpose folder names on the fly. In my case procmail puts all e-mails sent to mailing lists in folders named <LIST_NAME>_list on the server. I can easily tell OfflineIMAP not to download them without specifying all the folder names:

folderfilter = lambda folder: not re.search('(_list$)', folder)

I am used to box1 as the main folder name instead of INBOX, and to sent for the folder with sent items instead of INBOX.Sent. No problem for OfflineIMAP:

nametrans = lambda folder: {'box1':   'INBOX',
                            'sent':   'INBOX.Sent',
                            }.get(folder, folder)

I added OfflineIMAP to my crontab to run every 5 minutes. As I said, OfflineIMAP does 2-way synchronization. That means that when I delete an e-mail, or move it from box1 to the TRAVEL folder, the change is reflected on the server and therefore visible in all other clients, such as SquirrelMail or an Android mail client. And when it comes to Android, the best client I found is k9mail. It can be healed of the top-posting disease, can send all e-mails in plain text, and is open source. Setting up k9mail on Android doesn’t require any special attention, apart from not marking the mailing-list folders for synchronization. As all e-mails are filtered on the server, I don’t have to care about synchronization there. I can access all sent items at any time on any device because they’re synchronized, and all other folders are too. I can move a message to a different folder on my phone while on the go, then sit down at my PC, open up mutt and see the same thing, thanks to OfflineIMAP.

The last thing I did was unsubscribe from multiple newsletters. I didn’t need most of them anyway; they were not automatically moved to their own folders because they were hard to filter, and often I didn’t even read them, yet they caused my phone to notify me and catch my attention unnecessarily.

Now I can finally stop worrying and love e-mail again.

Categories: e-mail

Test

8 June 2015

This post, creatively titled Test, is the first post on my blog. I used Emacs org2blog to make it.

More might come.

Categories: emacs, org2blog