Poor-man’s disk usage monitoring

Here is a Bash script that monitors the usage (in %) of all disks and sends a notification by e-mail when it reaches a given limit.


#!/bin/bash
#
# Poor-man's disk usage monitoring
#
# Call this script as a cronjob, for instance every 5 minutes. It will send an
# e-mail if one of the filesystems is filled to more than LIMIT (%), at most
# once every MAIL_PERIOD seconds. An e-mail will also be sent once when the
# disk usage goes back below LIMIT.
#
# If you use Outlook as a client, set a fixed-width font for "unformatted"
# e-mails in the options (File / Options / Mail / Stationery and Fonts / At the bottom)

set -e

LIMIT="${LIMIT:-80}" # per cent
MAIL_PERIOD="${MAIL_PERIOD:-600}" # seconds
MAIL_TO=replace@me.invalid          # recipients, comma separated

# Timestamp files
MAIL_TS_FILE=/tmp/dfwarning-mail.ts
WARN_TS_FILE=/tmp/dfwarning-warn.ts

dfout=`df -h | awk -v "LIMIT=$LIMIT" '(NR >= 2 && int($5) >= LIMIT) {print $0}'`
# echo $dfout

if [ "$dfout" ] ; then
# echo "Warning detected"
touch "$WARN_TS_FILE"
if [ ! -e "$MAIL_TS_FILE" -o "`find '$MAIL_TS_FILE' -mmin +$MAIL_PERIOD 2>/dev/null`" ] ; then
# echo "Sending mail"
( echo "Warning issued" ; echo "$dfout" ) | mailx -E -s "Disk usage on `hostname`: WARNING" "$MAIL_TO"
touch "$MAIL_TS_FILE"
fi
else
# echo "No warning"
if [ -e "$WARN_TS_FILE" ] ; then
( echo "Warning cleared" ; df -h ) | mailx -E -a 'Content-Type: text/plain' -s "Disk usage on `hostname`: OK" "$MAIL_TO"
fi
rm -f "$MAIL_TS_FILE" "$WARN_TS_FILE"
fi
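
For reference, a matching crontab entry could look like this (a sketch; the script path is an assumption). Since LIMIT and MAIL_PERIOD use ${VAR:-default} expansion, they can be overridden directly in the entry:

# m h dom mon dow command
*/5 * * * * LIMIT=85 /usr/local/bin/dfwarning.sh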

Installing Linux on old iMacs

I got a few iMacs that were old but too good to go to the trash, for instance:

  • iMac5,1 from 2007, Core 2 Duo, 2GB RAM, 250GB HDD
  • iMac8,1 from 2008, Core 2 Duo, 2GB RAM, 1TB HDD

Apple has not supported these models in macOS for a long time, so I gave Linux a try. Apart from a few rough edges during the installation, described below, it was pretty easy. Both models are quite usable for light usage.

I was not successful with the installers started from USB sticks, so I ended up using a CD with the “netinstall” variant of Debian 64-bit. To start from the CD, keep “Alt/Option” pressed while booting until the boot menu is displayed, and select the CD. The “netinstall” installation is straightforward. Just make sure that the iMac has internet access through the Ethernet port. I picked Gnome as a desktop, but you might want to use one that needs fewer hardware resources.

After the installation and the first reboot, you might have two issues:

1. Black screen

This happened to me with the older iMac. If the system seems to start normally, but the screen stays black:

  • Press the power button to shut down, and again to power on
  • When the Grub menu shows up, with “Debian…” as option, press “e” to edit
  • On the line starting with “linux” and the kernel options, add: nomodeset radeon.modeset=0
  • Start the system. The desktop should show up. To make the change persistent, perform the next steps
  • Edit the Grub options: sudo nano /etc/default/grub
  • Add the options to the kernel parameters, as above: nomodeset radeon.modeset=0 (see the example after this list)
  • Update: sudo update-grub
  • Reboot
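
The resulting line in /etc/default/grub should look something like this (assuming the Debian default of “quiet”):

GRUB_CMDLINE_LINUX_DEFAULT="quiet nomodeset radeon.modeset=0"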

2. Wifi is not detected

The Wifi chip from Broadcom requires some proprietary firmware blobs. This applied to both iMac models. To get them:

  • Add “contrib” as an APT source to “/etc/apt/sources.list” (see the example after this list)
  • Run “sudo apt update”
  • Run “sudo apt install firmware-b43-installer”
  • Reboot
  • The Wifi should work now
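
For the first step, the relevant line in /etc/apt/sources.list should look something like this (a sketch assuming Debian 12 “bookworm”; adapt the release name to your installation):

deb http://deb.debian.org/debian bookworm main contrib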

Create an address and phone book from Active Directory

E-mail addresses and phone numbers are usually available in any AD, so I expected to quickly find a solution to create a human-readable list for many users. Surprisingly, I did not find any simple turn-key solution, so I came up with the following PowerShell one-liner:

Get-ADUser -Filter "enabled -eq 'true'" -SearchBase "OU=Users,DC=mydomain,DC=de" -Properties DisplayName, Title, EmailAddress, OfficePhone | Sort DisplayName | Select DisplayName, Title, EmailAddress, OfficePhone | ConvertTo-Html | Out-File "phonebook.html"

You just have to adapt the search base to your domain and AD structure.
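
If you would rather have a spreadsheet than an HTML page, the same pipeline can end in Export-Csv instead (a sketch; the encoding is an assumption):

Get-ADUser -Filter "enabled -eq 'true'" -SearchBase "OU=Users,DC=mydomain,DC=de" -Properties DisplayName, Title, EmailAddress, OfficePhone | Sort DisplayName | Select DisplayName, Title, EmailAddress, OfficePhone | Export-Csv -Path "phonebook.csv" -NoTypeInformation -Encoding UTF8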

Creating an image from S.M.A.R.T. values

Every now and then I have to sell used hard drives. Providing S.M.A.R.T. values contributes greatly to the trust and interest of buyers, but selling platforms typically cannot deal well with the raw output of smartctl. So I wrote the following script, which generates an image containing the values.

#!/bin/bash

# Usage: smart2png </dev/...> <file.png>

if [ -z "$1" ] ; then
    echo "Please provide the device as first argument, for instance /dev/sdd" 1>&2
    exit 1
fi
if [ -z "$2" ] ; then
    echo "Please provide the output file as second argument, for instance /tmp/file.png" 1>&2
    exit 1
fi

tmpfile=$(mktemp)


# You should consider running a self-test before, e.g.:
# sudo smartctl -t short /dev/...
# sudo smartctl -t long /dev/...

rm -f "$2"

sudo smartctl -a "$1" > "$tmpfile"
retval=$?

fatalerror=$(($retval & 3))

if [ $fatalerror -ne 0 ] ; then
  ( cat "$tmpfile" ; echo "smartctl failed with fatal error $retval" ) >&2
else
  cat "$tmpfile"
  ansilove -f terminus -c 100 -r -t ans -o "$2" "$tmpfile" > /dev/null
  if [ $retval -ne 0 ] ; then
    echo "WARNING: smartctl returned error $retval"
  else
    echo "Everything is OK" 
  fi
fi

rm "$tmpfile"

exit $retval

Simple, user-centric virtualization on Linux

On macOS, I was using VMware Fusion to run Windows or Linux in a virtual machine for experimenting or for tools not available on macOS. It is a commercial product, but it made this really easy.

Since I migrated to Linux, I have been struggling to find an alternative. One option would of course be to buy VMware Workstation for Linux, but I would prefer an open-source solution. I tried:

  • virt-manager: it is very powerful but overkill for my usage. In addition, stuff like attaching USB devices to the VM is cumbersome. This is a tool for system administrators and not for normal users or developers.
  • Gnome Boxes: it is very user-friendly and provides a lot of handy features for my use case: automatic download of ISO files for VM installation, express installation for Windows, OS-specific default VM settings…
  • VirtualBox: also quite user-friendly, but it feels old and I always have some problems or open questions about it

After going back and forth between them, I found out that Gnome Boxes is the best fit for me, but I also use virt-manager for some of the more complex settings. This is possible because Gnome Boxes and virt-manager both use QEMU/KVM under the hood. However, virt-manager uses the system hypervisor by default (qemu:///system), while Boxes uses the user hypervisor (qemu:///session). You can easily connect to the user session from virt-manager using the menu entry “File” / “Add connection”. See here for more details. Once you have opened the user session, you will see the VMs you created with Boxes and can edit them with the editor provided by virt-manager, which is much more powerful (and complex) than the one in Boxes.

One thing which is annoying is that, by default, all VMs created with Boxes use the “user” network interface type, as you can see in the XML file available from “Preferences” / “Resources” / “Edit configuration”:

    <interface type="user">
      <mac address="..."/>
    </interface>

This kind of interface allows the guest to access the network through the host, but it does not allow any direct communication between the guest and the host, or between the guests. VirtualBox has the same kind of problem and requires you to define port forwarding, which is very cumbersome for network services, remote desktop sessions…

To solve this, a “bridge” interface must be used instead of the “user” one. Unfortunately, Boxes does not allow setting this natively, but this is where the other libvirt-based tools come in handy.

First, enable the bridge interface on the host:

sudo mkdir -p /etc/qemu
echo "allow virbr0" | sudo tee -a /etc/qemu/bridge.conf
sudo chmod 0644 /etc/qemu/bridge.conf
sudo chmod u+s /usr/lib/qemu/qemu-bridge-helper

You might have to repeat the chmod commands after system updates. They are required for the bridge to work with an unprivileged user.


Once this is done, open the network interface (NIC) settings of the VM, and set:

  • Network source: Bridge device
  • Device name: virbr0

By default, the bridge network is 192.168.122.0/24. You can change it by editing the network definition (for the active network, “virsh net-edit default” is the reliable way); the default definition looks like this:

$ cat /usr/share/libvirt/networks/default.xml 

<network>
  <name>default</name>
  <bridge name='virbr0'/>
  <forward/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
    </dhcp>
  </ip>
</network>

See the following page for more information: https://wiki.libvirt.org/page/Networking

Processing/filtering video with Python (to remove a logo)

I recently needed to remove a semi-transparent, white logo from a video, for private use. I tried various tools on Linux and Windows but none of them did a good job, so I wrote a Python script that does the processing for me.

The script uses MoviePy to read the video file and applies a filter (the function “frame_filter” below) to every frame. The script can be easily adapted for other use cases, for instance to actually add a logo.

MoviePy is a pretty cool project; it allows doing a lot of video editing programmatically. Here is the documentation.

import cv2
import numpy as np
import sys
from moviepy.editor import VideoFileClip, AudioFileClip, CompositeVideoClip

input_name = "my_video"

logo_frame = cv2.imread('logo.png')
mask = logo_frame.astype(np.float64)

# B G R
mask = 1 + mask * (0.2275, 0.272, 0.265) / (49, 54, 52)

logo_frame = logo_frame.astype(np.float64)

# Read original video
# fps_source has to be set as workaround for following issue:
# https://stackoverflow.com/questions/73341202/moviepy-doubles-the-speed-of-the-video-without-affecting-audio-i-suspect-a-fram/74579552#74579552
video_clip = VideoFileClip(f"{input_name}.mp4", fps_source="fps")


def frame_filter(frame):
    """Function to apply to every video frame"""
    dst = frame.astype(np.float64)

    # Subtract logo:
    dst = cv2.addWeighted(dst, 1.0, logo_frame, -1.0, 0.0)

    # Make sure no value is below 0:
    dst = cv2.max(dst, 0)

    # Compensate brightness loss due to logo change:
    dst = cv2.multiply(dst, mask, dst)

    # Make sure no value is below 0 or above 255:
    dst = cv2.min(dst, (255,255,255))
    dst = cv2.max(dst, 0)

    return dst


# The following sample code can be used to apply the filter function to
# sample pictures, saved as "frame-n.png".
# This is useful for debugging and tweaking.

# for index in range(1, 9):
#   video_frame = cv2.imread(f'frame-0{index}.png').astype(np.float64)
#   cv2.imwrite(f'frame-0{index}-filtered.png', frame_filter(video_frame))

# Apply the filter to the whole video:
result_clip = video_clip.fl_image(frame_filter)

result_clip.write_videofile(
    f"{input_name}-filtered.mp4",
    ffmpeg_params=["-crf", "1"]  # Minimum compression
)

video_clip.close()
result_clip.close()

The tricky part is to write the filter function and generate the right input for it. In my case, I was able to extract the logo from some parts of the original video where it was displayed on a black background. The source was a (bad) analog signal, so the corresponding frames were showing a lot of artifacts and noise. I applied a median filter on the relevant frames to get a clean frame, and cleaned it further manually. The result was stored in “logo.png”, used above.

To compute the median logo frame, I adapted a script from here; here is the change.
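
In case the links go stale, the core of the median approach is only a few lines (a minimal sketch; the frame file names are assumptions):

import cv2
import numpy as np

# Load the frames showing the logo on a black background:
frames = [cv2.imread(f"logo-frame-{i}.png").astype(np.float64) for i in range(1, 9)]

# The per-pixel median removes the noise that varies from frame to frame:
logo = np.median(np.stack(frames), axis=0)
cv2.imwrite("logo.png", logo.astype(np.uint8))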

Simple photo collage on Linux

I was looking today for a tool to create a simple collage from a set of pictures on Linux, basically a simple 3×3 grid. I was not keen on using an interactive tool like Gimp since I wanted to be able to tweak the list of pictures or the size of the grid easily. Unfortunately, I did not find anything convincing for this specific use case, but it was quite easy to program in Python using PIL:

from PIL import Image
import os
import math

collage_width = 6240
collage_height = 4160
collage_rows = 3
collage_columns = 3

root_path = '/path/to/single/pictures/dir'
file_names = [
    'file1.jpg',
    'file2.jpg',
    'file3.jpg',
    'file4.jpg',
    'file5.jpg',
    'file6.jpg',
    'file7.jpg',
    'file8.jpg',
    'file9.jpg',
]

save_path = "collage.jpg"

collage = Image.new("RGB", (collage_width, collage_height), color=(255, 255, 255))

size_x = math.ceil(collage_width / collage_columns)
size_y = math.ceil(collage_height / collage_rows)
collage_n_pics = collage_rows * collage_columns

for index in range(0, collage_n_pics):
    print(f"Loading image {index}/{collage_n_pics}")
    file_path = os.path.join(root_path, file_names[index])
    photo = Image.open(file_path).convert("RGB")
    photo = photo.resize((size_x, size_y))
    position_x = (index % collage_columns) * size_x
    position_y = int(index / collage_columns) * size_y
    collage.paste(photo, (position_x, position_y))

collage.save(save_path)
print(f'Collage saved as "{save_path}"')

Just adapt the variables at the beginning of the program to your own needs. The assumption is that all single images and the final collage have the same aspect ratio, otherwise there will be some holes or overlaps.

Linux: keyboard not working after waking up from standby

On my laptop, about every third time, the keyboard did not work after waking up from standby. Putting the laptop to sleep and waking it up again always helped. The following kernel options fixed it for good:

i8042.reset i8042.nomux i8042.nopnp i8042.noloop

These options are not well documented, but they seem to help with various input devices such as keyboards, mice, touchpads, trackpads… but only if they use the i8042 driver. You can get a hint by running:

lshw | grep i8042

If it does not return anything, the driver is probably not used. Otherwise you will see something like:

capabilities: i8042

You can pass the parameters to the kernel by editing the Grub configuration file:

sudo nano /etc/default/grub

Add the parameters to GRUB_CMDLINE_LINUX_DEFAULT, for instance:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash i8042.reset i8042.nomux i8042.nopnp i8042.noloop"

Update Grub:

sudo update-grub

Finally, reboot.

Using FreeMind on Linux in 2023

FreeMind is a mind-mapping tool. The latest version is 1.0.1 from 2014, since it is unfortunately not maintained anymore. After trying a few alternatives, including Freeplane, FreeMind is still my favorite. It still works fine on Linux with a manual installation (on Windows too, but that will be covered in another post).

  • Download freemind-bin-max-1.0.1.zip from the official project page on SourceForge
  • Extract it, for instance as /home/user/freemind-1.0.1
  • Download and extract a Java 8 JVM, for instance Corretto. I recommend using a .tar.gz package, to keep the system untouched. Let’s assume that the Java executable is now available as /home/user/java-8/bin/java
  • To start FreeMind manually, run:
/home/user/java-8/bin/java -jar /home/user/freemind-1.0.1/lib/freemind.jar

Next, you might want to integrate it in your desktop environment. First, extract the FreeMind icon from the JAR file:

unzip -p /home/user/freemind-1.0.1/lib/freemind.jar images/76812-freemind_v0.4.svg > /home/user/freemind-1.0.1/FreeMind.svg

Then create the following file:

/home/user/.local/share/applications/FreeMind.desktop

with the following content:

[Desktop Entry]
Type=Application
Name=FreeMind
Comment=Mind-mapping software written in Java
Icon=/home/user/freemind-1.0.1/FreeMind.svg
Exec=/home/user/java-8/bin/java -jar /home/user/freemind-1.0.1/lib/freemind.jar
Terminal=false
Categories=Office

Finally, update the desktop:

update-desktop-database ~/.local/share/applications

CMake/GCC/Clang: clean source paths, strip base directory

The C/C++ macro __FILE__ is commonly used to embed the source location of messages in log files, exceptions… With GCC and Clang, it contains the path provided on the command line for the input file. Depending on the build system and its configuration, it might be relative or absolute. The latter is better for clarity’s sake, but in the best case it makes messages unnecessarily long and confusing for end-users, and in the worst case it reveals details that should not be shared. To avoid that, the macro prefix map options can be used. In CMake:

    if( CMAKE_BUILD_TYPE STREQUAL "Release" )
        # Strip absolute paths from __FILE__ & co for release builds.
        # This prevents leaking irrelevant information to end-users.
        # The options are the following and supported by GCC and Clang:
        #   -fmacro-prefix-map: mainly __FILE__
        #   -fdebug-prefix-map: debug information
        #   -fprofile-prefix-map: profiling information
        #   -ffile-prefix-map: implies -fmacro-prefix-map, -fdebug-prefix-map and -fprofile-prefix-map

        # CMAKE_CXX_FLAGS is a string, not a list, so use add_compile_options()
        # instead of list(APPEND CMAKE_CXX_FLAGS ...):
        add_compile_options(
            -fmacro-prefix-map=${CMAKE_SOURCE_DIR}=SRC
            -fmacro-prefix-map=${CMAKE_BINARY_DIR}=BIN
        )
    endif()
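
To see the effect, a quick check could look like this (a sketch; the file path is an assumption):

// src/main.cpp
#include <cstdio>

int main() {
    // With the mapping active, this prints e.g. "SRC/src/main.cpp"
    // instead of "/home/me/project/src/main.cpp".
    std::printf("%s\n", __FILE__);
    return 0;
}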

Notes:

  • The code will only work for single config CMake generators, like Ninja. It will need to be adapted for multi-configuration generators, maybe using generator expressions
  • CMAKE_BINARY_DIR is also replaced to cover generated files
  • If you use other base directories than CMAKE_SOURCE_DIR and CMAKE_BINARY_DIR, you can extend the list
  • We use macro-prefix-map to affect only the macros. Use the other options with care.

Case sensitive label search in Jira

In Jira, search is case-insensitive, so the following JQL statement:

label = 'MyLabel'

will match ‘MyLabel’, but also ‘mylabel’ or ‘MYLABEL’. This makes it particularly hard to fix case issues in existing tickets.

If you happen to have ScriptRunner, a pretty common add-on for Jira, you can use the regular expression matching function for a case-sensitive search:

issueFunction in issueFieldMatch( "labels is not empty", labels, "\\bMyLabel\\b" )

Replace ‘MyLabel’ with your label. The ‘\b’ matches a word boundary, which is necessary because the whole list of labels will be used for matching, not only the single labels.

If you do not have ScriptRunner, then, well, sorry, because Atlassian does not care about you. This use case has been known to be an issue for years.

Installing Linux on an old iMac (early 2009)

Recently, someone asked me to make an old and slow iMac (early 2009) fast again. In short, the solution I came up with is:

  • Install the operating system on an external USB SSD instead of the internal hard drive
  • Install Xubuntu 22.04 instead of macOS
  • Use an alternative installation source for the Nvidia driver

Hardware

The first obvious optimization was to use an SSD instead of the original magnetic hard drive. Unfortunately, Apple made this upgrade relatively cumbersome, and I wanted to save myself the hassle and the risk of breaking something. It turns out this model can boot fine from an external USB device. I also tried with a Firewire 400 device, which should be a bit faster than USB 2.0; even though this is supposed to work according to multiple online sources, it did not for me. So I stuck with an external USB 2.0 SSD.

Operating system

When it arrived, the iMac was running the latest macOS version supported by Apple, El Capitan. As of 2022, it is 6 years old and has not been maintained for 4 years. Keeping it was not an option, for security reasons, but also because it is only a matter of time before new applications refuse to work on it.

So we chose to give Linux a shot. I started with the most common desktop distribution, Ubuntu 22.04. The installation on the external drive worked smoothly, but there were two issues:

  • While usable and performing better than El Capitan on the internal hard drive, the OS was relatively slow to start and use. After trying out a few alternative distributions, I opted for Xubuntu. It is Ubuntu-based, so it still has the advantage of being well supported by most applications, and it was faster and nicer to use than Lubuntu in my short experimentation. The “data” footprint is much lower than the standard Ubuntu with Gnome, which is a relief for the relatively slow USB 2.0 connection.
  • Starting Firefox worked, but the system froze shortly after it started to display some web pages.

Alternative Nvidia driver

As I found out, the latter issue was due to the graphics driver. By default, Ubuntu installs the open-source “nouveau” driver for Nvidia GPUs. This computer has a GeForce 9400. Ubuntu also supports installing the proprietary Nvidia driver by enabling the use of “3rd party and proprietary software” in the software sources. This worked for the Wifi (Broadcom), but no driver was offered for the GPU. According to Nvidia, the last version of the proprietary driver that supported this GPU was 340.108, while the current version was 515.76. Ubuntu probably dropped support for this version a while ago.

I tried to download and install the legacy Nvidia driver myself, like in the good old days, but it failed due to build issues: the legacy driver does not build with recent kernel versions. It turns out Butterfly on launchpad.net created a custom software source (PPA) with the legacy driver that works out of the box on Ubuntu 22.04.

To use this driver, a few command lines are sufficient (the package name below, nvidia-340, matches the legacy 340.108 series; check the PPA for the exact name):

sudo add-apt-repository ppa:kelebek333/nvidia-legacy
sudo apt update
sudo apt install nvidia-340
sudo reboot

After the reboot, the Nvidia logo was shown when the graphical interface started. The performance was better, and the crash was solved. Thank you Butterfly!

Final notes

A nice aspect of this setup is that El Capitan is still installed on the internal hard drive, which makes the transition from macOS to Linux smoother and safer. The users can just hold the “Alt” key on boot to start it instead of Xubuntu.

The performance is OK for a computer of this age. It takes about 1 minute to start the OS, and a few seconds to start applications like Firefox or LibreOffice. Then, they are usable. Mission accomplished. That is one computer less thrown away due to the recklessness of big companies regarding environmental issues. I wish Apple would use a tiny part of its billions of profits and savings to keep its old hardware usable. Linux proves that it is technically possible.

Custom ROMs for mobile phones

This article contains links related to custom ROMs for mobile phones, mainly for my own reference, but maybe it is useful to someone else. The main advantages of custom ROMs are:

  • Increased lifetime of the hardware, long after the manufacturer stopped providing updates
  • Clean OS (no bloatware…)
  • Better privacy



How to build ART 1.15 on Ubuntu 22.04

ART is a free raw image processing program, a kind of spin-off of RawTherapee with some major improvements, as described on its web page. ART provides some binary bundles for Linux; unfortunately, they do not work on recent versions of Ubuntu, like 22.04. The error is the following (or similar):

(ART.bin:71335): GLib-GIO-ERROR **: Settings schema
'org.gnome.settings-daemon.plugins.xsettings' does not contain a key
named 'antialiasing'
/tmp/.mount_ART.Ap9M9vEx/art/ART: line 18: 71335 Trace/breakpoint trap  
(core dumped) "$d/ART.bin" "$@" 

It turns out it is pretty easy to circumvent this problem by building ART from the source code. First, install the dependencies:

sudo apt install build-essential cmake ninja-build curl git libcanberra-gtk3-dev libexiv2-dev libexpat-dev libfftw3-dev libglibmm-2.4-dev libgtk-3-dev libgtkmm-3.0-dev libiptcdata0-dev libjpeg-dev liblcms2-dev liblensfun-dev libpng-dev librsvg2-dev libsigc++-2.0-dev libtiff5-dev zlib1g-dev 

Then clone the Git repository:

git clone https://bitbucket.org/agriggio/art.git

Check out the version you need (ART_VERSION):

cd art
git tag
ART_VERSION=1.15
git checkout $ART_VERSION

Create a build directory:

mkdir build
cd build

Choose where to “install” ART:

ART_TARGET_DIR=$HOME/Applications/ART-$ART_VERSION

Configure the build using CMake. It takes about 2-3 minutes on my computer:

cmake \
    -GNinja \
    -DCMAKE_BUILD_TYPE=Release  \
    -DPROC_TARGET_NUMBER="2" \
    -DBUILD_BUNDLE="ON" \
    -DBUNDLE_BASE_INSTALL_DIR="$ART_TARGET_DIR" \
    -DOPTION_OMP="ON" \
    -DWITH_LTO="OFF" \
    -DWITH_PROF="OFF" \
    -DWITH_SAN="OFF" \
    -DWITH_SYSTEM_KLT="OFF" \
    -DWITH_BENCHMARK="OFF" \
    -DENABLE_TCMALLOC="ON" \
    ".."

Then build and install it:

ninja install

You can then run it using (adapt the version as required):

$HOME/Applications/ART-1.15/ART

You can also create a desktop file to get it displayed by the Gnome desktop:

~/.local/share/applications/ART.desktop

with the following content:

[Desktop Entry]
Type=Application
Name=ART
Comment=Raw image processing program
Icon=/home/<user>/Applications/ART-1.15/images/ART-logo-256.png
Exec=/home/<user>/Applications/ART-1.15/ART
Terminal=false
Categories=Graphics

CMake/CPack: start menu and desktop shortcuts with WiX

The CMake documentation is pretty thin on the subject, and the information on the internet is contradictory or only covers NSIS and other generators, so here is how to add start menu and desktop shortcuts with CPack and WiX.

For a shortcut in the start menu:

    set_property(
        INSTALL
            subdir/$<TARGET_FILE_NAME:myexecutable>
        PROPERTY
            CPACK_START_MENU_SHORTCUTS
                "My executable"
    )

For a shortcut on the desktop:

    set_property(
        INSTALL
            subdir/$<TARGET_FILE_NAME:myexecutable>
        PROPERTY
            CPACK_DESKTOP_SHORTCUTS
                "My executable"
    )

Note that “subdir” must not be the string “.” because file paths provided to set_property(INSTALL) must be normalized. This can be a problem if subdir is stored in a variable (${subdir}/$<TARGET_FILE_NAME:myexecutable>): it will work in the general case, but not if subdir is equal to “.”. As a workaround, you may want to use this helper function:

function(set_install_property install_file property_name property_value)
    if(NOT DEFINED install_file OR NOT DEFINED property_name OR NOT DEFINED property_value)
        message(FATAL_ERROR "Invalid usage: ${install_file};${property_name};${property_value}")
    endif()

    if ( install_file MATCHES "^\\./")
        # Workaround for https://gitlab.kitware.com/cmake/cmake/-/issues/18540
        string(SUBSTRING ${install_file} 2 -1 install_file)
    endif()

    set_property(
        INSTALL
            ${install_file}
        PROPERTY
            ${property_name}
                ${property_value}
    )
endfunction()

And call it like this:

set_install_property(${subdir}/$<TARGET_FILE_NAME:myexecutable> CPACK_START_MENU_SHORTCUTS "My executable")

macOS: pkg installation not installing anything

If you create your own PKG files on macOS, for instance with CMake/CPack, you might come to the situation where installing the PKG does not seem to install anything.

First you should check whether your PKG file actually contains the relevant data:

  • Check the PKG file size
  • Use unpkg to extract it and verify the content

If the PKG file contains the proper data and the installation is successful, but you don’t find the files where you expect them (for instance, /Applications/MyApp.app), then try this:

sudo grep relocated /var/log/install.log

You might see something like:

2022-01-21 16:43:10+01 hostname installd[25864]: PackageKit: Applications/MyApp.app relocated to Users/colleague/Development/some_build_dir/bin/HisApp.app

By default, bundles are relocatable on macOS. It means that after installation, the user can move the .app or .framework files manually, and when a new version of the PKG gets installed again, it will install the bundles where the user put them. macOS keeps track of them somehow.

In the case above, macOS (PackageKit) decided to install my bundle in the user directory of one of my colleagues. One of these cases where the system tries to be too clever…

I did not find good documentation on how this works and how to troubleshoot it. While I understand the motivation behind the feature, it is not always wanted: during development, on multi-user computers (like above), but also for normal deployment. A user might be confused that installing the PKG a second time does not lead to the same results, especially because there is no feedback whatsoever about the relocation, except the small line hidden in /var/log/install.log.

As a developer, you can set BundleIsRelocatable to False in the component plist when you create the PKG files, to prevent the relocation altogether.
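
If you build the PKG with pkgbuild directly, this could look as follows (a sketch; the paths, identifier and version are placeholders):

# Generate a component plist describing the payload:
pkgbuild --analyze --root ./payload component.plist

# Disable relocation for the first component (repeat for each array index):
plutil -replace 0.BundleIsRelocatable -bool NO component.plist

# Build the package using the edited component plist:
pkgbuild --root ./payload --component-plist component.plist \
    --identifier com.example.myapp --version 1.0 MyApp.pkg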

How to create a USB installation stick for macOS on Linux

There are multiple ways to do this, but one I found particularly convenient is to leverage the Github project OSX-KVM by @kholia (which builds on @FoxletFox’s macOS-Simple-KVM).

First, clone the repository:

git clone https://github.com/kholia/OSX-KVM

Go into the local clone:

cd OSX-KVM

Fetch the required version with the according script:

./fetch-macOS-v2.py
1. High Sierra (10.13)
2. Mojave (10.14)
3. Catalina (10.15)
4. Big Sur (11.6) - RECOMMENDED
5. Monterey (latest)

Choose a product to download (1-5): 4

Then convert the downloaded file. The source name is always “BaseSystem.dmg”; you might want to adapt the target name to the actual version (for instance, Catalina.img):

qemu-img convert BaseSystem.dmg -O raw BaseSystem.img

Finally, copy the img file to the USB stick (adapt sdX to the actual device):

sudo dd if=BaseSystem.img of=/dev/sdX bs=10000000 status=progress
sudo sync

CMake, clang-tidy and MinGW

To enable clang-tidy during the build using CMake and MinGW, add this to the CMakeLists.txt:

if ( TARGET_WINDOWS )
    set(CMAKE_CXX_CLANG_TIDY
        /path/to/clang-tidy.exe
        --extra-arg=--target=x86_64-w64-mingw32
    )
endif()

You can either put the clang-tidy options directly in CMAKE_CXX_CLANG_TIDY, or in a file “.clang-tidy” located in the root directory of your project.
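
For the second variant, a minimal “.clang-tidy” file could look like this (the check selection is only an example):

Checks: "-*,bugprone-*,modernize-*,performance-*"
WarningsAsErrors: ""
HeaderFilterRegex: ".*"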

Old printers on Windows 10 (for instance Canon Bubble Jet i560)

The Canon Bubble Jet i560 is a very old printer from 2003 with USB support, but some of them are still lying around and working fine. Plugging it into a Windows 10 computer produces the usual sound of a new device being plugged in, but Windows does not configure it as a printer automatically. Canon does not offer any drivers, not even for Windows 7. At this point you might think that all is lost, but as it turns out, Windows 10 supports it all right with drivers provided through Windows Update. It just requires a few manual steps.

Start the standard procedure to add a new printer.

Windows won’t find it, but click on “The printer that I want isn’t listed”. In the window that appears, select “Add a local printer…”.

Select the proper port, e.g. the USB port.

Now comes the trick: click on “Windows Update”.

Windows will then download stuff from Windows Update, which might take a while. Once this is finished, you can select the proper manufacturer on the left side (for instance “Canon”) and the right model.

I guess that this works for other old models as well.

Installing / downgrading an Android app manually

First, enable ADB in the developer options.

Then execute the following commands on a computer with the Android SDK. In this example, I downgrade Firefox for Android to version 68.11 (https://ftp.mozilla.org/pub/mobile/releases/68.11.0/android-api-16/multi/fennec-68.11.0.multi.android-arm.apk)

adb push fennec-68.11.0.multi.android-arm.apk /data/local/tmp
adb shell pm install -d /data/local/tmp/fennec-68.11.0.multi.android-arm.apk
adb shell rm /data/local/tmp/fennec-68.11.0.multi.android-arm.apk

Custom mouse configuration on Linux

As I found out, it is not easy to customize the mouse behavior on Linux. I currently use a Logitech MX Master 3 which has a few extra thumb buttons and a thumb scroll wheel.

It is possible to customize the buttons in user space using xbindkeys, even though it is far from user-friendly (see this page on StackOverflow). However, the thumb wheel doesn’t trigger any X events, as can be seen with xev. I gave evdevremapkeys a try, but it has the same issue.

The solution came from logiops. The downsides are that it needs to be built manually (this is very easy though), runs as a service and is limited to Logitech mice, but it allows a very fine configuration of them.

The documentation of logiops is unfortunately not great. So here is my configuration file (/etc/logid.cfg) in case it can help someone. It maps the thumb keys to CTRL-PAGEUP and CTRL-PAGEDOWN for easy tab switching, and the thumb wheel to vertical scrolling.

devices: (
    {
        name: "Wireless Mouse MX Master 3";
        smartshift:
        {
            on: true;
            threshold: 30;
        };
        hiresscroll:
        {
            hires: true;
            invert: false;
            target: false;
        };
        dpi: 1000;

        buttons: (
            # Map side buttons to CTRL-UP and CTRL-DOWN, which are common
            # key shortcuts to move between tabs, for instance in Firefox
            # or the Gnome terminal.
            {
                cid: 0x53;
                action =
                {
                    type: "Keypress";
                    keys: ["KEY_LEFTCTRL", "KEY_PAGEUP"];
                };
            },
            {
                cid: 0x56;
                action =
                {
                    type: "Keypress";
                    keys: ["KEY_LEFTCTRL", "KEY_PAGEDOWN"];
                };
            }
        );
        thumbwheel:
        {
            divert: true;
            invert: false;
            # Map the thumb wheel to the vertical scroll axis.
            # This creates a redundancy, but I find it more pleasing to scroll
            # with the thumb than with the index or middle finger.
            left:
            {
                mode: "axis";
                axis: "REL_WHEEL";
                axis_multiplier: -1;
            };
            right:
            {
                mode: "axis";
                axis: "REL_WHEEL";
                axis_multiplier: 1;
            };
        };
    }
);

Downloading Contour Storyteller

The tool “Contour Storyteller” was developed by the company Contour to configure the cameras they developed and sold. Unfortunately, the company doesn’t exist anymore, and the links on the .com page (leading to http://update.contour.com/) are dead (error 403).

Fortunately, the links on the Japanese website are still working: http://www.contour.jp/software.html

The files are also available on some 3rd-party websites, but you should be careful with that, as they might contain malware. Here are the checksums of the files, to check that they were not tampered with:

  • Contour-Storyteller-Installer.dmg
    • MD5: d50d945ac2c8b8304ad10031e0c700b9
    • SHA256: 029c1c69cef727d546cfa4ff7c0f8f7784f147048d42711b7ac8014e4bcd9207
  • Contour-Storyteller-Installer.exe
    • MD5: 92c0a49f3e277cf0327cc94c9330aa8c
    • SHA256: d9c00d15606dcdd77f318ffbb7c8e64b2a3a1ea6ffb18135bf57c12caa0ecf52

On Windows, you will also need Quicktime, which is also deprecated, but still available on apple.com. Here are the checksums of the last version (7.7.9):

  • QuickTimeInstaller.exe
    • MD5: 1a762049bef7fc3a53014833757de2d2
    • SHA256: 56eff77b029b5f56c47d11fe58878627065dbeacbc3108d50d98a83420152c2b
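
To verify a downloaded file against these values, on Linux for instance:

sha256sum Contour-Storyteller-Installer.exe
md5sum Contour-Storyteller-Installer.exe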

Note that for configuring the camera, you can also edit the FW_RTC.txt file directly (see here for instance).

Showing the keyboard layout on Gnome

Gnome doesn’t provide a convenient way to show the keyboard layout graphically. Here is a trick that might help. It is based on a desktop file that starts “gkbd-keyboard-display” from the activities overview.

As setup, create the following file:

~/.local/share/applications/show-keyboard-layout.desktop

With the following content:

[Desktop Entry]
Type=Application
Name=Show keyboard layout
Comment=
Icon=/usr/share/icons/Yaru/256x256/devices/input-keyboard.png
Exec=gkbd-keyboard-display -l "us(altgr-intl)"
Terminal=false
Categories=Utility

Press ALT-F2, and enter ‘r’ to restart the Gnome desktop.

To open the layout window, press the Super key (e.g. Windows key), and then enter the first letters of “Show keyboard layout” until the proper entry is shown. Finally, press enter.

The layout shown is hard-coded. You can query the currently used layout with the following command and adapt the .desktop file accordingly:

setxkbmap -query
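
The output will look something like this (the values are examples); the “layout” and “variant” fields map to the -l argument above:

$ setxkbmap -query
rules:      evdev
model:      pc105
layout:     us
variant:    altgr-intl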

Tested on Ubuntu 20.10

Thumbnails for RAW images in Gnome Files (Nautilus)

By default, Gnome’s file browser, Nautilus, doesn’t show thumbnails for RAW images. Here is how to fix this. It was tested on Ubuntu 20.10 and relies on the following tools that must be installed manually:

  • dcraw, to extract the thumbnails as JPEG from the RAW file
  • exiftool, to get the orientation of the image
  • ImageMagick’s convert, to create the PNG file required by Nautilus from the JPEG file

First, create the following script as:

/usr/bin/dcraw-thumbnailer

Note that this script cannot be located in just any folder, especially not in a user folder: for security reasons, recent versions of Nautilus run the thumbnail creation scripts in a sandbox. See this link for more details.

Here is the content of the file:

#!/bin/bash

# Usage:
#   dcraw-thumbnailer [size] [source] [destination]

# Exit with error on any error
set -e

size=$1
source=`echo "$2" | sed 's/file:\/\///'`
destination=$3

# Conversion table from EXIF orientation code to degrees (Bash 4 and newer)
declare -A conv=( ["1"]="0" ["3"]="180" ["6"]="90" ["8"]="270" )

# Read orientation with exiftool
# Triple -s : output only the value
# -n        : output numerical value, do not convert to human readable form
rotation_code=`exiftool -Orientation -s -s -s -n "$source"`
rotation_deg="${conv[$rotation_code]}"

# Extract the thumbnail (usually stored as JPEG) with dcraw, resize, rotate  and convert it.
# The target format will be inferred from the destination file name (usually PNG).
dcraw -c -e -w "$source" | convert -resize ${size}x${size} -rotate "${rotation_deg}" - "$destination"

Set execution flag:

chmod 755 /usr/bin/dcraw-thumbnailer

Check if the script is working correctly:

/usr/bin/dcraw-thumbnailer 256 /path/to/some/raw_file /tmp/test.png
gio open /tmp/test.png

This should open the default image viewer with the thumbnail.

Then, create a thumbnailers configuration file. For the current user, you can put it in:

~/.local/share/thumbnailers/dcraw.thumbnailer

Alternatively, you can make it system wide by putting it in:

/usr/share/thumbnailers/

It must have the following content:

[Thumbnailer Entry]
TryExec=/usr/bin/dcraw-thumbnailer
Exec=/usr/bin/dcraw-thumbnailer %s %i %o
MimeType=image/x-sony-arw;image/x-canon-cr2;image/x-canon-crw;image/x-kodak-dcr;image/x-adobe-dng;image/x-epson-erf;image/x-kodak-k25;image/x-kodak-kdc;image/x-minolta-mrw;image/x-nikon-nef;image/x-olympus-orf;image/x-pentax-pef;image/x-fuji-raf;image/x-panasonic-raw;image/x-sony-sr2;image/x-sony-srf;image/x-sigma-x3f;

The MIME type is set to act on common RAW file formats. You can extend the list with other types if required. To find out the MIME type of any file, see this article.

(Only for old versions of Gnome) Finally, open the Files (e.g. Nautilus) preferences and set the maximum file size for thumbnail creation to a value that suits you, since the default value might be too low and prevent thumbnails from being created.

Notes:

  • I use dcraw instead of exiftool because it is faster, at least in my case
  • Thumbnails are located in “~/.cache/thumbnails”, delete it to force a refresh
  • For debugging and troubleshooting, you might want to use the following command line, which shows what is going in on in Nautilus:
    G_MESSAGES_DEBUG="all" NAUTILUS_DEBUG="Window" strace -s 300 -v nautilus

In case you need the same thing for videos, give ffmpegthumbnailer a try. Ubuntu provides a package for it that works out of the box with Nautilus.

Getting MIME type from GTK or Gnome for a given file

For scripting, I needed to get the MIME type as seen from the GTK and Gnome perspective. I didn’t find any standard tool to do this, but this Python one-liner does the trick:

python3 -c 'from gi.repository import Gio ; import sys ; filename=sys.argv[1] ; mime_type = Gio.content_type_guess(filename, None)[0] ; print(filename + ": " + mime_type)' FILENAME.ext

Example:

$ python3 -c 'from gi.repository import Gio ; import sys ; filename=sys.argv[1] ; mime_type = Gio.content_type_guess(filename, None)[0] ; print(filename + ": " + mime_type)' /usr/share/pixmaps/debian-logo.png

/usr/share/pixmaps/debian-logo.png: image/png

If you need this often, you can easily make a shell alias or a Python script (*.py) out of it.
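
As a script, it could look like this (a sketch):

#!/usr/bin/env python3
# Print the MIME type of each file given as argument, as GTK/Gnome sees it.
import sys
from gi.repository import Gio

for filename in sys.argv[1:]:
    mime_type = Gio.content_type_guess(filename, None)[0]
    print(f"{filename}: {mime_type}")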

An alternative is to use the tool “xdg-mime”, but I am not sure if it always gives the same results as GTK/Gnome itself. The usage is:

$ xdg-mime query filetype /usr/share/pixmaps/debian-logo.png
image/png

Brother printer/scanner on Ubuntu

On Ubuntu 20.10, my Brother MFC-9330CDW was found on the network out of the box as a printer, but to use it as a scanner, I had to download the drivers from the Brother web site and install them:

sudo dpkg -i brscan4-0.4.10-1.amd64.deb

Then use the brsaneconfig4 tool to configure the scanner:

/opt/brother/scanner/brscan4/brsaneconfig4  -a  name=MFC-9330CDW-Scanner  model=MFC-9330CDW  ip=192.168.x.y

Then the tool simple-scan was able to see it, but not to use it, constantly reporting the following error message:

unable to connect to scanner

Adding the relevant users to the lp group helped (log out and back in for the group change to take effect):

sudo adduser $USER lp

Restoring data from an encrypted Ubuntu installation

A friend of mine recently had issues with his laptop running Ubuntu. It reached the point where the data on the hard drive was not accessible anymore. He tried to restore it using a live CD but didn’t succeed, so he asked me for help. Here is the story, in case it can help someone else.

The simplified story

Starting from the live CD, let’s have a look at the hard drive.

# fdisk -l /dev/sda
Disk /dev/sda: 698.7 GiB, 750156374016 bytes, 1465149168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7c1e66cd
Device        Boot  Start        End    Sectors   Size Id Type
/dev/sda1 *      2048     499711     497664   243M 83 Linux
/dev/sda2      501758 1465147391 1464645634 698.4G  5 Extended
/dev/sda5      501760 1465147391 1464645632 698.4G 83 Linux

The interesting partition is obviously sda5:

# mount /dev/sda5 /mnt/tmp1
mount: /mnt/tmp1: unknown filesystem type 'crypto_LUKS'.

It is an encrypted LUKS partition. Let’s unlock it:

# udisksctl unlock -b /dev/sda5
Passphrase: 
Unlocked /dev/sda5 as /dev/dm-3.

Let’s try to mount it now:

# mount /dev/dm-3 /mnt/tmp2
mount: /mnt/tmp2: unknown filesystem type 'LVM2_member'.

OK, it’s a LVM partition. Let’s try to activate it:

# vgscan
  Reading volume groups from cache.
  Found volume group "ubuntu-vg" using metadata type lvm2
# lvscan
  inactive          '/dev/ubuntu-vg/root' [<694.46 GiB] inherit
  inactive          '/dev/ubuntu-vg/swap_1' [<3.94 GiB] inherit
# lvchange -ay /dev/ubuntu-vg/root

Now mount it:

# mount /dev/ubuntu-vg/root /mnt/tmp2

Let’s see if we can access the data, in home:

# ls -la /mnt/tmp2/home/user/
total 8
drwxr-xr-x 5 root    root    4096 Oct 12  2017 ..
lrwxrwxrwx 1 user user   56 Dec  7  2014 Access-Your-Private-Data.desktop -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.desktop
lrwxrwxrwx 1 user user   32 Dec  7  2014 .ecryptfs -> /home/.ecryptfs/user/.ecryptfs
lrwxrwxrwx 1 user user   31 Dec  7  2014 .Private -> /home/.ecryptfs/user/.Private
lrwxrwxrwx 1 user user   52 Dec  7  2014 README.txt -> /usr/share/ecryptfs-utils/ecryptfs-mount-private.txt

Great, another layer, this time with ecryptfs.

# apt-get install ecryptfs-utils
# ecryptfs-recover-private /mnt/tmp2/home/.ecryptfs/user/.Private
INFO: Found [/mnt/tmp2/home/.ecryptfs/user/.Private].
Try to recover this directory? [Y/n]: Y
INFO: Found your wrapped-passphrase
Do you know your LOGIN passphrase? [Y/n] Y
INFO: Enter your LOGIN passphrase...
Passphrase: 
Inserted auth tok with sig [805ad16ae6710569] into the user session keyring
INFO: Success!  Private data mounted at [/tmp/ecryptfs.bHADxhOE].

Here we are. The user data can now be found in /tmp/ecryptfs.bHADxhOE

The rest of the story

Actually, this was not as easy, because the hard drive was failing and accessing it directly led to sector read errors when trying to activate the LVM stuff. So I first created an image of the whole disk using ddrescue, which is a great tool for this:

ddrescue /dev/sda sda.img sda.map

The map file is where ddrescue saves the current progress so that you can interrupt the process and resume it later. Since creating an image from a failing disk can take hours or even days, you should use this.

Once the image is created, you can check the partitions:

# fdisk -l sda.img
Disk sda.img: 698,7 GiB, 750156374016 bytes, 1465149168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x7c1e66cd
Device Boot   Start   End        Sectors  Size    Id Type
sda.img1 *    2048     499711     497664   243M    83 Linux
sda.img2      501758 1465147391 1464645634 698,4G  5  Extended
sda.img5      501760 1465147391 1464645632 698,4G  83 Linux

To access a partition, first attach it as a loopback device, for instance for sda5:

losetup -o $((501760*512)) /dev/loop20 sda.img

Where 501760 is the start of the partition from the fdisk output, and 512 the sector size.

Since the LVM metadata was corrupted, I was not able to activate the group after having unlocked the LUKS partition. I had to use testdisk to find the EXT4 partition within the LVM volume, using the “Analyze” command and the “Deeper search”.

# testdisk /dev/dm-3
Disk /dev/dm-3 - 749 GB / 698 GiB - 1464643312 sectors
Analyse sector      589824/1464643311: 00%
  Linux LVM2                     0 1464641535 1464641536
  ext4                        2048 1456383999 1456381952

From there I was able to access the ext4 partition directly, using a loopback device as above with an offset of 2048*512, run fsck.ext4 on it, and finally mount it.
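
For reference, this last step might look like this (a sketch; the loop device number is arbitrary, /dev/dm-3 is the unlocked LUKS device from above):

losetup -o $((2048*512)) /dev/loop21 /dev/dm-3
fsck.ext4 /dev/loop21
mount /dev/loop21 /mnt/tmp2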

Setting file modification from timestamp in MOV files

The MOV format allows storing metadata, among other things the timestamp when the video was shot. To set the file modification and creation time to the embedded timestamp, the following script comes in handy. It uses exiftool, which is best known for JPEG files but also works with MOV files.

#!/bin/bash

set -e

for i in "$@" ; do
    ls -l "$i"
    exiftool "-FileCreateDate<CreateDate" "-FileModifyDate<CreateDate" "$i" > /dev/null
    ls -l "$i"
done

Disabling the web search in Windows 10 start menu

By default, when you search in the Windows start menu, Windows searches not only for local files and applications, but also on the web. I don’t like that: first, because mixing local and online results confuses me more than it helps; second, for privacy reasons. Microsoft (or anyone, for that matter) doesn’t need to know what I am looking for on my computer, especially in real time.

Searching on the web revealed many how-tos, but most of them didn’t work on my recent version of Windows (20H2). I found this article, which provides details on a registry key that does the job. Here is a registry file to set the key easily.

  • Copy the following code:
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\SOFTWARE\Policies\Microsoft\Windows\Explorer]
"DisableSearchBoxSuggestions"=dword:00000001
  • Save it in a file called “DisableSearchBoxSuggestions.reg”
  • Double click on the file to open and import it

Creating an implib (name import library) on Windows

If you want to link an external DLL into your C/C++ application on Windows, you will need a “.lib” file as an intermediate. GCC (MinGW) allows linking the DLL directly, but most compilers, like Clang or Microsoft’s, need the implib.

If it is not available or provided by the vendor, it is easy to generate it from the DLL if you have the GNU toolchain installed, for instance from MSYS2:

set DLLNAME=dllname
gendef %DLLNAME%.dll
dlltool --dllname %DLLNAME%.dll --def %DLLNAME%.def --output-lib %DLLNAME%.lib
del %DLLNAME%.def

Fix timestamps on Windows

On Windows, if you try to copy data from, say, an NTFS partition to an exFAT partition, you might get strange error messages about an “invalid parameter”. The reason might be that one of the files has a timestamp too far in the past for an exFAT system, for instance 1970, the oldest possible date on an NTFS system.

Here is a PowerShell one-liner that fixes this for the last access time: if it lies more than 800 days in the past, it will be set to now. Adapt it to your own needs.

Get-ChildItem -Recurse -Force C:\Dir\* | Where-Object {$_.LastAccessTime -le (Get-Date).AddDays(-800)} | ForEach-Object {$_.LastAccessTime = (Get-Date)}
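
The same pattern works for the other timestamps, for instance for the modification time (a sketch; path and threshold are assumptions):

Get-ChildItem -Recurse -Force C:\Dir\* | Where-Object {$_.LastWriteTime -le (Get-Date).AddDays(-800)} | ForEach-Object {$_.LastWriteTime = (Get-Date)}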

Gerrit hook to notify Teamcity

By default, Teamcity polls the configured VCS to detect new changes, but it also supports notifications through the REST API. Using this is more efficient in terms of resources, and also reduces the latency until builds are queued. Here is a Gerrit hook (using the standard “hooks” plugin) that notifies Teamcity every time a patch set is updated or merged. Follow the inline documentation to set it up.

#!/bin/bash
# This script runs as "ref-updated" hook in Gerrit and notifies Teamcity
# when a reference is updated (new patch set, merge...)
#
# In the Teamcity WUI:
# - create a user if required, for instance, "teamcity-gerrit"
# - create a role for the integration, for instance "Gerrit integration"
# - give the following permissions to the role:
#     View project and all parent projects
#     View build configuration settings
# - assign the role to the "teamcity-gerrit" user for all required projects,
#   typically for "Root project"
# - create an access token for the user
#
# In this script:
# - set TC_SERVER to the hostname of the Teamcity server, used for the REST
#   requests
# - set TC_ACCESS_TOKEN to the token created above
# - set TC_VCS_PREFIX to the prefix used to access Gerrit from Teamcity,
#   for instance "ssh://gerrit.mycompany.com" or "https://gerrit.mycompany.com"
#
# On the Gerrit side:
# - make sure the "hooks" plugin is active and loaded. Using the SSH admin
#   interface:  ssh <gerrit-host> gerrit plugin ls
#      Name                           Version    Status   File
#      ------------------------------------------------------------
#      hooks                          v3.1.2     ENABLED  hooks.jar
# - save the script in "gerritsite/hooks" as "ref-updated"
# - test the hook by calling it manually.
#   Replace <project> by a real project name. The other values are irrelevant:
#     ref-updated --oldrev dummy --newrev dummy --refname dummy --project <project> --submitter dummy --submitter-username dummy
#   You should get the following response back:
#     Scheduled checking for changes for 1 VCS roots
# - test by pushing a commit. In case of issues, uncomment the "echo" command
#   below to get a log file
#
# Since Teamcity should increase the polling interval every time the hook runs,
# it is not necessary to change the default polling interval for the system or
# the single VCS roots.
TC_SERVER=
TC_ACCESS_TOKEN=
TC_VCS_PREFIX=
# Expected call:
#     ref-updated --oldrev ... --newrev ... --refname ... --project ... --submitter ... --submitter-username ...
PROJECT=$8
TC_VCS_LOCATOR="vcsRoot:(type:jetbrains.git,count:99999),property:(name:url,value:$TC_VCS_PREFIX/$PROJECT,matchType:contains,ignoreCase:true),count:99999"
# Uncomment following line for debugging
# echo "$*" >> /tmp/ref-updated.log
curl --header "Authorization: Bearer $TC_ACCESS_TOKEN" -X POST "https://$TC_SERVER/app/rest/vcs-root-instances/commitHookNotification?locator=$TC_VCS_LOCATOR"

Teamcity documentation: https://www.jetbrains.com/help/teamcity/configuring-vcs-post-commit-hooks-for-teamcity.html
Gerrit documentation: https://gerrit.googlesource.com/plugins/hooks/+/refs/heads/master/src/main/resources/Documentation/config.md and https://gerrit.googlesource.com/plugins/hooks/+/refs/heads/master/src/main/resources/Documentation/hooks.md

MP4 videos in Picasa

Picasa has long been discontinued by Google, who wants your data on their servers, but it still works fine, even on Windows 10. One issue though is that it doesn’t show or play most videos (like MP4 files) out of the box. An easy fix is to install the codec collection “K-Lite Codec Pack” from codecguide.com.

In case you need the installer, Google unfortunately does not offer it anymore, but you can find copies on various sites, for instance wiki.ordi49.fr. The file name is “picasa-3.9.141-setup.exe” and the MD5 checksum is “f5e535745f0e2140c31623df8f9ad746”.

Notepad++: clear prefilled search/replace text boxes

Notepad++ pre-fills the text boxes of the “Find” and “Replace” dialogs with previous values, which I find annoying. Here is an AutoHotkey script to clear them.

See also:
https://community.notepad-plus-plus.org/topic/12864/notepad-should-not-fill-the-search-field-with-default-text-when-search-window-was-already-open-and-filled/7
and
https://github.com/notepad-plus-plus/notepad-plus-plus/issues/3243

#If WinActive("ahk_exe notepad++.exe")
^f::
if WinExist("Find")
  ; Switch back and forth between editor and dialog
  if WinActive("Find")
    WinActivate, ahk_class Notepad++
  else
    WinActivate, Find
else if WinExist("Replace") {
  ; Switch from "Replace" to "Find"
  ; The following coordinates are system specific. Use the Window spy to find your own values
  ControlClick x33 y48, Replace ; click on "Find"
}
else {
  ; Open the dialog
  SendInput, ^f
  ; Clear pre-filled text boxes
  ; See also https://github.com/notepad-plus-plus/notepad-plus-plus/issues/3243
  WinWaitActive, ahk_class #32770 ahk_exe notepad++.exe,,1
  ControlSetText, Edit1,
}
return
^h::
if WinExist("Replace")
  ; Switch back and forth between editor and dialog
  if WinActive("Replace")
    WinActivate, ahk_class Notepad++
  else
    WinActivate, Replace
else if WinExist("Find")
  ; Switch from "Find" to "Replace"
  ; The following coordinates are system specific. Use the Window spy to find your own values
  ControlClick x90 y50, Find ; click on "Replace"
else {
  SendInput, ^h
  WinWaitActive, ahk_class #32770 ahk_exe notepad++.exe,,1
  ; Clear pre-filled text boxes
  ; See also https://github.com/notepad-plus-plus/notepad-plus-plus/issues/3243
  ControlSetText, Edit1,
  ControlSetText, Edit2,
}
return
#If

How to split an MKV file on the command line

It turns out it is really easy with mkvmerge from mkvtoolnix:

mkvmerge --split size:1000m input.mkv -o output.mkv

About -o, quoting the documentation:

It may contain a printf like expression ‘%d’ including an optional field width, e.g. ‘%02d’. If it does then the current file number will be formatted appropriately and inserted at that point in the filename. If there is no such pattern then a pattern of ‘-%03d’ is assumed right before the file’s extension: ‘-o output.mkv’ would result in ‘output-001.mkv’ and so on. If there’s no extension then ‘-%03d’ will be appended to the name.
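
mkvmerge can also split by duration instead of size, for instance into chunks of 30 minutes (a hedged example):

mkvmerge --split duration:00:30:00 input.mkv -o output.mkv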

If you use MacPorts to install mkvtoolnix, you might want to disable the Qt graphical interface, which is enabled by default, to reduce the dependencies: port install mkvtoolnix -qtgui

LibreOffice Writer: show and hide header and footer

Just like Word in Microsoft Office, LibreOffice Writer can hide the page headers and footers to increase the screen space available for viewing and editing the document content. Just like in Word, hiding and showing them again is controlled by double-clicking in the space between two pages. However, this only works in the “Single-page view”, not in the “Multiple-page view”, for whatever reason. See here for how to switch.

Lightroom 6 freezing on splash screen on macOS

Today Lightroom 6 was freezing on the splash screen every time I would try to start it on macOS. I tried every hint I could find online without any luck. In the end, I realized with “Instruments” from XCode that Lightroom is storing some data in “~/Library/Application Support/Adobe/Lightroom” in addition to the catalog and the preferences. I restored this directory from a TimeMachine backup, and Lightroom started again immediately.


Deleting outdated Teamcity builds using the REST interface

The cleanup rules of TeamCity are pretty limited. For instance, if you keep the N last builds on the default branch, it will also keep that many builds on every other branch, which is a problem when there is a branch for every change (workflows based on code reviews or merge requests).

Until TW-8717 is finally implemented, I wrote a PowerShell script that simply deletes all builds older than a specific date, as long as they are not pinned.

param (
       [String]$OlderThan = "20190501",
       [String]$TeamcitySessionId = "...",
       [String]$TeamcityHost = "teamcity.mydomain"
)

Function Get-Teamcity-Session()
{
  $cookie = New-Object System.Net.Cookie
  $cookie.Name = "TCSESSIONID"
  $cookie.Value = $TeamcitySessionId
  $cookie.Domain = $TeamcityHost
 
  $session = New-Object Microsoft.PowerShell.Commands.WebRequestSession
  $session.Cookies.Add($cookie);
 
  return $session
}
 
Function Get-Builds-To-Delete()
{
  # defaultFilter:false is required, otherwise Teamcity will return only builds on default branch
  $uri = "https://$TeamcityHost/httpAuth/app/rest/builds/?count=100000&locator=defaultFilter:false,pinned:false,state:finished,finishDate:(date:" + $olderThan + "T000000%2B0100,condition:before)"
  return (Invoke-RestMethod -WebSession (Get-Teamcity-Session) -Method Get -Uri $uri);
}

Function Delete-Build($id)
{
  $uri = "https://$TeamcityHost/httpAuth/app/rest/builds/?count=100000&locator=pinned:false,state:finished,id:$id"
  return (Invoke-RestMethod -WebSession (Get-Teamcity-Session) -Method Delete -Uri $uri);
}

$CsvFile = "Builds-To-Delete-$(get-date -f yyyy-MM-dd_HH_mm_ss).csv"
(Get-Builds-To-Delete).builds.build | Export-Csv -Delimiter ";" -Path $CsvFile

# Start $CsvFile

$builds = (Get-Builds-To-Delete).builds.build
$count = $builds.Count
$current = 0
foreach ($build in $builds) {
    $current++
    Write-Host "$current / $count"
    # $build
    $id = $build.id -as [int]
    if($build.state -ne "finished" ) {
        # Safety net:
        throw "Build $id did not finish";
    }
    if($build.pinned) {
        # Safety net:
        throw "Build $id is pinned";
    }

    if($build.branchName -and $build.branchName.StartsWith("release")) {
        Write-Host "Skipping release build $id"
        continue;
    }

    Delete-Build($id)
}
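Assuming the script is saved as delete-old-builds.ps1 (the name is arbitrary), a run could look like this, with the TCSESSIONID value copied from the cookie of a logged-in browser session:

.\delete-old-builds.ps1 -OlderThan "20190501" -TeamcitySessionId "0123456789ABCDEF" -TeamcityHost "teamcity.mydomain"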

Accept Outlook invitations automatically with custom response

I often need to accept invitations and set the corresponding event as private. This is very cumbersome in Outlook, therefore I wrote the following macro to do it for me:

Sub AcceptAndSetPrivate()
    Set CurrentItem = Application.ActiveExplorer.Selection.Item(1)
    Dim metAppt As AppointmentItem
    Set metAppt = CurrentItem.GetAssociatedAppointment(True)
    metAppt.Sensitivity = olPrivate
    metAppt.BusyStatus = olFree
    metAppt.Save
    Dim metResponse
    Set metResponse = metAppt.Respond(olMeetingAccepted, True)
    metResponse.Send
    CurrentItem.Delete
    metAppt.Display
End Sub

To call it easily, put it in the Quick Access bar. Once you press the ALT key, Outlook will show which key to press next, “3” in the screenshot below:

[Screenshot: Outlook Quick Access toolbar with key tips — 2019-05-19-outlook]

I also tried to set the “BusyStatus” to “Free” automatically, as you can see in the code, but for an unknown reason, it has no effect.

Posting HTML to WordPress.com

Posting HTML in a post on wordpress.com is still a nightmare; here is the recipe that will hopefully help me the next time I need it:

  • Switch to HTML mode
  • Add [code language="html"]
  • Encode your HTML code to replace HTML special characters by the corresponding HTML entities (there are many online converters, just search for “HTML encoder”); see the example below
  • Copy the encoded text in the post
  • Add [/code]
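For illustration, a hypothetical snippet like <p>Tom & Jerry</p> ends up in the post as:

[code language="html"]
&lt;p&gt;Tom &amp; Jerry&lt;/p&gt;
[/code]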

Note that you will have to copy the encoded HTML again if you edit the post, because the HTML entities will be interpreted and replaced by their clear-text equivalents.

What a mess. I wish there was an easier method.


Converting RSS to HTML

Firefox unfortunately removed support for RSS in version 64. I rely on RSS for multiple web pages I visit regularly, so I looked for alternatives. I was not convinced by the available Firefox plugins: they were either overkill for my use, not open source, or both. So I thought I could write a static local HTML page that would fetch the RSS, parse it, and generate a DOM. It turns out this is not that easy, because I couldn’t find any RSS parser in JavaScript: all the JavaScript libraries for RSS I found rely on external services from Google or other providers. This was not an option for me either. I finally recalled SimplePie, an RSS parser in and for PHP that I used in the past for web development. It turns out it is very easy to write an RSS to HTML converter with SimplePie:


<?php 
require_once 'autoloader.php'; 

// ini_set('display_errors', 1);
// ini_set('display_startup_errors', 1);
// error_reporting(E_ALL);


$url = "";
if(isset($_GET["url"])) {
    $url = $_GET["url"];
}
$sp = null;
if( ! empty($url) ) {
    $sp = new SimplePie();
    $sp->set_feed_url($url);
    $sp->enable_cache(false);
    $sp->strip_htmltags(false);

    $sp->init();
    $sp->handle_content_type();
}

?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<title><?php echo ( $sp != null ? $sp->get_title() : "RSS to HTML" ) ?></title>
</head>

<body>

    <form action="rss2html.php" method="GET">
         <p>RSS feed URL: <input type="text" name="url" size="100" value="<?php echo htmlentities($url); ?>" /></p>
    </form>

    <?php
        if ($sp != null) {
    ?>
        <div class="header">
            <h1><a href="<?php echo $sp->get_permalink(); ?>"><?php echo $sp->get_title(); ?></a></h1>
            <p><?php echo $sp->get_description(); ?></p>
        </div>
    <?php
        foreach ($sp->get_items() as $item):
    ?>
        <div class="item">
            <h2><a href="<?php echo $item->get_permalink(); ?>"><?php echo $item->get_title(); ?></a></h2>
            <p><?php echo $item->get_description(); ?></p>
            <p><small>Posted on <?php echo $item->get_date('j F Y | g:i a'); ?></small></p>
        </div>

    <?php
        endforeach;
        }
    ?>

</body>
</html>



Just download SimplePie, add this PHP script to it, and you’re good to go. The URL is passed as a GET parameter, so you can easily bookmark the whole address.
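The form above posts to “rss2html.php”, so save the script under that name next to SimplePie. A bookmarkable call then looks like this (host and paths are illustrative):

http://localhost/simplepie/rss2html.php?url=https://example.com/feed/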

Compiling QT 5.2 with MinGW 7.x

As usual, building Qt brought its load of challenges; here is what you need to know when building Qt 5.2 with MinGW 7.x.

1. Make sure you extract the source code and build it in a relatively short path, for instance C:\Qt, otherwise the build might fail with random “file not found” issues.

2. Make sure you have “python” available in your PATH. “configure” will not check this, and if it is missing, the build will fail with the following error:

python C:/.../qt/src/qtdeclarative/src/3rdparty/masm/create_regex_tables > RegExpJitTables.h
python C:/.../qt/src/qtdeclarative/src/3rdparty/masm/create_regex_tables > RegExpJitTables.h
mingw32-make[4]: *** [Makefile.Debug:568: RegExpJitTables.h] Error 1

3. In “qtbase/mkspecs/win32-g++/qmake.conf”, set QMAKE_CXXFLAGS as follows:

QMAKE_CXXFLAGS          = $$QMAKE_CFLAGS  -std=gnu++98

This fixes build errors in JavaScriptCore, which are due to the fact that the code base doesn’t build with the newer C++ standard that MinGW 7 uses by default.

4. In “qtwinextras/src/winextras/winextras.pro”, add -lgdi32 to LIBS_PRIVATE:

LIBS_PRIVATE += -lole32  -lgdi32 -lshlwapi -lshell32

This fixes the link errors like:

undefined reference to "__imp_CreateRectRgn"
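For reference, once these tweaks are in place, a typical configure invocation might look like the following; the flags and installation prefix are illustrative, adapt them to your needs:

configure.bat -opensource -confirm-license -platform win32-g++ -nomake examples -nomake tests -prefix C:\Qt\5.2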

MinGW/GCC: error: stdlib.h: No such file or directory in include_next

A common error with MinGW or GCC is the following:

C:\...\lib\gcc\x86_64-w64-mingw32\7.2.0\include\c++\cstdlib:75: error: stdlib.h: No such file or directory
at #include_next <stdlib.h>

This is typically due to the fact that something messed up the system include path with -isystem. Specifying some of the default paths again changes the search order, so that #include_next doesn’t find what it is supposed to.

If you use CMake, a workaround is to set CMAKE_C_IMPLICIT_INCLUDE_DIRECTORIES and CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES to the list of default system paths. CMake will then no longer pass them to the compiler, which will still use them, in the correct, hard-coded order.

Instead of hard-coding these paths in your CMakeLists.txt, you can retrieve them automatically from the C preprocessor:

if("${CMAKE_MINGW_IMPLICIT_INCLUDE_DIRECTORIES}" STREQUAL "")
    # Run the preprocessor in verbose mode on an empty input
    execute_process(
        COMMAND
            "${CMAKE_CXX_COMPILER}"
            "-E"
            "-Wp,-v"
            "-"
        INPUT_FILE "NUL" # Special Windows file, equivalent to /dev/null
        OUTPUT_VARIABLE _mingw_cpp_out # Capture stdout
        ERROR_VARIABLE _mingw_cpp_error # Capture stderr
    )
 
    # Create list of lines from stderr output:
    string(REGEX REPLACE ";" "\\\\;" _mingw_cpp_error "${_mingw_cpp_error}")
    string(REGEX REPLACE "\n" ";" _mingw_cpp_error "${_mingw_cpp_error}")
 
    # Look for this text block and gather the paths:
    #   #include <...> search starts here:
    #   C:/..../bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/include
    #   C:/..../bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/include-fixed
    #   C:/..../bin/../lib/gcc/x86_64-w64-mingw32/7.2.0/../../../../x86_64-w64-mingw32/include
    #   End of search list.
    set(_mingw_cpp_list)
    foreach(_mingw_cpp_line ${_mingw_cpp_error})
        if("${_mingw_cpp_line}" MATCHES "#include  search starts here:")
            # Block starts
            set(_mingw_cpp_state "ON")
        elseif("${_mingw_cpp_line}" MATCHES "End of search list.")
            # Block ends
            set(_mingw_cpp_state "OFF")
        elseif("${_mingw_cpp_state}")
            # Within block
            # Clean up and beautify the path
            string(STRIP "${_mingw_cpp_line}" _mingw_cpp_line)
            get_filename_component(_mingw_cpp_line ${_mingw_cpp_line} REALPATH)
            list(APPEND _mingw_cpp_list ${_mingw_cpp_line})
        endif()
    endforeach()

    # Set the list in the cache, so that we don't have to run the external process again

    set(CMAKE_MINGW_IMPLICIT_INCLUDE_DIRECTORIES ${_mingw_cpp_list} CACHE INTERNAL "List of MinGW system include paths")
endif()

list(APPEND CMAKE_C_IMPLICIT_INCLUDE_DIRECTORIES ${CMAKE_MINGW_IMPLICIT_INCLUDE_DIRECTORIES})
list(APPEND CMAKE_CXX_IMPLICIT_INCLUDE_DIRECTORIES ${CMAKE_MINGW_IMPLICIT_INCLUDE_DIRECTORIES})

Building 32bit applications with mingw-w64 and CMake

The mingw-w64 project provides ready-to-use packages with GCC for Windows. By default, the toolchain targets the 64-bit Windows architecture; 32-bit binaries can be built by passing the -m32 option on the command line.

The toolchain can be easily integrated with CMake; however, it gets more complicated when trying to build a 32-bit application from CMake. After some trial and error, reading the documentation and debugging the CMake modules, I found a simple solution that works for me:

    set(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -m32")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -m32")

    project(MyProject ...)

    set_property(GLOBAL PROPERTY FIND_LIBRARY_USE_LIB32_PATHS TRUE)

    find_library(gcc_lib libgcc_s_sjlj-1.dll)
    find_library(cpp_lib libstdc++-6.dll)
    find_library(pthread_lib libwinpthread-1.dll)

Some notes:

    • According to the CMake documentation, CMake should set “FIND_LIBRARY_USE_LIB32_PATHS” automatically because MinGW requires it, but as of now (mingw-7.2.0 and CMake 3.12), this is not the case. So set it explicitly. Without this, find_library() will happily find the 64-bit version of the libraries in the MinGW installation, causing linker errors.
    • I set CMAKE_C_FLAGS and CMAKE_CXX_FLAGS before the project() statement, because in project(), CMake looks for the compiler and retrieves the library locations by introspection. If the -m32 flag is not set by then, CMake may find and use the location of the 64-bit libraries, also causing linker errors later on.

Building protobuf with MinGW

Even if the documentation of the project doesn’t mention it, building protobuf with MinGW is easy when using CMake. The CMakeLists.txt is located in the “cmake” subdirectory.

I used the following versions:

  • MinGW 7.2.0 (64 bit)
  • CMake 3.12.1
  • Protobuf 3.6.1

Two workarounds are however required with these versions:

First, add “-DCMAKE_NO_SYSTEM_FROM_IMPORTED=ON” to the CMake command line to avoid the following error:

fatal error: stdlib.h: No such file or directory
#include_next <stdlib.h>

See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=70129 for more information
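For reference, the configuration and build steps could then look like this (generator and paths are illustrative):

cd protobuf-3.6.1
mkdir build && cd build
cmake -G "MinGW Makefiles" -DCMAKE_NO_SYSTEM_FROM_IMPORTED=ON ../cmake
mingw32-make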

Also, you will need to apply this patch to cmake/tests.cmake:

# Patch from commit a69dfe63bc26c12fd2786aec9239076997110315
# https://github.com/protocolbuffers/protobuf/commit/a69dfe63bc26c12fd2786aec9239076997110315#diff-f9c045cbe267fdd0dfff7a28d4b5365e
if(MINGW)
  set_source_files_properties(${tests_files} PROPERTIES COMPILE_FLAGS "-Wno-narrowing")

  # required for tests on MinGW Win64
  if (CMAKE_SIZEOF_VOID_P EQUAL 8)
    set(CMAKE_EXE_LINKER_FLAGS "${CMAKE_EXE_LINKER_FLAGS} -Wl,--stack,16777216")
    set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wa,-mbig-obj")
  endif()

endif()

Groovy: cloning nodes in an XML document

It’s pretty easy to clone nodes in an XML document using Groovy, once XmlParser is used instead of XmlSlurper:

import groovy.xml.XmlUtil

def original = """
<catalog>
   <book id="sample">
      <author>sample</author>
      <title>sample</title>
   </book>
</catalog>
"""

def catalog = new XmlParser().parseText( original )

def sampleBook = catalog.book.find { it.@id == "sample" }

catalog.remove(sampleBook)

3.times {
  def c = sampleBook.clone()
  c.@id = "$it"
  c.author[0].value = "Author $it"
  c.title[0].value = "Title $it"

  catalog.append(c)
}

println XmlUtil.serialize(catalog)


The result is:

<?xml version="1.0" encoding="UTF-8"?><catalog>
  <book id="0">
    <author>Author 0</author>
    <title>Title 0</title>
  </book>
  <book id="1">
    <author>Author 1</author>
    <title>Title 1</title>
  </book>
  <book id="2">
    <author>Author 2</author>
    <title>Title 2</title>
  </book>
</catalog>


Apache: undefined symbol: ap_proxy_location_reverse_map

If you get the following error when starting Apache:

apache2: Syntax error on line ...:
Cannot load mod_proxy_http.so into server:
mod_proxy_http.so: undefined symbol: ap_proxy_location_reverse_map

…then make sure that mod_proxy is enabled, and that it is loaded BEFORE mod_proxy_http. Apache doesn’t manage dependencies between modules, so they have to be enabled in the appropriate order.

You might want to disable the modules and re-enable them in the right order with a2enmod, or change the order in, for instance, /etc/sysconfig/apache2.
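On Debian-based systems, for instance, the following should restore a working order (on SUSE, adjust the module order in /etc/sysconfig/apache2 instead):

a2dismod proxy_http
a2enmod proxy
a2enmod proxy_http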

VMWare Fusion: the path “” is not valid path to the gcc binary.

A common error when installing the VMware tools in an Ubuntu guest is:

Searching for GCC...
the path "" is not valid path to the gcc binary. 
Would you like to change it? [yes]

In some cases, like when using an older version of Fusion with a newer version of Ubuntu, an easy workaround is to use the open-source version of the VMware tools provided by the distribution:

apt-get install open-vm-tools open-vm-tools-desktop

PHP not working in Apache virtual hosts

I just had a situation where PHP worked fine in the default/main site configured in Apache, but not in a virtual host. Instead of parsing/executing the PHP script, Apache would send its source code back to the browser, which would propose to download it.

I found various potential solutions online, like:

  • Checking if the PHP module was enabled (it was)
  • Setting “php_admin_flag engine” or “php_admin_value” to “on” (it didn’t help)

What helped was to explicitly enable the PHP handlers in the Directory section of the corresponding vhosts. On openSUSE Leap, the file /etc/apache2/conf.d/php7.conf does exactly that, so I could just include it:

<VirtualHost ...>
        DocumentRoot /some/path
        <Directory /some/path>
                Include /etc/apache2/conf.d/php7.conf
                Options ...
                AllowOverride ...
                Require all granted
        </Directory>
</VirtualHost>

Uploading a file with HTTP PUT in Groovy with Basic Auth

Dependencies:

dependencies {
   compile 'org.codehaus.groovy.modules.http-builder:http-builder:0.7.1'
   compile 'org.apache.httpcomponents:httpmime:4.5'
}

Code:

import groovyx.net.http.HTTPBuilder
import groovyx.net.http.Method
import org.apache.http.client.entity.EntityBuilder
import org.apache.http.util.EntityUtils


class HttpPut {
   File inputFile
   String url
   String username
   String password

   HttpPut withFile(File inputFile) {
       this.inputFile = inputFile
       return this
   }

   HttpPut withUrl(String url) {
       this.url = url
       return this
   }

   HttpPut withCredentials(String username, String password) {
       this.username  = username
       this.password = password
       return this
   }

   void put() {
       def http = new HTTPBuilder(url)

       http.auth.basic(username, password)

       http.request(Method.PUT) { request ->
           request.entity = EntityBuilder.create().setBinary(inputFile.bytes).build()
       }
   }
} 
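Usage is then a matter of chaining the builder methods (URL and credentials are placeholders):

new HttpPut()
    .withFile(new File("build/report.zip"))
    .withUrl("https://server.example.com/upload/report.zip")
    .withCredentials("user", "secret")
    .put()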

Gerrit: robust regular expression to create links from text automatically (commentlink)

When there is a link to a bug tracking ticket in a commit message, and Gerrit is already configured to turn ticket numbers into links automatically, the result is a mess. There has been a bug report about this for quite a while already, but no progress yet.

A workaround is to use a regular expression with a negative lookahead, to ignore ticket numbers that are followed by a double quote or by “</a>”, because they are most likely part of an already generated URL.

For Jira issue IDs, it would look like this:

[commentlink "bugtracking"]
       match = \\b([A-Z][A-Z0-9]+-\\d+)(?!(\"|</a>))
       link = https://jira/path/$1

If you need to troubleshoot this kind of issue: it is very cumbersome to test the regex directly in Gerrit, because you have to modify the config file, stop Gerrit, flush the caches and restart it. Since the regex syntax is that of JavaScript, you can use a tool like https://regex101.com. Just enter as test string a set of lines you want to match, another set that you don’t want to match, enable the “g” option, and experiment with the regular expression until it works as expected. Then you only need to escape the \ and the " characters and put it in the Gerrit config file.

Notepad++: automatic configuration of tabs vs. spaces

Notepad++ is a great text editor with lots of features out of the box, but there is one I particularly miss: automatically using tabs or spaces for indentation according to the content of an existing file when opening it.

Luckily enough, it’s easy to add this feature using the Python scripting plugin.

  • Install the Python scripting add on for Notepad++ with the plugin manager
  • Open C:\Program Files (x86)\Notepad++\plugins\PythonScript\scripts\startup.py
  • Append the code below at the bottom of the file
  • Save the file
  • Open Notepad++
  • Choose Plugins -> Python Script -> Configuration
  • Ensure Initialisation is set to ATSTARTUP and save
  • Restart Notepad++

from Npp import *

def indent_auto_detect(arg):
    for i in range(editor.getLineCount()):
        pos = editor.positionFromLine(i)
        indent = editor.getLineIndentPosition(i)-pos
        if indent > 0:
            if ord('\t') == editor.getCharAt(pos):
                console.write("Indentation: Tabs\n")
                editor.setUseTabs(True)
                return
            elif indent in [2, 3, 4, 8]:
                console.write("Indentation: %d spaces\n" % indent)
                editor.setUseTabs(False)
                editor.setIndent(indent)
                return

notepad.clearCallbacks([NOTIFICATION.BUFFERACTIVATED, NOTIFICATION.READY])
notepad.callback(indent_auto_detect, [NOTIFICATION.BUFFERACTIVATED])
notepad.callback(indent_auto_detect, [NOTIFICATION.READY])
console.write("Automatic indentation detection started\n")
indent_auto_detect(None)

The code comes from: https://gist.github.com/patstew/8dc8a4c0b816e2f33204e3e15cd5497e

Configuring Apache to serve multiple domains with a single SSL certificate

Here are some notes on how to configure Apache to serve multiple domains with a single SSL certificate. If using a single certificate is not an option, you will have to use SNI, which is not covered in this howto.

# Create root CA
openssl genrsa -out rootCA.key 2048
 
# Self sign the CA cert
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 1024 -out rootCA.pem

Create the configuration file of the certificate request for all domains (multi.conf):

[req]
distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no
 
[req_distinguished_name]
countryName = XY
stateOrProvinceName = XY
localityName = City
organizationName = My organization
organizationalUnitName = My unit
commonName = alias1.domain.com
 
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
 
[alt_names]
DNS.1 = alias1.domain.com
DNS.2 = alias2.domain.com

Be aware that the semantics of the fields in the configuration file change depending on the value of “prompt”. With “prompt = no”, countryName is the value for the country. Without “prompt”, it sets the label that will be displayed when the user is prompted, and a default value can be provided in “countryName_default”. Very confusing…

One of the aliases has to be specified as commonName and again as an alternate name, because in some cases only alternate names will be considered.

Now you can create the server key and the corresponding certificate:

openssl genrsa -out multi.key 2048
openssl req -new -out multi.csr -key multi.key -config multi.conf
openssl x509 -req -in multi.csr -CA rootCA.pem -CAkey rootCA.key -CAcreateserial -out multi.crt -days 500 -sha256 -extensions v3_req -extfile multi.conf

Note: the multi.conf file has to be used twice, once to create the request (2nd line), and again to create the certificate (3rd line).
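To double-check that both aliases made it into the certificate, you can inspect it:

openssl x509 -in multi.crt -noout -text | grep -A1 "Subject Alternative Name"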

And finally, use it in Apache:

NameVirtualHost *:443
 
SSLCertificateFile /root/ca/multi.crt
SSLCertificateKeyFile /root/ca/multi.key
 
 
<VirtualHost *:443>
    ServerName alias1.domain.com
    ...
</VirtualHost>

<VirtualHost *:443>
    ServerName alias2.domain.com
    ...
</VirtualHost>

Msys2/pacman: list dependencies with versions

Here is a one-liner to list all the dependencies for an Msys2 package, or for any pacman based system for that matter:

$ pactree -u mingw-w64-x86_64-gcc | xargs -r pacman -Si | gawk '/^Name *:/ {name=$3} /^Version *:/ {version=$3; printf "%s-%s\n",name,version}'
mingw-w64-x86_64-gcc-7.2.0-1
mingw-w64-x86_64-binutils-2.29.1-1
mingw-w64-x86_64-libiconv-1.15-1
mingw-w64-x86_64-zlib-1.2.11-1
mingw-w64-x86_64-bzip2-1.0.6-6
mingw-w64-x86_64-gcc-libs-7.2.0-1
mingw-w64-x86_64-gmp-6.1.2-1
mingw-w64-x86_64-mpc-1.0.3-2
mingw-w64-x86_64-mpfr-3.1.6-1
mingw-w64-x86_64-libwinpthread-git-5.0.0.4850.d1662dc7-1
mingw-w64-x86_64-crt-git-5.0.0.5002.34a7c1c0-1
mingw-w64-x86_64-headers-git-5.0.0.5002.34a7c1c0-1
mingw-w64-x86_64-isl-0.18-1
mingw-w64-x86_64-windows-default-manifest-6.4-3
mingw-w64-x86_64-winpthreads-git-5.0.0.4850.d1662dc7-1

Note: on my system, it works only for installed packages, even when using the “-s” option:

$ pacman -Si mingw-w64-i686-qtwebkit
Repository      : mingw32
Name            : mingw-w64-i686-qtwebkit
...

$ pactree -s mingw-w64-i686-qtwebkit
error: package 'mingw-w64-i686-qtwebkit' not found

I don’t know why.

Getting BLN to work with CyanogenMod / LineageOS

“Backlight Notification” (BLN) is a great feature of some Android kernels that allows using the backlight of the “Menu” and “Back” buttons to signal pending notifications, for phones that don’t have a dedicated notification LED.

There is an app called BLN control in the Play Store from the developer “neldar” that allows controlling this feature. However, if you try to use it on a recent version of CyanogenMod or LineageOS, it will report “This kernel does not support BLN”. This may not be true: the kernel might support it, like in many recent custom ROMs, but the app cannot configure it because of SELinux (Security-Enhanced Linux), a module that increases the security of Android but prevents BLN control from working.

One option (that I do not recommend) is to disable SELinux by setting its mode to permissive (“setenforce permissive” as root).

A better option, one that doesn’t compromise the security of the phone, is to enable BLN at each boot. For that, enable “adb” in the developer options of the phone, enable “root” access for adb as well, then connect to the phone with adb:

Computer$ adb shell
Phone$ su -
Phone# cat > /data/local/userinit.sh
echo 1 > /sys/devices/virtual/misc/backlightnotification/enabled
[Press CTRL-D]
Phone#

Now reboot the phone, and call or text your phone. The “menu” and “back” buttons should glow until you dismiss the notifications.

Don’t forget to disable the root access and adb in the developer options!

And if the directory “/sys/devices/virtual/misc/backlightnotification” is missing, then the kernel really doesn’t support BLN, sorry for you.

Please let me know in the comments if this worked for you (or not).

Perl’s “quote word” in Groovy

When it comes to embedding a long list of simple text strings in the code, Perl’s “quote word” is very handy:

my @l = qw(a b c d e f g h);

This is the closest equivalent I could find in Groovy:

def l = "a b c d e f g h".split("\\s+").findAll { it.length() > 0 }

You can also use a multiline string:

def l = """
a
b
c
d
e
f
g
h""".split("\\s+").findAll { it.length() > 0 }

The “findAll” avoids getting empty strings at the beginning or the end of the list in case of empty lines or leading and trailing spaces.
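Groovy’s String.tokenize() is arguably even closer to qw, since it splits on any whitespace and never returns empty strings:

def l = "a b c d e f g h".tokenize()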

Linux serial TTY hanging

Today, I had the problem that my serial TTY (/dev/ttyACM0, serial over USB) was apparently hanging when receiving data.

However, when more data was sent by the other side, the communication would eventually resume. So at first, I thought some kind of receive buffering was involved, but I was wrong. The reason was that the “icanon” option of the TTY was set. In this (canonical) mode, the bytes are interpreted in some way I don’t (want to) understand, which caused the delays in the data transmission. Disabling it fixed the issue.

# stty -F /dev/ttyACM0 -a
speed 9600 baud; rows 0; columns 0; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^A; eol = ; eol2 = ; swtch = ; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts
-ignbrk -brkint ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon ixoff -iuclc ixany -imaxbel -iutf8
-opost -olcuc -ocrnl -onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke

Disable icanon:

# stty -F /dev/ttyACM0 -icanon

Check again:


# stty -F /dev/ttyACM0 -a
speed 9600 baud; rows 0; columns 0; line = 0;
intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^A; eol = ; eol2 = ; swtch = ; start = ^Q; stop = ^S; susp = ^Z; rprnt = ^R;
werase = ^W; lnext = ^V; flush = ^O; min = 1; time = 0;
-parenb -parodd -cmspar cs8 hupcl -cstopb cread clocal -crtscts
-ignbrk -brkint ignpar -parmrk -inpck -istrip -inlcr -igncr -icrnl -ixon ixoff -iuclc ixany -imaxbel -iutf8
-opost -olcuc -ocrnl -onlcr -onocr -onlret -ofill -ofdel nl0 cr0 tab0 bs0 vt0 ff0
isig -icanon iexten echo echoe echok -echonl -noflsh -xcase -tostop -echoprt echoctl echoke
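If you want a fully byte-transparent channel, stty’s combination setting “raw” goes one step further and disables icanon along with most other input and output processing (note that echo has to be switched off separately):

# stty -F /dev/ttyACM0 raw -echo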

How to identify the end of lines used in a text file

It’s pretty easy to find files that have Windows end of lines (CRLF) with GNU grep:

grep -lUP '\r$'

And if you need to find files with Unix end of lines:

grep -lUP '[^\r]$'

But you may need more, for instance to find out if files have mixed end of lines. Then you should give “file” a try:

$ file mixed unix windows
mixed:   ASCII text, with CRLF, LF line terminators
unix:    ASCII text
windows: ASCII text, with CRLF line terminators

If you need even more information, for instance the number of CRLF and LF line endings in each file, then you can use the following C program (eol-id). It will tell you this:

$ ./eol-id mixed unix windows
mixed LF=3 CRLF=3 VERDICT:MIXED
unix LF=3 VERDICT:LF
windows CRLF=3 VERDICT:CRLF

Here is the code (eol-id.c):

#include <stdio.h>
#include <stdlib.h>

#define CR 0x0D
#define LF 0x0A

int readfile(char *filename) {
    FILE * fptr = fopen(filename, "rb"); // Read in binary mode

    if ( fptr == NULL ) {
        fprintf(stderr, "Failed to open %s\n", filename);
        exit(1);
    }

    int current;
    int previous = 0;
    long cr = 0;
    long lf = 0;
    long crlf = 0;
    long lfcr = 0;
    int result = 0;

    do {
        current = fgetc (fptr);
        switch (current) {
            case CR:
                if ( previous == LF ) {
                    lf--;
                    lfcr++;
                    previous = 0;
                }
                else {
                    cr++;
                    previous = current;
                }
                break;
            case LF:
                if ( previous == CR ) {
                    cr--;
                    crlf++;
                    previous = 0;
                }
                else {
                    lf++;
                    previous = current;
                }
                break;
            default:
                previous = current;
                break;
        }
    } while (current != EOF && ! result );

    fclose(fptr);

    printf("%s", filename);

    int n = 0;
    char *verdict = "NONE"; /* default, in case the file contains no line endings */
    if ( lf > 0 ) {
        printf(" LF=%ld", lf);
        verdict = "LF";
        n++;
    }
    if ( crlf > 0 ) {
        printf(" CRLF=%ld", crlf);
        verdict = "CRLF";
        n++;
    }
    if ( lfcr > 0 ) {
        printf(" LFCR=%ld", lfcr);
        verdict = "LFCR";
        n++;
    }
    if ( cr > 0 ) {
        printf(" CR=%ld", cr);
        verdict = "CR";
        n++;
    }
    if ( n > 1 ) {
        verdict = "MIXED";
    }
    printf(" VERDICT:%s\n", verdict);

    return result;
}

int main(int argc, char **argv) {
    int i;
    for ( i = 1 ; i < argc ; i++ )
        readfile(argv[i]);
    return 0;
}
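To build the tool, any C compiler will do, for instance:

gcc -Wall -o eol-id eol-id.c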

How to detect a transparent proxy with nmap

On some networks, the outbound traffic to web servers (ports 80 and 443) might be intercepted on the fly by a transparent proxy.
A simple way to try to detect such a proxy with nmap is to run the following command:


nmap -sT cn.pool.ntp.org -p 80

Starting Nmap 6.00 ( http://nmap.org )
Nmap scan report for cn.pool.ntp.org (202.112.29.82)
Host is up (0.00042s latency).
rDNS record for 202.112.29.82: dns1.synet.edu.cn
PORT   STATE SERVICE
80/tcp open http

Nmap done: 1 IP address (1 host up) scanned in 0.09 seconds

It tells nmap to initiate a standard TCP connection (-sT) with an NTP server that is far away from me, e.g. in China (cn.pool.ntp.org), on port 80 (-p 80).

In the output of nmap, we can see that:

  • the connection was successful
  • the reported network latency is only 0.42 ms (0.00042 s)

It’s certain in this case that there is a transparent proxy: the typical latency from Europe to China is about three orders of magnitude higher (roughly 370 ms, as the ping below shows, vs. 0.42 ms), and NTP servers typically don’t have their port 80 open.

The reason why this works is that the transparent proxy blindly intercepts connections to web servers, independently of their existence. Only when the client sends the HTTP headers does the proxy try to contact the remote server.

If you want to double-check, you can ping the same server from the command line, which will give you the latency using ICMP packets (instead of TCP):


$ ping 202.112.29.82
PING 202.112.29.82 56(84) bytes of data.
64 bytes from dns1.synet.edu.cn (202.112.29.82): icmp_seq=1 ttl=46 time=376 ms
64 bytes from dns1.synet.edu.cn (202.112.29.82): icmp_seq=2 ttl=46 time=378 ms

Compare this latency with the one from nmap.

You may also run the same nmap command from a host that is known not to be behind a proxy, and compare the results. If one nmap says the port is open and the other says it is closed, and no firewall comes into play, then it’s one more sign of a proxy.

So in this case, we are able to prove the existence of the proxy. But be aware that a negative result wouldn’t prove its absence: the proxy might behave in a way that is not detectable using this method, for instance if it contacts the target server before replying to the TCP connection request from the client.


What you should know before choosing JIRA

At my current work place, 2 years ago, we needed an application to track the bugs, features and other todo items for several of our projects. We chose JIRA for various reasons:

  • Large feature set
  • Pretty good end user interface
  • Proven stability and reliability
  • Supported by a solid company (Atlassian)

After 2 years of active use and scaling up both in terms of users and supported projects, JIRA does a pretty good job, however, there is one aspect that is such a pain that we would reconsider our choice if we had to do it again. If you are in the evaluation process and considering JIRA, you should definitely know about this…

During these 2 years of use, we found some shortcomings, like:

Key Resolution Summary Created Votes
JRA-3821 Unresolved Priorities per Project and Resolutions per Issue Type 27/May/2004 1948
JRA-1369 Unresolved Reduce JIRA email chatiness 25/Feb/2003 685
JRA-1991 Unresolved Custom fields for Projects 10/Jul/2003 443
JRA-3406 Unresolved Threaded Comments 16/Mar/2004 404
JRA-5006 Unresolved Allow users to watch a project 21/Oct/2004 232
JRA-5493 Unresolved Ability to add watchers during issue creation 14/Dec/2004 564
JRA-6798 Unresolved Allow admins to translate items configurable in the administration section 26/May/2005 265
JRA-8943 Unresolved WYSIWYG / Rich Text Editor 05/Jan/2006 390
JRA-14543 Unresolved Better support for reply emails from Outlook by mailhandlers 28/Feb/2008 122
JRA-22640 Unresolved Filter by fix version release date 29/Oct/2010 162
JRA-24907 Unresolved labels should be case insensitive 24/Jun/2011 420
JRA-28444 Fixed “Add a comment from a non quoted email body” does not set the Strip Quotes to be TRUE 31/May/2012 16
JRA-29069 Unresolved /rest/api/latest/user/search api doesn’t return all values if username is not specified 24/Jul/2012 47
JRA-29149 Won’t Fix Filter out inactive users in the Users list 30/Jul/2012 264
JRA-34423 Unresolved Add the ability to update issues via REST without sending notifications 21/Aug/2013 108
JRA-35449 Unresolved Translation of Custom Field Description 22/Oct/2013 39

At first, we found it great that Atlassian gives customers a chance to give feedback, and voted for the existing tickets. Since the number of votes was pretty high, we expected those issues to be addressed quickly. However, after a while, we realized that this was not going to happen. The only updates were pure PR. From my perspective, their updates can be summarized by: “We don’t plan to fix this in a timely fashion, so just deal with it. But please continue to provide feedback that we will kindly ignore.”

If you read the comments of these tickets, you will realize that we are not the only ones frustrated by this situation. Have a look at JRA-1369, JRA-24907 and JRA-14543.

Digging a bit deeper, a pattern emerges: there is a bunch of very old tickets with lots of votes. I think that the 2 following diagrams show well how bad the situation is:

[Diagram: votes vs. age per ticket (jira-all-tickets.xlsx)]

How to read this: the ticket JRA-3821 is almost 12 years old and has gathered 1948 votes (!) since its creation. The request is clearly legitimate, and you would actually assume that this has always been covered by JIRA. All 1948 voters probably did. Yet, the latest comment from Atlassian is:

Thank you to everyone who has voted or commented on this suggestion. […] Unfortunately, we are not planning on addressing this in the foreseeable future.

Which I personally interpret as:

“Dear 1948 customers, we have absolutely no interest in you or your needs. Deal with it.”

Another diagram showing a bigger picture:

[Diagram: total votes by ticket age (jira-all-tickets.xlsx)]

How to read this: the tickets that are 11 years old have gathered 11000 votes in total since their creation.

From this diagram, it’s clear that Atlassian has accumulated a huge backlog that they fail to process. So the bottom line for you if you consider using JIRA:

  • You should go through the tickets with the most votes and find out if you can live with them never being fixed (or evaluate potential 3rd party plugins)
  • If you find new shortcomings, you should not count on Atlassian to fix them in a timely fashion, however reasonable or obvious they sound, and despite the significant yearly support fees


16. Feb. 2016: compare this with how JetBrains does it with YouTrack…

17. May 2016: here are some alternatives to JIRA:

  • Assembla
  • Axosoft
  • BugZilla
  • Gemini
  • Jixee
  • YouTrack


Converting unsigned to signed integers (using Powershell or Excel)

Let’s assume you got unsigned 32-bit integers that actually represent 32-bit signed integers (using 2’s complement). How do you get back the original (negative) values?

The formula is simple:

 signed = (raw+2^31) % 2^32 - 2^31

In Excel:

=MOD(A1+2^31;2^32)-2^31

In Powershell (2.0):

Function Convert-UInt-To-SInt($raw)
{
  # Note: $input is a reserved automatic variable in PowerShell, hence $raw
  return ( $raw + [Math]::Pow(2, 31) ) % [Math]::Pow(2, 32) - [Math]::Pow(2, 31)
}

In Groovy:

 (raw + (1L << 31)) % ( 1L << 32) - ( 1L << 31 )
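As a quick sanity check of the formula: for raw = 4294967295 (0xFFFFFFFF), (4294967295 + 2^31) % 2^32 - 2^31 = 2147483647 - 2147483648 = -1, which is the expected two’s complement value.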

Zimbra: How to delete read-only events from a calendar

Today I needed to delete all the events from a calendar in Zimbra. It seemed straight-forward:

  • open the calendar
  • switch to the list view
  • set appropriate start date and end dates
  • select all
  • hit “Delete”

Unfortunately, Zimbra refused to proceed, because some of the events were “Read-only”. I didn’t find any way to find out which events are “Read-only”, let alone to change that.

However, I found a workaround which might be useful if you are in the same situation:

  • Create a new temporary calendar
  • Move all the events from the main calendar to the temporary one
  • Delete the temporary calendar

Shell, wc : getting progress in real time

On Unix/Linux, “wc” is a very useful tool to count the number of lines in a file or a stream. However, sometimes the file or stream is so big that it takes minutes or longer to get the final result. In such cases, you might want regular, real-time feedback about the progress. Here is a simple awk script that does just that, reporting the current line count (and line) every second, and finally the total number of lines:

awk 'BEGIN {T=0} (T!=systime()) { printf "%s %s\n",NR,$0 ; T=systime()} END { print NR}'

(Tested with GNU awk)

Android / CyanogenMod : moving contacts between accounts

Contacts in Android are stored in one or more address books. One of them is the “Local” address book.

I needed to move my local contacts to a remote account, using CyanogenMod 12. I found out it’s not obvious, and Google didn’t help. This looks like the most straight-forward method:

  • Start the Contacts app
  • First, export the contacts:
    • In the context menu, select “Contacts to display”
    • Select the source address book
    • In the context menu, select “Import/export”
    • Export to storage
  • Now delete all contacts in the source address book:
    • In the context menu, select “Delete”
    • In the context menu, select “All”
  • Now import your contacts into the target address book:
    • In the context menu, select “Contacts to display”
    • Select the target address book
    • In the context menu, select “Import/export”
    • Import from storage

Voilà!


Shell scripting: trimming text from start to end markers with sed or awk

Let’s assume you have a huge log file looking like this:

some more log
2015-09-07 12:10 some log
some more log
...
2015-09-07 12:11 some log
...
some more log
2015-09-07 12:15 some log
some more log

Let’s assume you are interested only in the part between 2015-09-07 12:10 and 2015-09-07 12:15.

Here is a sed script that will do the job:

sed -n '/2015-09-07 12:10/,/2015-09-07 12:15/p' file.log

Here is also an awk script that does the same job:

#!/bin/sh                                                                       

awk -v "FROM=$1" -v "TO=$2" '($0 ~ FROM) {i=1} ($0 ~ TO) {i=0} (i) {print $0}' $3 

Save it and call it like this:

trim "2015-09-07 12:10" "2015-09-07 12:15" file.log

You can even use regular expressions as markers. Note one small difference: the sed version prints the end marker line too, while the awk version stops just before it (i is reset to 0 before the line would be printed).

BTRFS: moving data between subvolumes efficiently

If you move files naively between BTRFS subvolumes like this:

cd /btrfs
mv volume1/dir volume2/dir

… the data will effectively be read from and written back to the physical storage, which can become a problem if you have many or big files, especially when the partition is encrypted.

Instead, you should use “reflinks”, effectively creating a lightweight copy of your data, e.g. a copy that initially shares the physical data of the original files. Only when you modify the original or the copy will the data be physically duplicated.

cd /btrfs
cp -pr --reflink=always volume1/dir volume2/dir
rm -rf volume1/dir

I usually prefer rsync over cp, because it can resume easily when aborted, and also because of the --progress option, but unfortunately, it doesn’t support reflinks yet, even though some work has been done on this.

OSX/Java: “To open … you need to install the legacy Java SE 6 runtime.”

Don’t follow Apple’s stupid advice to install Java 6, which has been deprecated for years; rather, follow these instructions:

sudo vi `/usr/libexec/java_home`/../info.plist

Change:

<key>JVMCapabilities</key>
 <array>
  <string>CommandLine</string>
 </array> 

To:

<key>JVMCapabilities</key>
 <array>
  <string>BundledApp</string>
  <string>CommandLine</string>
 </array>

“BundledApp” was enough for me, you may also use:

  • WebStart
  • Applets
  • JNI

Terratec Noxon iRadio “Track not found”

Since a few days, most internet radios won’t play anymore on my Terratec Noxon iRadio. It looks like the Terratec servers are down. There are also problems with other streams (for instance the French radio France Inter) that play fine on a computer. After a session of troubleshooting and network capturing, I found a workaround that works well for me:

  • Go to the web page of the radio you want to listen to
  • Download the PLS or M3U file of the radio. These are typically reachable by choosing an alternate player to the web player, like WinAmp, iTunes or VLC
  • Open this file with a text editor
  • Copy the URL of the MP3 stream
  • Open the web interface of your iRadio
  • Go to favorites
  • Enter a name, and the URL of the MP3 stream
  • Add the favorite
  • Play it

The favorites page is really, really slow on my device. Pages take 30 seconds or more to refresh. I don’t know why; the workaround is to be patient 😉

Archiving a WordPress blog in a human-readable form

When a blog must be taken offline, it is still nice to keep a copy in an easily readable form. I gave wget and its mirroring option a try, but I wasn’t happy with the results. There is a much better approach if the blog contains mainly text posts. Just set the number of posts per page in the section “Settings/Reading” of the administration to the total number of posts on the blog (or more), then open the main page of the blog with your browser (Firefox in my case), and save it as a complete page. This will keep the CSS and embedded images.
Advantages:

  • really simple
  • keeps the layout and design of the blog

Drawbacks:

  • ignores the blog pages
  • ignores the linked media

Calling Git from Powershell

Git uses UTF-8 by default in its output, but Powershell typically uses UTF-16 Little Endian. So if you try to get information from the Git command line directly, you will run into encoding issues.

You can however tell Powershell to decode the Git output using [Console]::OutputEncoding. Here is the code I use:

function rungit () {
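    # Note: $GIT (the path to the git executable) and the "error" helper used
    # below are assumed to be defined elsewhere in the calling script or profile.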
    $repopath = $args[0]
    $gitargs = $args[1..($args.count-1)]
 
    cd $repopath
   
    $enc = [Console]::OutputEncoding
   
    # Git uses UTF-8 encoding, but Powershell expects UTF-16 by default
    # Let's switch temporarily to UTF-8
    # (against expectation, OutputEncoding defines also the encoding of the stdout we read)
    [Console]::OutputEncoding = [text.encoding]::utf8
   
    Write-Host "Calling: " $GIT $gitargs
   
    $output=(&$GIT $gitargs) | Out-String
   
    [Console]::OutputEncoding = $enc
   
    if ($LastExitCode -ne 0) {
        Write-Host $output
        error "Git command failed: $GIT $gitargs (in path: $repopath)"
    }
   
    return $output
}

Git: capitalization of file names and name conflicts

File and directory names are case sensitive in Git, but not on a typical standard file system on Windows. This can create a tricky situation if two files have names that differ only in their capitalization in Git. The most obvious symptom can be observed when you check them out on Windows: Git will write them at the same location. More precisely, Git will overwrite the first one with the second one. If they have different contents, Git will think that the first file is modified, and report this in “git status”. “git checkout” and “git reset” won’t help. The state will always stay “modified”.

How to fix conflicts

In most cases, this situation is not wanted, e.g. the 2 files in Git should actually be one, and the duplication is unintentional. It’s easy to fix the conflict on the command line (with Git bash for instance).

git rm --cached myfile.txt
git rm --cached MYFILE.TXT

Now put the version you want in “MyFile.txt”, or whatever name you want, and commit it:

git add MyFile.txt
git commit

How to find conflicts

Here is a quick and dirty way to find conflicting files:

git ls-files | tr 'A-Z' 'a-z' | sort | uniq -d | xargs -r git ls-files

It reports conflicts with the following format:

 myfile.txt
 MYFILE.TXT
 x.txt
 X.TXT

To find directories with conflicting names, use:

git ls-files | sed -E 's/\/[^/]+$//' |  sort | uniq | tr 'A-Z' 'a-z' | sort | uniq -d

It will report each conflict with a single name (all lower case), but it should be easy to find the culprits with “git ls-tree”.

There is an alternate method for files. It’s slower, but cleaner and more reliable:

git ls-files . | xargs -n 1 git ls-files  | sort | uniq -d

It supports blanks in file names and special chars, and will find conflicts caused by ANY file system limitations, not only the capitalization (I am thinking charset issues…).
It’s quite slow, however, because it starts a Git process for every file in the repository. You may want to disable your virus scanner before starting (a 4x speed-up in my case).

Other methods, like working with the inodes, are not reliable on Windows, they report false positives.

How to create conflicts

As a bonus, here are a few way to create this situation:

  • create commits in an environment where file names are case-sensitive, for instance Linux, and check them out on Windows
  • from a commit containing myfile.txt, merge from a commit containing MYFILE.TXT
  • same thing with cherry pick

There are probably other ways, let me know if you find any.

Getting information about USB and other devices on Windows with C#

I was looking for a way to get information about all connected devices (USB and otherwise) with C#. It turned out it’s not that easy to find out how, considering the following requirements:

  • No admin rights required
  • Return all devices, including those connected via USB hubs or additional USB cards
  • Only standard Windows API

But the solution itself is pretty simple:

using System.Collections.Generic;
using System.Management; // add a reference to the System.Management assembly

    public class WinDevices
    {
        static public List<DeviceInfo> GetUSBDevices()
        {
            List<DeviceInfo> devices = new List<DeviceInfo>();
 
            ManagementObjectCollection collection;
            using (var searcher = new ManagementObjectSearcher(@"Select * From Win32_PnPEntity"))
                collection = searcher.Get();
 
 
            foreach (var device in collection)
            {
                var deviceInfo = new DeviceInfo();
                deviceInfo.DeviceID =       (string)device.GetPropertyValue("DeviceID");
                deviceInfo.PNPDeviceID =    (string)device.GetPropertyValue("PNPDeviceID");
                deviceInfo.Description =    (string)device.GetPropertyValue("Description");
                deviceInfo.Name =           (string)device.GetPropertyValue("Name");
                deviceInfo.Caption =        (string)device.GetPropertyValue("Caption");
                deviceInfo.Service =        (string)device.GetPropertyValue("Service");
                devices.Add(deviceInfo);

                // Other properties supported by Win32_PnPEntity
                // See http://msdn.microsoft.com/en-us/library/aa394353%28v=vs.85%29.aspx
                //var keys = new string[] {
                //        "Availability",
                //        "Caption",
                //        "ClassGuid",
                //        "CompatibleID[]",
                //        "ConfigManagerErrorCode",
                //        "ConfigManagerUserConfig",
                //        "CreationClassName",
                //        "Description",
                //        "DeviceID",
                //        "ErrorCleared",
                //        "ErrorDescription",
                //        "HardwareID[]",
                //        "InstallDate",
                //        "LastErrorCode",
                //        "Manufacturer",
                //        "Name",
                //        "PNPDeviceID",
                //        "PowerManagementCapabilities[]",
                //        "PowerManagementSupported",
                //        "Service",
                //        "Status",
                //        "StatusInfo",
                //        "SystemCreationClassName",
                //        "SystemName"
                //};

            }
 
            collection.Dispose();
            return devices;
        }
 
        public class DeviceInfo
        {
            public string Name              { get; set; }
            public string DeviceID          { get; set; }
            public string PNPDeviceID       { get; set; }
            public string Description       { get; set; }
            public string Caption           { get; set; }
            public string Service           { get; set; }
        }
    }
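A minimal usage sketch with the class above:

    foreach (var d in WinDevices.GetUSBDevices())
        System.Console.WriteLine(d.Name + " [" + d.DeviceID + "]");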
 

Creating hyperlinks from text in an Office document

I needed to add hyperlinks to specific text patterns in a Microsoft Office document. As usual, at least for me, it ended up in a painful trial-and-error session of VBA programming. So here is the final script, for the record. It will turn any text like “JIRA-123” into a hyperlink. Modify it to your own needs.

Sub AddLinks()
With Selection.Find
         .Text = "JIRA\-[0-9]@>"
         .Replacement.Text = ""
         .Forward = True
         .Wrap = wdFindContinue
         .Format = False
         .MatchCase = True
         .MatchWholeWord = True
         .MatchWildcards = True
         .MatchSoundsLike = False
         .MatchAllWordForms = False
         .Execute
    End With
    Do While Selection.Find.Found
      URL = "https://jira.domain.com/browse/" & Selection.Text
      ActiveDocument.Hyperlinks.Add Anchor:=Selection.Range, Address:=URL, TextToDisplay:=Selection.Text
      Selection.End = Selection.End + 1
      Selection.Collapse wdCollapseEnd
      Selection.Find.Execute
    Loop
End Sub
 

TeamCity : Propagating parameters to snapshot dependencies

At $DAYWORK, we are using TeamCity to automate our release process. For each release, we have a set of independent builds. For the example, we will call them “Windows”, “Mac” and “Linux”. They are built from the same source code, and have the same version string for a given release.
On a release day, we want to press the “Run” button on a “Release” build in TeamCity, enter the version string (like v1.2), and from then on, the following happens automatically:

  •   “Windows” is triggered with the version string as parameter
  •   “Mac” is triggered with the version string as parameter
  •   “Linux” is triggered with the version string as parameter
  •   All builds use the same source code, as selected
  •   If one of the build fails, “Release” fails too

Most of this is easy. “Release” has “Windows”, “Mac” and “Linux” as snapshot dependencies, and a “version_string” parameter of type “prompt”.

The problem is now: how to pass the version string entered by the user from “Release” to the dependencies ? TeamCity supports passing the parameters between dependencies via the dep.* parameters, but only in the other direction.

I contacted the support from JetBrains, and while TeamCity doesn’t support this use case directly, there is a workaround, documented here for the record.

The trick is to separate the preparation from the execution. There are 2 builds in addition to the 3 “real” ones:

  • Initialize
  • Execute

The role of “Initialize” is to capture:

  • the version string
  • the revisions to use from the VCS root

While “Execute”s role is to actually execute the 3 builds.

“Initialize” has a build parameter “version_string”, of kind “prompt”, that cannot be empty, and its VCS roots are the ones used by the 3 builds.

The 3 builds have a snapshot dependency to “Initialize”. That allows them to get the right revisions, and access to the version string through “dep.Initialize.version_string”.

“Execute” has snapshot dependencies to the 3 builds and “Initialize”.

With this configuration, you can start “Initialize” manually and enter the version_string. That will not trigger the 3 builds; you have to do that manually, by promoting the “Initialize” build to “Execute”. That will trigger the builds with the right revision and version_string.
But it’s easy to automate this: add a “Finish Build Trigger” in “Execute”, watching “Initialize”.

Note that for this exact use case (release with a specific version string), you could also use an approach where the version string is not entered manually but retrieved automatically from a branch name, as described here.

Lightroom (dynamiclinkmediaserver) taking 400% of the CPU

I had the problem for some time that Lightroom, or more specifically its process dynamiclinkmediaserver, took 400% of the CPU (that is, all 4 available cores). Even after waiting several hours, that wouldn’t change; it seemed to be stuck in an endless loop of some kind. Killing the process didn’t help, and Google didn’t either (hence this post).

I created a new catalog and imported the old one. Since then the problem seems to be gone. You should try this if it happens to you. And please post your experience here as a comment to build up the knowledge on this issue!

How to extract the JAR dependencies from a Maven project using M2E

If you need, for one reason or another, to extract all the JARs your Maven project depends on, and you happen to be using M2E, use the following snippet in your pom.xml:

<build>
  <plugins>
    <plugin>
      <artifactId>maven-dependency-plugin</artifactId>
      <executions>
       <execution>
         <phase>install</phase>
         <goals>
           <goal>copy-dependencies</goal>
         </goals>
         <configuration>
           <outputDirectory>${project.build.directory}/lib</outputDirectory>
         </configuration>
       </execution>
      </executions>
    </plugin>
  </plugins>
  <pluginManagement>
    <plugins>
      <plugin>
       <groupId>org.eclipse.m2e</groupId>
       <artifactId>lifecycle-mapping</artifactId>
       <version>1.0.0</version>
       <configuration>
         <lifecycleMappingMetadata>
           <pluginExecutions>
            <pluginExecution>
              <pluginExecutionFilter>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-dependency-plugin</artifactId>
                <versionRange>[2.0,)</versionRange>
                <goals>
                 <goal>copy-dependencies</goal>
                </goals>
              </pluginExecutionFilter>
              <action>
                <execute />
              </action>
            </pluginExecution>
           </pluginExecutions>
         </lifecycleMappingMetadata>
       </configuration>
      </plugin>
    </plugins>
  </pluginManagement>
</build>
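
With this in place, a regular build is enough to collect the JARs; given the outputDirectory above, they should land in target/lib:

mvn clean install
ls target/lib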

Analyzing a build system on Windows with the Process Monitor and Groovy

It’s often a challenge to understand how a big software stack is built. To help me with this task on a new project, I did the following:

  • Start recording the system events using the Process Monitor. This includes which processes are started, by which parent process, and which files they access
  • Run the build
  • When the build is finished, export the results from the Process Monitor in XML
  • Parse the XML
  • Analyse the data and print the results

You will find below a Groovy script that does the parsing and the analysis. For a C/C++ program, the output could look like:

build.exe
  codegenerator.exe ...
  codegenerator.exe ...
  gmake.exe ...
    compiler.exe ...
    compiler.exe ...
    linker.exe
  gmake.exe ...
    compiler.exe ...
    compiler.exe ...
    compiler.exe ...
    compiler.exe ...
    linker.exe ...
  cp.exe ...

The more complex the project, the more interesting the output, of course.

Here are some possible command lines:

groovy procmon-parsing.groovy --help
groovy procmon-parsing.groovy --xml Logfile.XML
groovy procmon-parsing.groovy --xml Logfile.XML -p ProcessIndex,ProcessName,CommandLine -r build.exe

And here is the code. It’s my first Groovy program, and kind of a quick hack.

import javax.xml.parsers.SAXParserFactory
import org.xml.sax.helpers.DefaultHandler
import org.xml.sax.*

class Process {
    def ProcessIndex;
    def ProcessId;
    def ParentProcessId;
    def ParentProcessIndex;
    def CreateTime;
    def FinishTime;
    def ProcessName;
    def ImagePath;
    def CommandLine;
}

class Event {
    def ProcessIndex;
    def Time_of_Day;
    def Process_Name;
    def PID;
    def Operation;
    def Path;
    def Result;
    def Detail;
}

class RootHandler extends DefaultHandler {
    XMLReader reader;
    def objectsByType = [:]

    RootHandler(XMLReader reader) {
        this.reader = reader;
    }

    void startElement(String uri, String localName, String name, Attributes attributes) throws SAXException {
        if (name.equals("process")) {
            reader.setContentHandler(new ProcessHandler(reader, this, name, new Process()));
        }
        else if (name.equals("event")) {
            reader.setContentHandler(new ProcessHandler(reader, this, name, new Event()));
        }
    }
}

class ProcessHandler extends DefaultHandler {
    XMLReader reader;
    RootHandler parent;
    Object object;
    StringBuilder content;
    String elementName

    ProcessHandler(XMLReader reader, RootHandler parent, String elementName, Object object) {
        this.reader = reader;
        this.parent = parent;
        this.content = new StringBuilder();
        this.elementName = elementName
        this.object = object;
        if ( ! parent.objectsByType[elementName] )
            parent.objectsByType[elementName] = []
    }

    void characters(char[] ch, int start, int length) throws SAXException {
        content.append(ch, start, length);
    }

    void startElement(String uri, String localName, String name, Attributes attributes) throws SAXException {
        content.setLength(0);
    }

    void endElement(String uri, String localName, String elementName) throws SAXException {
        if (elementName.equals(this.elementName)) {
            parent.objectsByType[elementName].add(this.object)
            // Switch handler back to our parent
            reader.setContentHandler(parent);
        }
        // Look the property up on the actual object type (Process or Event)
        else if ( this.object.metaClass.getMetaProperty(elementName) ) {
            def value = content.toString()
            try {
                // Convert value to integer if possible
                value = value.toBigInteger()
            }
            catch(Exception e) {
            }
            this.object.setProperty(elementName, value);
        }
    }
}

def scriptName = new File(getClass().protectionDomain.codeSource.location.path).name

def cli = new CliBuilder(
   usage: "$scriptName -x <xmlfile> [options]",
   header: '\nAvailable options (use -h for help):\n')
import org.apache.commons.cli.Option

cli.with
{
   h(longOpt: 'help', 'Help', args: 0, required: false)
   p(longOpt: 'property', 'Properties to print out, comma-separated', args: Option.UNLIMITED_VALUES, valueSeparator: ',')
   x(longOpt: 'xml', 'XML file from ProcessMonitor', args: 1, required: true)
   r(longOpt: 'rootproc', 'Name of the process to start at', args: 1, required: false)
}
def options = cli.parse(args)
if (!options) return
if (options.h) {
    cli.usage()
    System.exit(0)
}

propertiesToPrint = (options.ps && options.ps.size() > 0 ? options.ps : ["ProcessName"])
rootProcessName = (options.r ? options.r : null)

def reader = SAXParserFactory.newInstance().newSAXParser().XMLReader
def handler = new RootHandler(reader)
reader.setContentHandler(handler)

InputStream inputStream = new FileInputStream(new File(options.x));
InputSource inputSource = new InputSource(new InputStreamReader(inputStream));

reader.parse(inputSource)

processesByIndex = [:]
handler.objectsByType['process'].each{
     processesByIndex[it.ProcessIndex] = it
}

def rootProcesses = []
processesByIndex.each {
    index = it.key
    process = it.value
    isRoot = false
    if ( rootProcessName ) {
        if ( process.ProcessName.equals(rootProcessName) ) {
            isRoot = true
        }
    }
    else if ( ! processesByIndex[process.ParentProcessIndex] )
        isRoot = true
    if ( isRoot )
        rootProcesses.add(process.ProcessIndex)
}

def printProcessLine(process, depth) {
    prefix = "  ".multiply(depth)
    fields = propertiesToPrint.collect { process.getProperty(it) }
    fieldsStr = fields.join("\t")
    println prefix + fieldsStr
}

def printProcessTreeRecursively(index, depth) {
    process = processesByIndex[index]
    printProcessLine(process, depth )
    depth++
    processesByIndex.each {
        if ( it.value.ParentProcessIndex == index )
            printProcessTreeRecursively(it.value.ProcessIndex, depth)
    }
}

rootProcesses.each {
    printProcessTreeRecursively(it, 0)
}

gmake 4.0 for Windows

If you want to use gmake 4.0 under Windows, you have 2 options:

1. Cygwin

This variant is modified to integrate into the Cygwin (i.e. UNIX-like) environment, and will for instance expect UNIX file paths. If you still want to use it, just copy make.exe from a Cygwin installation, along with the few Cygwin DLLs Windows will complain about if they are missing.

2. Pure GNU Make

This version is available only as source code, but a Visual Studio project comes along, so it’s trivial to build.
HOWEVER, the release 4.0 is broken under Windows. It can produce some very weird and unexplainable behaviors.
Therefore, you should get the source code with Git and forget about the 4.0 archive files. Make sure you use at least the commit 87e5b64f419c4873e8340dc71d5553949157601c.

See also: http://lists.gnu.org/archive/html/help-make/2013-12/msg00015.html
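
For reference, getting a suitable source state could look like this (the clone URL is the GNU Savannah one; double-check it if it doesn’t work):

git clone git://git.savannah.gnu.org/make.git
cd make
# Check that the fixing commit is an ancestor of HEAD:
git merge-base --is-ancestor 87e5b64f419c4873e8340dc71d5553949157601c HEAD && echo OK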

Rewriting the “From” address with Exim (for local users)

Lately, the mail relay I use stopped accepting the “From” address my Debian system was sending by default for the cron job results, something like:

root@mydomain.fr

I don’t have any MX records for “mydomain.fr”, nor a mail server for it, even locally.

Exim is configured to relay all non-local emails to a smart host, and I had some entries in

/etc/aliases

to forward the emails for root and some other users to my real mailboxes, e.g. for cron jobs. That worked fine for the “To” field, but not the “From”.

I started to look at Exim’s rewriting rules, but there is a much more straightforward solution: Exim supports the /etc/email-addresses file, which covers exactly this use case. Check it out:

man etc-email-addresses
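
The format is one “local-user: replacement-address” entry per line, for instance (the addresses are placeholders):

root: me@example.com
olivier: me@example.com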

jmap throwing RuntimeException about unknown CollectedHeap type

Trying to find out the heap space usage of a Java process with jmap on a 64-bit Red Hat machine, I got the following error message:

Heap Configuration:
  MinHeapFreeRatio = 40
  MaxHeapFreeRatio = 70
  MaxHeapSize      = 6442450944 (6144.0MB)
  NewSize          = 1310720 (1.25MB)
  MaxNewSize       = 17592186044415 MB
  OldSize          = 5439488 (5.1875MB)
  NewRatio         = 2
  SurvivorRatio    = 8
  PermSize         = 21757952 (20.75MB)
  MaxPermSize      = 174063616 (166.0MB)
  G1HeapRegionSize = 0 (0.0MB)

Heap Usage:
Exception in thread "main" java.lang.reflect.InvocationTargetException
       at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
       at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
       at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
       at java.lang.reflect.Method.invoke(Method.java:606)
       at sun.tools.jmap.JMap.runTool(JMap.java:197)
       at sun.tools.jmap.JMap.main(JMap.java:128)
Caused by: java.lang.RuntimeException: unknown CollectedHeap type : class sun.jvm.hotspot.gc_interface.CollectedHeap
       at sun.jvm.hotspot.tools.HeapSummary.run(HeapSummary.java:146)
       at sun.jvm.hotspot.tools.Tool.start(Tool.java:221)
       at sun.jvm.hotspot.tools.HeapSummary.main(HeapSummary.java:40)

The reason was not obvious and Google didn’t help. In the end, I found out I was just missing the following package:

java-1.7.0-openjdk-debuginfo.x86_64

After installing this package, jmap worked as expected.
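
For the record, on Red Hat systems such debuginfo packages usually come from a separate debuginfo repository. With yum-utils installed, something like this should pull it in:

sudo debuginfo-install java-1.7.0-openjdk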

String quoting issues in Jenkins using the Gerrit and Gradle plugins

At $CURRENTJOB, I just faced a problem using the Gerrit and Gradle plugins in Jenkins.

The problem is that the Gerrit plugin defines build parameters which are passed to the Gradle plugin. Some of them contain the full name and email address of uploaders, committers and the like, in the following format:

-DGERRIT_PATCHSET_UPLOADER="John Doe <john.doe@domain.com>"

The Gradle plugin tries to pass this to Gradle on the command line, but because of the complexity (quotes, less-than and greater-than signs…), it typically ends up in a mess, at least on Windows.

Here are some other users confronted with the same issue:

“gluck” provides a fix for the Gradle plugin, but it doesn’t seem to have made its way into the mainline:
https://github.com/jenkinsci/gradle-plugin/pull/14

The solution for me has been the option “Do not pass compound ‘name and email’ parameters” of the Gerrit plugin from macbutch:
https://github.com/jenkinsci/gerrit-trigger-plugin/pull/41

As he says himself, it’s a workaround, but at least it works. This option is hidden in the “Advanced” options of the Gerrit settings in the job configuration.


Create list of recent contributors of a MediaWiki instance

For one of the wikis I maintain, we needed to contact the people who contributed less than “X” days ago. I quickly checked the MediaWiki REST-based API, but it didn’t seem easy or feasible that way, so I did it in SQL:

select
  u.user_email as email,
  max(r.rev_timestamp) as last_change
from revision r, user u
where
  r.rev_user = u.user_id
  and
  r.rev_timestamp > '20130101000000' -- placeholder cutoff (MediaWiki timestamp); adjust it, or drop the condition and filter later in Excel
group by u.user_email;
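
To get the result into a file Excel can open, you can run the query in batch mode, for instance (database name, credentials and file names are placeholders):

mysql -u wikiuser -p wikidb --batch < recent-contributors.sql > contributors.tsv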

Import the result into Excel and transform the MediaWiki timestamp into an Excel date using the following formula:

=DATEVALUE(MID(B2,1,4)&"/"&MID(B2,5,2)&"/"&MID(B2,7,2))

From that point on, it’s trivial to compute the age in days, sort and export.

How to generate a list of dates from the shell (bash)

Here is a shell script that will generate the dates of the Mondays within the given range. It requires bash and the GNU version of the date command (gdate in my case). Currently, it only works if the date format allows comparing dates as strings, but you can easily adapt it if required.

#!/bin/bash

DATE=gdate
FORMAT="%Y-%m-%d"
start=`$DATE +$FORMAT -d "2013-05-06"`
end=`$DATE +$FORMAT -d "2013-09-16"`
now=$start
while [[ "$now" < "$end" ]] ; do
  echo "$now"
  now=`$DATE +$FORMAT -d "$now + 1 week"`
done
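
Assuming the script is saved as mondays.sh, the output should look like this:

$ bash mondays.sh
2013-05-06
2013-05-13
2013-05-20
...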

Simple user management for Gerrit

UPDATE: The script is now part of the Gerrit code base: https://gerrit-review.googlesource.com/#/c/40480/

Gerrit has advanced authentication mechanisms (LDAP, HTTP based…), but unfortunately none simple enough to be convenient for basic use cases and test purposes.
Here is a Perl script that starts a simple LDAP service doing exactly what Gerrit expects, no more, no less. See the inline help for installation and usage.

Let me know your feedback or suggestions in the comments.

#!/usr/bin/env perl

# Fake LDAP server for Gerrit
# Author: Olivier Croquette <ocroquette@free.fr>
# Last change: 2012-11-12
#
# Abstract:
# ====================================================================
#
# Gerrit currently supports several authentication schemes, but 
# unfortunately not the most basic one, e.g. local accounts with
# local passwords.
#
# As a workaround, this script implements a minimal LDAP server
# that can be used to authenticate against Gerrit. The information
# required by Gerrit relative to users (user ID, password, display
# name, email) is stored in a text file similar to /etc/passwd
#
# 
# Usage (see below for the setup)
# ====================================================================
#
# To create a new file to store the user information:
#   fake-ldap edituser --datafile /path/datafile --username maxpower \
#     --displayname "Max Power" --email max.power@provider.com
#
# To modify an existing user (for instance the email):
#   fake-ldap edituser --datafile /path/datafile --username ocroquette \
#     --email max.power@provider2.com
#
# To set a new password for an existing user:
#   fake-ldap edituser --datafile /path/datafile --username ocroquette \
#     --password ""
#
# To start the server:
#   fake-ldap start --datafile /path/datafile
#
# The server reads the user data file on each new connection. It's not
# scalable but it should not be a problem for the intended usage
# (small teams, testing,...)
# 
#
# Setup
# ===================================================================
#
# Install the dependencies
# 
#   Install the Perl module dependencies. On Debian and MacPorts,
#   all modules are available as packages, except Net::LDAP::Server.
#
#   Debian: apt-get install libterm-readkey-perl
#
#   Since Net::LDAP::Server consists only of one file, you can put it
#   along the script in Net/LDAP/Server.pm
#
# Create the data file with the first user (see above)
#
# Start as the script a server ("start" command, see above)
#
# Configure Gerrit with the following options:
#
#   gerrit.canonicalWebUrl = ... (workaround for a known Gerrit bug)
#   auth.type = LDAP_BIND
#   ldap.server = ldap://localhost:10389
#   ldap.accountBase = ou=People,dc=nodomain
#   ldap.groupBase = ou=Group,dc=nodomain
#
# Start Gerrit
#
# Log on in the Web interface
#
# If you want the fake LDAP server to start at boot time, add it to
# /etc/inittab, with a line like:
#
# ld1:6:respawn:su someuser /path/fake-ldap start --datafile /path/datafile
#
# ===================================================================

use strict;

# Global var containing the options passed on the command line:
my %cmdLineOptions;

# Global var containing the user data read from the data file:
my %userData;

my $defaultport = 10389;

package MyServer;

use Data::Dumper;
use Net::LDAP::Server;
use Net::LDAP::Constant qw(LDAP_SUCCESS LDAP_INVALID_CREDENTIALS LDAP_OPERATIONS_ERROR);
use IO::Socket;
use IO::Select;
use Term::ReadKey;

use Getopt::Long;
 
use base 'Net::LDAP::Server';

sub bind {
  my $self = shift;
  my ($reqData, $fullRequest) = @_;

  print "bind called\n" if $cmdLineOptions{verbose} >= 1;
  print Dumper(\@_) if $cmdLineOptions{verbose} >= 2;
  my $sha1 = undef;
  my $uid = undef;
  eval{
    $uid = $reqData->{name};
    $sha1 = main::encryptpwd($uid, $reqData->{authentication}->{simple})
  };
  if ($@) {
    warn $@;
    return({
        'matchedDN' => '',
        'errorMessage' => $@,
        'resultCode' => LDAP_OPERATIONS_ERROR
    });
  }

  print $sha1 . "\n" if $cmdLineOptions{verbose} >= 2;
  print Dumper($userData{$uid}) . "\n" if $cmdLineOptions{verbose} >= 2;

  if ( defined($sha1) && $sha1 && $userData{$uid} && ( $sha1 eq $userData{$uid}->{password} ) ) {
    print "authentication of $uid succeeded\n" if $cmdLineOptions{verbose} >= 1;
    return({
      'matchedDN' => "dn=$uid,ou=People,dc=nodomain",
      'errorMessage' => '',
      'resultCode' => LDAP_SUCCESS
    });
  }
  else {
    print "authentication of $uid failed\n" if $cmdLineOptions{verbose} >= 1;
    return({
      'matchedDN' => '',
      'errorMessage' => '',
      'resultCode' => LDAP_INVALID_CREDENTIALS
    });
  }
}

sub search {
    my $self = shift;
    my ($reqData, $fullRequest) = @_;
    print "search called\n" if $cmdLineOptions{verbose} >= 1;
    print Dumper($reqData)  if $cmdLineOptions{verbose} >= 2;
    my @entries;
    if ( $reqData->{baseObject} eq 'ou=People,dc=nodomain' ) {
        my $uid = $reqData->{filter}->{equalityMatch}->{assertionValue};
        push @entries, Net::LDAP::Entry->new ( "dn=$uid,ou=People,dc=nodomain",
            'objectName'=>"dn=$uid,ou=People,dc=nodomain", 'uid'=>$uid, 'mail'=>$userData{$uid}->{email}, 'displayName'=>$userData{$uid}->{displayName});
    }
    elsif ( $reqData->{baseObject} eq 'ou=Group,dc=nodomain' ) {
        push @entries, Net::LDAP::Entry->new ( 'dn=Users,ou=Group,dc=nodomain',
            'objectName'=>'dn=Users,ou=Group,dc=nodomain');
    }

    return {
        'matchedDN' => '',
        'errorMessage' => '',
        'resultCode' => LDAP_SUCCESS
    }, @entries;
}


package main;

use Digest::SHA1  qw(sha1 sha1_hex sha1_base64);

sub exitWithError {
  my $msg = shift;
  print STDERR $msg . "\n";
  exit(1);
}

sub encryptpwd {
  my ($uid, $passwd) = @_;
  # Use the user id to compute the hash, to avoid rainbow table attacks
  return sha1_hex($uid.$passwd);
}

my $result = Getopt::Long::GetOptions (
  "port=i"        => \$cmdLineOptions{port},
  "datafile=s"    => \$cmdLineOptions{datafile},
  "email=s"       => \$cmdLineOptions{email},
  "displayname=s" => \$cmdLineOptions{displayName},
  "username=s"    => \$cmdLineOptions{userName},
  "password=s"    => \$cmdLineOptions{password},
  "verbose=i"     => \$cmdLineOptions{verbose},
);
exitWithError("Failed to parse command line arguments") if ! $result;
exitWithError("Please provide a valid path for the datafile") if ! $cmdLineOptions{datafile};

my @commands = qw(start edituser);
if ( @ARGV != 1 || ! grep {$_ eq $ARGV[0]} @commands ) {
	exitWithError("Please provide a valid command among: " . join(",", @commands));
}

my $command = $ARGV[0];
if ( $command eq "start") {
  startServer();
}
elsif ( $command eq "edituser") {
  editUser();
}
  

sub startServer() {

  my $port = $cmdLineOptions{port} || $defaultport;

  print "starting on port $port\n" if $cmdLineOptions{verbose} >= 1;
  
  my $sock = IO::Socket::INET->new(
    Listen => 5,
    Proto => 'tcp',
    Reuse => 1,
    LocalAddr => "localhost", # Comment this line if Gerrit doesn't run on this host
    LocalPort => $port
  );
  
  my $sel = IO::Select->new($sock);
  my %Handlers;
  while (my @ready = $sel->can_read) {
    foreach my $fh (@ready) {
      if ($fh == $sock) {
        # Make sure the data is up to date on every new connection
        readUserData();
        
        # let's create a new socket
        my $psock = $sock->accept;
        $sel->add($psock);
        $Handlers{*$psock} = MyServer->new($psock);
      } else {
        my $result = $Handlers{*$fh}->handle;
        if ($result) {
          # we have finished with the socket
          $sel->remove($fh);
          $fh->close;
          delete $Handlers{*$fh};
        }
      }
    }
  }
}

sub readUserData {
  %userData = ();
  open (MYFILE, "<$cmdLineOptions{datafile}") || exitWithError("Could not open \"$cmdLineOptions{datafile}\" for reading");
  while (<MYFILE>) {
    chomp;
    my @fields = split(/:/, $_);
    $userData{$fields[0]} = { password=>$fields[1], displayName=>$fields[2], email=>$fields[3] };
  }
  close (MYFILE);
}

sub writeUserData {
  open (MYFILE, ">$cmdLineOptions{datafile}") || exitWithError("Could not open \"$cmdLineOptions{datafile}\" for writing");
  foreach my $userid (sort(keys(%userData))) {
    my $userInfo = $userData{$userid};
    print MYFILE join(":",
      $userid,
      $userInfo->{password},
      $userInfo->{displayName},
      $userInfo->{email}
      ). "\n";
  }
  close (MYFILE);
}
  
sub readPassword {
  Term::ReadKey::ReadMode('noecho');
  my $password = Term::ReadKey::ReadLine(0);
  Term::ReadKey::ReadMode('normal');
  print "\n";
  return $password;
}

sub readAndConfirmPassword {
  print "Please enter the password: ";
  my $pwd = readPassword();    
  print "Please re-enter the password: ";
  my $pwdCheck = readPassword();
  exitWithError("The passwords are different") if $pwd ne $pwdCheck;
  return $pwd;
}

sub editUser {
  exitWithError("Please provide a valid user name") if ! $cmdLineOptions{userName};
  my $userName = $cmdLineOptions{userName};

  readUserData() if -r $cmdLineOptions{datafile};

  my $encryptedPassword = undef;
  if ( ! defined($userData{$userName}) ) {
    # New user

    exitWithError("Please provide a valid display name") if ! $cmdLineOptions{displayName};
    exitWithError("Please provide a valid email") if ! $cmdLineOptions{email};

    $userData{$userName} = { };

    if ( ! defined($cmdLineOptions{password}) ) {
      # No password provided on the command line. Force reading from terminal.
      $cmdLineOptions{password} = "";
    }
  }
  
  if ( defined($cmdLineOptions{password}) && ! $cmdLineOptions{password} ) {
    $cmdLineOptions{password} = readAndConfirmPassword();
    exitWithError("Please provide a non empty password") if ! $cmdLineOptions{password};
  }

  
  if ( $cmdLineOptions{password} ) {
    $encryptedPassword = encryptpwd($userName, $cmdLineOptions{password});
  }


  $userData{$userName}->{password}    = $encryptedPassword if $encryptedPassword;
  $userData{$userName}->{displayName} = $cmdLineOptions{displayName} if $cmdLineOptions{displayName};
  $userData{$userName}->{email}       = $cmdLineOptions{email} if $cmdLineOptions{email};
  # print Data::Dumper::Dumper(\%userData);
  
  print "New user data for $cmdLineOptions{userName}:\n";
  foreach ( sort(keys(%{$userData{$userName}}))) {
    printf "  %-15s : %s\n", $_, $userData{$userName}->{$_}
  }
  writeUserData();
}
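
For a quick start, the essential commands from the inline help are:

./fake-ldap edituser --datafile /path/datafile --username maxpower \
  --displayname "Max Power" --email max.power@provider.com
./fake-ldap start --datafile /path/datafile --verbose 1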

High resolution charts based on Piwik data

Piwik is a great replacement for Google Analytics. Based on my Piwik data, I wanted to generate a chart for a presentation, showing the evolution of the number of visitors and visits over time. Piwik provides such a chart, of course, but it’s not customizable. For instance, the height is fixed and small, so it didn’t look good in the presentation.

Therefore, I hacked together a tool that generates this chart in high resolution. Of course, this should be packaged, for instance as a Piwik plugin, or even integrated into the main line, but for now it’s good enough for me. Based on the JQPlot documentation, you can tweak it to your own needs.

You just have to set “url” and “token_auth” below to fit your own Piwik installation. You will of course need JQPlot, which is however already included in Piwik.

Here is an example of the output:
[screenshot: piwik-jqplot]

<!DOCTYPE html>

<html>
<head>
	
    <title>Piwik statistics</title>

    <link class="include" rel="stylesheet" type="text/css" href="jquery.jqplot.min.css" />
  
  <!--[if lt IE 9]><script language="javascript" type="text/javascript" src="excanvas.js"></script><![endif]-->
    <script class="include" type="text/javascript" src="jquery.min.js"></script>
   
    <script class="include" type="text/javascript" src="jquery.jqplot.min.js"></script>
    <script class="include" language="javascript" type="text/javascript" src="plugins/jqplot.dateAxisRenderer.min.js"></script>
    <script type="text/javascript" src="plugins/jqplot.highlighter.min.js"></script>
    <script type="text/javascript" src="plugins/jqplot.cursor.min.js"></script>
    <script type="text/javascript" src="plugins/jqplot.dateAxisRenderer.min.js"></script>   
</head>
<body>

<div id="chart1" style="height:700px; width:100%;">Loading...</div>

<script class="code" type="text/javascript">

// Read a page's GET URL variables and return them as an associative array.
function getUrlVars()
{
    var vars = [], hash;
    var hashes = window.location.href.slice(window.location.href.indexOf('?') + 1).split('&');
    for(var i = 0; i < hashes.length; i++)
    {
        hash = hashes[i].split('=');
        vars.push(hash[0]);
        vars[hash[0]] = hash[1];
    }
    return vars;
}

$(document).ready(function(){

  var callback = function(data) {
  var jqdata = [];
  console.log(data);
  $.each(data, function(key, val) {
    var day = key.match(/\d{4}-\d{1,2}(-\d{1,2})?/)[0];
    var metrics = ["nb_uniq_visitors", "nb_visits"];
    for ( i = 0 ; i < metrics.length ; i++ ) {
      jqdata[i] = jqdata[i] || [];
      jqdata[i].push([day,val[metrics[i]]]);
    }
  });

  $('#chart1').empty();
  var plot1 = $.jqplot('chart1', jqdata, {
    title:'Statistics',
    axes:{
        xaxis:{
            renderer:$.jqplot.DateAxisRenderer,
        },
        yaxis:{
          min:0
        }
    },
    series:[
        {label:"Visiteurs uniques",
        lineWidth:4,
        markerOptions:{style:'square'}},
        {label:"Visites",
        lineWidth:4,
        markerOptions:{style:'square'}},
    ],
    highlighter: {
        show: true,
        sizeAdjust: 7.5
      },
    legend: {
        show: true,
    }
  });

};

  var url = "http://yoursite/piwik/index.php"; 
  var urlparams = {
    module:"API",
    method:"VisitsSummary.get",
    idSite:1,
    token_auth:"TBD...",
    format:"JSON"
  };

  if ( 	getUrlVars()["type"] == "week" ) {
    urlparams["period"] = "week";
    urlparams["date"] = "last70";
  }
  else {
    urlparams["period"] = "month";
    urlparams["date"] = "last26";
  }
  $.ajax({
    url: url,
    data:urlparams,
    dataType: 'json',
    success: callback
  });

});
</script>


</body>

</html>
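
Assuming you save the page as piwik-chart.html (the name is arbitrary) next to the JQPlot files, you can switch to the weekly view through the URL parameter evaluated by getUrlVars() above:

http://yourserver/piwik-chart.html?type=week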

Adding the geographical peaks to Google Earth

My current version of Google Earth doesn’t show any peaks. I describe below how I imported them from OpenStreetMap. Here is the end result:

[screenshots: google-earth-osm-peaks, google-earth-osm-peaks-2]

  • Download an OSM file for the region you are interested in, for instance from geofabrik. I will use “provence-alpes-cote-d-azur.osm.bz2” here
  • Decompress it:
    bzip2 -d provence-alpes-cote-d-azur.osm.bz2
  • Extract the peaks with Osmosis:
    osmosis -q --rx provence-alpes-cote-d-azur.osm --tf accept-nodes natural=peak --tf reject-ways --tf reject-relations --wx provence-alpes-cote-d-azur-peaks.osm
  • Convert it to KML, removing the timestamps:
    gpsbabel -i osm -f provence-alpes-cote-d-azur-peaks.osm  -x transform,wpt=trk -o kml -F - | grep -vi 'timestamp.*when' > provence-alpes-cote-d-azur-peaks.kml

Finally, open the KML file in Google Earth.

Update 2012-12-04: you can also modify the OSM file to include the elevation in the names, for a better overview in Google Earth. For that, you will need an XSLT processor; I use xsltproc here, from libxslt.

<?xml version="1.0"?>

<xsl:stylesheet  xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">       
  <xsl:output indent="yes" method="xml"/>
  
  <xsl:template match="/">
    <xsl:apply-templates />
  </xsl:template>
 
  <xsl:template match="@*|*|processing-instruction()|comment()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>

  <xsl:template match="tag[@k='name']">
    <xsl:element name="tag">
      <xsl:attribute name="k">name</xsl:attribute>
      <xsl:attribute name="v">
        <xsl:value-of select="@v"/>
        <xsl:text> (</xsl:text>
        <xsl:value-of select="../tag[@k='ele']/@v"/>
        <xsl:text>)</xsl:text>
      </xsl:attribute>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>

Before you call gpsbabel, use this XSLT sheet to process the OSM file:

xsltproc peaks.xslt provence-alpes-cote-d-azur-peaks.osm   > provence-alpes-cote-d-azur-peaks-processed.osm

It’s of course possible to optimize this process. If you do, it would be nice to post your improvements here.

Post to phpbb using WWW::Mechanize from CPAN

Unfortunately, there is no module on CPAN that supports posting to a phpBB forum, so I did it with WWW::Mechanize. While it looked like a simple task, I had a very frustrating time getting it to work. The reason is that phpbb silently refuses to post if the script does so too fast. So the key to getting it working is a sleep statement, as shown in the following sample code.

  my $mech = WWW::Mechanize->new();
  push @{ $mech->requests_redirectable }, 'POST';
  $mech->cookie_jar({});
  my $response;
  $response = $mech->get($bbUrl."/ucp.php?mode=login");
  $response = $mech->submit_form(with_fields=>{username=>$bbUser, password=>$bbPassword}, button=>"login");
  $response = $mech->get($bbUrl."/posting.php?mode=post&f=".$params{forumid});
  sleep(3); # workaround for phpbb's silly "antispam feature"
  $response = $mech->submit_form(with_fields=>{subject =>$params{subject}, message =>$params{message}}, button=>"post", form_name=>"postform");

The corresponding code in phpbb is the following, in posting.php:

// Was cancel pressed? If so then redirect to the appropriate page
if ($cancel || ($current_time - $lastclick < 2 && $submit))
{
...

Hard-coded values, non-self-explanatory code, lack of appropriate comments… The more I use phpbb, the more disappointed I become!

Installing shellinabox on Debian stable (squeeze)

Shellinabox is a great tool that lets you open a terminal session (i.e. a shell) on your server over HTTP(S). It’s particularly useful when you are on a network that doesn’t allow outgoing SSH connections, like many hotspots or corporate networks.

Unfortunately, shellinabox is as of today only in sid (unstable). Here are the steps to install it on squeeze anyway.

Add a sid “deb-src” source to your /etc/apt/sources.list, for instance:

deb-src http://ftp.uni-bayreuth.de/linux/Debian/debian/ sid main non-free contrib

Then prepare the binary package from the source:

apt-get update
apt-get build-dep shellinabox
apt-get -b source shellinabox

Finally, you can install it:

dpkg -i shellinabox*.deb

Obviously the same procedure can be used to install other packages from unstable.

If you have an Apache instance running, you can use it as a front-end to shellinabox (i.e. as a reverse proxy). The advantages are:

  • no need to open a new port in your firewall.
  • you can add a layer of security by requiring an HTTP password to access shellinabox. This way, potential security flaws in shellinabox are not directly exploitable.

To do this, add the following lines to your Apache configuration, for instance in /etc/apache2/sites-available/shellinabox:

<Location /shellinabox>
ProxyPass http://localhost:4200/
Order allow,deny
Allow from all

AuthUserFile /path/to/.htpasswd
AuthName "Authentication for shellinabox"
AuthType Basic
require user someuser
</Location>
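
The password file referenced above can be created with the htpasswd tool from the apache2-utils package:

sudo htpasswd -c /path/to/.htpasswd someuser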

Finally enable the proxy module and the site:

sudo a2enmod proxy_http
sudo a2ensite shellinabox
sudo /etc/init.d/apache2 restart

kernel_task CPU usage

I recently had a problem with the battery of my MacBook Pro (MacBookPro8,2), so I decided to remove it (full story here). From that point on, every now and then, the kernel_task process started to use 200-300% of the CPU (according to the CPU monitor), and the whole system became really slow. The only solution at that point was to restart the machine, but the problem appeared again after 1-20 minutes. I put the battery back in, and that fixed the problem for good.

My guess is that the battery is important for the air flow within the machine. If it’s not present, some parts get too hot and the system artificially slows itself down. That’s only a guess, but it might help if you face the same symptoms.

Why I may well switch away from the Mac

Let me share with you an incredible story that just happened to me:

Monday

The battery of my MacBook Pro (early 2011) is completely dead after only 16 months and 130 load cycles. The 12-month warranty is of course expired.

With Dell or HP, I would just buy a new battery online for 80EUR, get it within 2-3 days and install it myself. Pretty simple.

On my Unibody Mac, the battery is internal (but easy to access with the right screwdrivers). Apple decided to put batteries inside the computer and thinks all its customers are too dumb to change them, so the official way is to let Apple or an official reseller change it, for the hefty price of 130 Euros.

Tuesday

I am angry at Apple, so I look for alternatives. eBay only has used items, or items shipping from Hong Kong, the USA or China. It doesn’t look like a good plan.

I call Gravis, a big Apple reseller in Germany with a shop close to me. They tell me I have to come to the shop with my laptop.

So I go to the shop. I wait 1h in the queue. The Gravis employee then runs his diagnosis tools on my computer and confirms that the battery is dead. They can change it, but they will need 5-7 days, during which they will keep the computer. SAY WHAT? 5-7 days to change a laptop battery?

Back home, I take my screwdrivers and open my laptop. It takes me 10 minutes to take the battery out. If I could buy a new one on my own, this story would be over at this point.

Wednesday

I call the Apple hotline to get an appointment in an Apple Store, since their online scheduling tool doesn’t work. The employee tells me that there is no appointment available until the next Wednesday, and that there is no other way. SAY WHAT?

10am: I go to the Apple Store directly and ask if they could do it quickly. The first answer is no, I have to make an appointment, 1 week, blabla. This is the point where I became really angry and loud: “My battery clearly has a manufacturing defect, and it’s not covered by the warranty. Now I have to wait 1 week for an operation that takes 10 minutes, and pay 130EUR for it? If that’s really the case, I’ll sell my Mac and buy an HP or Dell”. I meant it. Fortunately, the employee gave me an appointment at 6pm.

6:45pm: I walk out of the Apple Store with a new battery and 130EUR less in my pocket.

Bottom line

I really don’t like the general direction Apple is heading in (Gatekeeper, App Store, planned obsolescence of computers and devices, closed and proprietary systems, patent lawsuits, culture of secrecy, and so on), but since MacOS X has been until now the best system for me, I was closing my eyes. However, this battery story went way too far. My next machine will most probably not be a Mac, but a PC with Linux. I know I will have to make compromises, but I am now ready to pay the price.

Maven’s pom.xml is not included when using assembly plugin with ref “jar-with-dependencies”

Maven has a bug that causes the pom.xml of the current artifact to be ignored when using assembly / jar-with-dependencies. There is even a bug report from 2008 with priority “Major” (!).

As a workaround, create an XML file “assembly.xml” with the following content:

<assembly xmlns="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0"
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/plugins/maven-assembly-plugin/assembly/1.1.0 http://maven.apache.org/xsd/assembly-1.1.0.xsd">
  <!-- TODO: a jarjar format would be better -->
  <id>jar-with-dependencies</id>
  <formats>
    <format>jar</format>
  </formats>
  <includeBaseDirectory>false</includeBaseDirectory>
  <dependencySets>
    <dependencySet>
      <outputDirectory>/</outputDirectory>
      <useProjectArtifact>true</useProjectArtifact>
      <unpack>true</unpack>
      <scope>runtime</scope>
    </dependencySet>
  </dependencySets>
</assembly>

In your main pom.xml, replace:

<descriptorRefs>
<descriptorRef>jar-with-dependencies</descriptorRef>
</descriptorRefs>

with:

<descriptors>
<descriptor>assembly.xml</descriptor>
</descriptors>
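
Then trigger the assembly as usual, for instance:

mvn clean package assembly:single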

It shouldn’t make any difference, since it’s the same configuration as the one reported here, but it works for me!

Java: Disabling SSL checks for a given HttpClient

If you know what you are doing, you can disable the SSL trust and host name checks when using the HttpClient in Java. It goes like this:


// Imports needed (HttpClient 4.x):
//   javax.net.ssl.SSLContext, javax.net.ssl.TrustManager, javax.net.ssl.X509TrustManager
//   org.apache.http.conn.scheme.Scheme, org.apache.http.conn.ssl.SSLSocketFactory

// A trust manager that accepts any certificate:
TrustManager[] trustAllCerts = new TrustManager[]{
    new X509TrustManager() {
        public java.security.cert.X509Certificate[] getAcceptedIssuers() {
            return null;
        }
        public void checkClientTrusted(
                java.security.cert.X509Certificate[] certs, String authType) {
        }
        public void checkServerTrusted(
                java.security.cert.X509Certificate[] certs, String authType) {
        }
    }
};

// getInstance() and init() throw checked exceptions (NoSuchAlgorithmException,
// KeyManagementException), so wrap this in a try/catch as appropriate:
SSLContext sslContext;
sslContext = SSLContext.getInstance("SSL");
sslContext.init(null, trustAllCerts, new java.security.SecureRandom());

// Register a scheme using the permissive socket factory; "httpclient" is
// your existing org.apache.http.client.HttpClient instance:
SSLSocketFactory sslSocketFactory = new SSLSocketFactory(sslContext, SSLSocketFactory.ALLOW_ALL_HOSTNAME_VERIFIER);
Scheme scheme = new Scheme("https", sslSocketFactory, 443);
httpclient.getConnectionManager().getSchemeRegistry().register(scheme);

Location of the log4j configuration file in a program embedding Jetty

I have a server with an embedded Jetty, for which I am now setting up the logging. Unfortunately, I couldn’t find a location for log4j.properties or log4j.xml where the log4j runtime would find them by default, resulting in the following error message:

log4j:WARN No appenders could be found for logger (org.eclipse.jetty.util.log).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Unfortunately, the link doesn’t help (at the time of writing). However, it’s possible to specify the location of the configuration file explicitly.

Either in your Java code, before you call any log4j methods:

System.setProperty("log4j.configuration","file:///path/to/log4j.properties");

or on the command line:

java "-Dlog4j.configuration=file://path/to/log4j.properties" -jar ...

You can also debug where log4j looks with the following option on the command line:

java -Dlog4j.debug -jar ...

Which gives in my case:

log4j: Trying to find [log4j.xml] using context classloader sun.misc.Launcher$AppClassLoader@20cf2c80.
log4j: Trying to find [log4j.xml] using sun.misc.Launcher$AppClassLoader@20cf2c80 class loader.
log4j: Trying to find [log4j.xml] using ClassLoader.getSystemResource().
log4j: Trying to find [log4j.properties] using context classloader sun.misc.Launcher$AppClassLoader@20cf2c80.
log4j: Trying to find [log4j.properties] using sun.misc.Launcher$AppClassLoader@20cf2c80 class loader.
log4j: Trying to find [log4j.properties] using ClassLoader.getSystemResource().
log4j: Could not find resource: [null].

Switching from Expression Media to Lightroom

Expression Media is a great tool. However, its future doesn’t look very bright: since it was bought by Microsoft, and recently by Phase One, no real development has occurred, while Lightroom and Aperture appeared and took off, offering a much more modern user experience, including raw development, which is what I definitely need. I therefore decided to switch to Lightroom, and the challenge was to port the workflow and database that I have been defining, refining and maintaining for many years now.

My approach is the following:

  • Some fields can be taken over 1:1 (like the IPTC instructions)
  • the non-standard “People” field becomes hierarchical keywords, like “People >> Smith John”
  • “Event” becomes hierarchical keywords too, like “Event >> 2011 >> 12 >> 2011-12-25 Christmas”
  • EM keywords go into a Lightroom keyword sub-tree (“EMKeywords >> Flower”), with the intention of cleaning them up later (I didn’t use hierarchical keywords in EM)

I don’t use the Lightroom collections at all during the transfer, but I will definitely use them later in my workflow.

 

--[[----------------------------------------------------------------------------

Expression Media Importer v1.0
Olivier Croquette
ocroquette@free.fr

This Lightroom plugin will import your Expression Media data into Lightroom.

WARNING: YOU NEED TO ADAPT this plugin to your own needs!
It’s not a final product, just a basis for your own work.

Please drop me an email if this plugin was useful to you.

----------------------------------------------------------------------------------

I tested this plugin with iView Media Pro 2.0.2 on a Mac, but it should work for
other versions and also for iView, Phase One Media Pro, even under Windows.
I used Lightroom 4, but I believe it should work fine with Lightroom 3.

This plugin also assumes that your filenames are unique. If it’s not the case,
you will probably have to adapt it to use the full path as key for mediaitems
instead.

Lightroom is very slow at this kind of bulk change. On my system, it is only
able to update 40 photos / second.
See http://forums.adobe.com/message/4427342

Instructions:

o in Expression Media, export your catalog(s) as XML files
File >> Export to XML

o Adapt mapEm2Lr for the simple fields you want to take over 1:1

o Adapt the code for the handling of complex fields, like People, Keywords…
I transfer them as keywords in Lightroom

o Adapt the code for the handling of the location
"AnnotationFields:Country", "AnnotationFields:City", "AnnotationFields:Location"
I transfer them as keywords in Lightroom

o Select either a few photos, or select none to process all, and click on
File >> Plug-in Extras >> Expression Media Importer

Make backups ! Try on test data first !

--------------------------------------------------------------------------------]]

require "xmlparser"

--[[
This is the mapping of simple Expression Media fields to Lightroom
I only defined the ones I need.
The EM field name is the one found in the XML file, with ":" used
as a separator between field category and field name.
For instance:

<Rating>0</Rating>                  (inside <AssetProperties>)

<Width>800</Width>                  (inside <MediaProperties>)

<EventDate>1999:09:26</EventDate>   (inside <AnnotationFields>)

Are:
AssetProperties:Rating
MediaProperties:Width
AnnotationFields:EventDate

See here for the LR field names:
http://www.robcole.com/Lightroom/SDK%203.0/API%20Reference/modules/LrPhoto.html#photo:setRawMetadata

Note that some of the EM fields can have multiple values (like People or Keyword),
and they deserve a special hard-coded handling in the code below.
--]]

local mapEm2Lr = {
["AnnotationFields:Source"] = "jobIdentifier",
["AnnotationFields:Instructions"] = "instructions",
["AnnotationFields:Author"] = "creator",
["AnnotationFields:Status"] = "title",
["AnnotationFields:EventDate"] = "dateCreated",
}

–[[
Set here the path to your XML file
]]
local xmlPath = "/path/to/file.xml"

--------------------------------------------------------------------------------
-- You should not have to change these values
--------------------------------------------------------------------------------

local keywordCache = { }
local keywordIdSeparator = "::"
local progressTotalSteps = 4
local progressCurrentStep = 0
local maxNumberOfXmlTags = nil -- Useful for testing

local LrDialogs = import 'LrDialogs'
local LrLogger = import 'LrLogger'
local LrApplication = import 'LrApplication'
local LrTasks = import 'LrTasks'
local LrProgressScope = import 'LrProgressScope'

local progress

-- Create the logger and enable the print function.
local myLogger = LrLogger( 'exportLogger' )
myLogger:enable( "print" ) -- Pass either a string or a table of actions.

--------------------------------------------------------------------------------
-- Write trace information to the logger.

local function outputToLog( message )
myLogger:trace( message )
end

-- Split a string based on the given separator
function split(str,sep,n)
local sep, fields = sep or ":", {}
local pattern = string.format("([^%s]+)", sep)
str:gsub(pattern, function(c) fields[#fields+1] = c end,n)
return fields
end

-- Revert an array
function revert(t)
local t2 = { }
for i,v in ipairs(t) do table.insert(t2, 1, v) end
return t2
end

-- Build the media item object from the corresponding XML node
function getMediaItem(xmlnode)
mediaitem = { }

for i,subXmlNode in pairs(xmlnode.ChildNodes) do
local category = subXmlNode.Name -- AssetProperties, MediaProperties or AnnotationFields
-- outputToLog( "Category="..category)
for i, subXmlNode in pairs(subXmlNode.ChildNodes) do
local key = category .. ":" .. subXmlNode.Name
-- outputToLog(" " .. key)

if key == "AnnotationFields:People" or key == "AnnotationFields:Keyword" then
— Array data
if mediaitem[key] == nil then mediaitem[key] = { } end
table.insert(mediaitem[key], subXmlNode.Value)
else
mediaitem[key] = subXmlNode.Value
end
end
end
-- outputToLog("getMediaItem done")
return mediaitem
end

-- Returns a callback used by the XML parser to cancel processing on user request
function getContinueCallback(progress)
local lastTime = os.time()
local continue = true
return function (numberOfTags)
-- outputToLog(lastTime .. " " .. os.time())
if (os.time() - lastTime) >= 1 then
LrTasks.yield()
continue = not progress:isCanceled()
if numberOfTags ~= nil then
progress:setCaption("Parsing the Expression Media XML file… "..numberOfTags.." tags")
end
end
lastTime = os.time()
return continue
end
end

function incrementProgress(progress, caption)
progress:setCaption(caption)
progressCurrentStep = progressCurrentStep + 1
progress:setPortionComplete(progressCurrentStep, progressTotalSteps)
LrTasks.yield()
end

-- Parse the Expression Media XML file
function parseXml()
-- outputToLog( " Parsing…" )
local ccb = getContinueCallback(progress)
if not ccb() then return nil end
incrementProgress(progress, "Parsing the Expression Media XML file…")
local xmlTree = XmlParser:ParseXmlFile(xmlPath, ccb, maxNumberOfXmlTags)
if not ccb() then return nil end
local mediaitems = { }
incrementProgress(progress, "Processing the XML content…")
for i,xmlNode2 in pairs(xmlTree.ChildNodes) do
if(xmlNode2.Name=="MediaItemList") then
for i,xmlNode3 in pairs(xmlNode2.ChildNodes) do
if(xmlNode3.Name=="MediaItem") then
mediaitem = getMediaItem(xmlNode3)
-- Store information by media Filename.
mediaitems[mediaitem["AssetProperties:Filename"]] = mediaitem
if not ccb() then return nil end
end
end
end
end
return mediaitems
end

-- A simple wrapper that ignores nil values
function setMetadata(photo, key, value)
if not value then return end
-- outputToLog(" photo:setRawMetadata " .. key .. " " .. value)
photo:setRawMetadata(key, value)
end

-- Find an existing LR keyword or create it. Returns the corresponding LrKeyword object
-- Hierarchical keywords are encoded into the KeywordId string using keywordIdSeparator
-- as a separator
-- This function caches the keywords, because Lightroom is very slow at this game.
function findOrCreateKeyword(catalog, keywordId)

-- outputToLog( " findOrCreateKeyword ("..keywordId..") ; number of items in the cache: ".. # keywordCache )

if keywordCache[keywordId] ~= nil then return keywordCache[keywordId] end

-- outputToLog( " findOrCreateKeyword: Item not in cache: "..tostring(keywordId) )

local idElements = split(keywordId, keywordIdSeparator)
local parentKeyword
local keywordName = table.remove(idElements, # idElements)
if # idElements > 0 then
local parentId = table.concat(idElements, keywordIdSeparator)
-- outputToLog( " parentId=" .. parentId )
parentKeyword = findOrCreateKeyword(catalog, parentId)
else
parentKeyword = nil
end

local childrenOfParent
if parentKeyword == nil then
childrenOfParent = catalog:getKeywords()
elseif parentKeyword._justcreated then
childrenOfParent = { }
else
childrenOfParent = parentKeyword:getChildren()
end

-- Find out if this keyword already exists in Lightroom
for _, child in ipairs(childrenOfParent) do
if ( child:getName() == keywordName ) then
keywordCache[keywordId] = child
return child
end
end

local keyword = catalog:createKeyword(keywordName , { }, true, parentKeyword )
keyword._justcreated = true
keywordCache[keywordId] = keyword
-- outputToLog( " findOrCreateKeyword end number of items in the cache: ".. # keywordCache )
return keyword
end

-- Call this function every time you close a transaction, because
-- the cached LrKeyword's become invalid then.
function initializeKeywordCache(catalog, keywordList, idElements)
keywordCache = { }
return
end

function processChunk(catalog, photos, mediaitems)

initializeKeywordCache(catalog)

for _, photo in ipairs( photos ) do
if progress:isCanceled() then return end

local path = photo:getRawMetadata("path")
local filename = string.gsub(path, ".*/", "")
-- outputToLog(" filename:" .. filename)

if mediaitems[filename] == nil then
outputToLog(" WARNING No information available in the XML file about " .. filename)
else
if true then
-- Test and adapt to your own needs!
local locationFields = {"AnnotationFields:Country", "AnnotationFields:City", "AnnotationFields:Location"}
local locationKeywordItems = { }
for _,field in ipairs(locationFields) do
if mediaitems[filename][field] ~= nil and mediaitems[filename][field] ~= "" then
table.insert(locationKeywordItems, mediaitems[filename][field])
end
end
if # locationKeywordItems > 0 then
table.insert(locationKeywordItems, 1, "Location")
local keywordId = table.concat(locationKeywordItems, keywordIdSeparator)
local keyword = findOrCreateKeyword(catalog, keywordId)
-- outputToLog(" result: ".. tostring(keyword))
photo:addKeyword(keyword)
end
end

for k,v in pairs(mediaitems[filename]) do
local k2 = mapEm2Lr[k]
if k == "AnnotationFields:People" then
for _,peopleName in pairs(v) do
-- Test and adapt to your own needs!
local components = split(peopleName, " ")
local newName = table.concat(revert(components), " ")
photo:addKeyword(findOrCreateKeyword(catalog, "People" .. keywordIdSeparator .. newName))
end
elseif k == "AnnotationFields:Keyword" then
-- Test and adapt to your own needs!
for _, emKeyword in pairs(v) do
photo:addKeyword(findOrCreateKeyword(catalog, "EmKeywords" .. keywordIdSeparator .. emKeyword))
end
elseif k == "AnnotationFields:Fixture" then
-- You probably don't want that code
local components = split(string.gsub(v, " ", "-", 1), "-")
while # components > 2 do table.remove(components) end -- Keep only year and month
table.insert(components, 1, "Events")
table.insert(components, v)
photo:addKeyword(findOrCreateKeyword(catalog, table.concat(components, keywordIdSeparator) ))
elseif k == "AnnotationFields:Source" then
-- You probably don't want that code
local findr = { string.find(v, '(%a+)(%d%d)(%d%d)') }
local year = findr[5]
if year == nil then
outputToLog(" WARNING unable to parse source for " .. filename .. " : " .. tostring(v))
else
if tonumber(year) then
-- NOTE: the original listing is truncated at this point, most likely because
-- an unescaped "<" swallowed everything up to the "while" loop below (the
-- rest of the "Source" handling, the end of processChunk(), and the
-- beginning of processTargetPhotos()). The following lines are a minimal,
-- clearly-hypothetical reconstruction of that structure: processTargetPhotos()
-- evidently sets up catalog, mediaitems, photos, total, completed and
-- startTime before entering the loop.
end
end
end
end
end
end
end

function processTargetPhotos(progress)
local catalog = LrApplication.activeCatalog()
local mediaitems = parseXml()
if mediaitems == nil then return end

local photos = catalog:getTargetPhotos()
local total = # photos
local completed = 0
local startTime = os.time()

incrementProgress(progress, "Updating photos…")
while # photos > 0 do
local currentTime = os.time()
local caption = "Updating photos (" .. completed .. "/" .. total .. ")"
if completed > 0 and startTime ~= currentTime then
caption = caption .. " " .. math.floor(completed/(currentTime - startTime)) .. " /s"
end
progress:setCaption(caption)
LrTasks.yield()

if progress:isCanceled() then return end
local batch = { }
-- We create small chunks of photos to process because otherwise Lightroom becomes
-- slow and unresponsive
while # photos > 0 and # batch < 1000 do
-- outputToLog(" in subloop " .. # batch .. " element(s) " .. # photos .. " left")
local photo = table.remove(photos)
table.insert(batch, photo)
end

catalog:withWriteAccessDo( "Updating the LR database", function(context)
processChunk(catalog, batch, mediaitems)
end)

local endChunkTime = 0

progress:setPortionComplete(progressCurrentStep+completed*1.0/total, progressTotalSteps)
completed = completed + # batch
end

end

function startWithProgressTracking()
progress = LrProgressScope({ title = "Import from Expression Media" })

processTargetPhotos(progress)

if not progress:isDone() then progress:done() end

LrDialogs.message('Done !')

end

import 'LrTasks'.startAsyncTask( startWithProgressTracking )

Change keyboard layout on a Mac using a hotkey

Windows supports keyboard shortcuts to set a specific keyboard layout or to cycle through them. Not so with MacOS X :(

With Spark and the following AppleScript, I can at least set a layout. Cycling through them is a bit more difficult.


changeInputLanguage("U.S.") -- change "U.S." to your own needs

on changeInputLanguage(L)
tell application "System Events" to tell process "SystemUIServer"
tell (1st menu bar item of menu bar 1 whose value of attribute "AXDescription" is "text input")
return {its value, click, click menu 1's menu item L}
end tell
end tell
end changeInputLanguage

The code comes from allancraig.net

Get list of “smtp” email addresses from Outlook messages

I recently had to export the email addresses contained in the BCC: field of an Outlook message. It turned out not to be that easy. I achieved it with the following bit of Visual Basic, which lets you pick a folder and dumps all recipients of all messages in this folder. I wanted to do it for a specific message and copy the result to the clipboard, but neither of these looks easy in Outlook. I already spent too much time on this, so I give up; this version is good enough for me.

Sub ExtractRecipientsFromEmail()
    Dim OlApp As Outlook.Application
    Dim MailObject As Object
    Dim RecipientObject As Object
    Dim Email As String
    Dim NS As NameSpace
    Dim Folder As MAPIFolder
    Set OlApp = CreateObject("Outlook.Application")
    Set NS = ThisOutlookSession.Session
    Set Folder = NS.PickFolder
    For Each MailObject In Folder.Items
        If MailObject.Class = olMail Then
            For Each RecipientObject In MailObject.Recipients

                Dim smtp As String
                ' Debug.Print "ad=", RecipientObject.Address

                Select Case RecipientObject.AddressEntry.AddressEntryUserType
                    Case OlAddressEntryUserType.olExchangeUserAddressEntry
                        Set oEU = RecipientObject.AddressEntry.GetExchangeUser
                        If Not (oEU Is Nothing) Then
                            smtp = oEU.PrimarySmtpAddress
                        End If
                    Case OlAddressEntryUserType.olExchangeDistributionListAddressEntry
                        Set oEDL = RecipientObject.AddressEntry.GetExchangeDistributionList
                        If Not (oEDL Is Nothing) Then
                            smtp = oEDL.PrimarySmtpAddress
                        End If
                End Select

                Debug.Print smtp

            Next
        End If
    Next
    Set OlApp = Nothing
    Set MailObject = Nothing
    Set RecipientObject = Nothing
End Sub

Installing debian on a Popcorn Hour A-210

Instructions based on: http://www.networkedmediatank.com/showthread.php?tid=16317

On an existing Debian or Ubuntu system:

apt-get install debootstrap
debootstrap --arch mipsel --foreign stable debian
tar -cvzf debian.tgz debian
mount --bind /proc/ debian/proc/
mount --bind /dev/ debian/dev/

Copy the TGZ to your device, log in with telnet on your NMT, and run:

tar -xvzf debian.tgz
cd debian
usr/sbin/chroot . /bin/bash
export PATH=$PATH:/usr/bin:/usr/sbin
debootstrap/debootstrap --second-stage
# Verify the DNS settings, in case you used different network configurations on the computer and the Popcorn Hour:
cat /etc/resolv.conf
exit

Now you should have a Debian system running. Configure the package source:

usr/sbin/chroot . /bin/bash
echo "deb http://ftp.de.debian.org/debian/ squeeze main " >> /etc/apt/sources.list
echo "deb http://security.debian.org/ squeeze/updates main " >> /etc/apt/sources.list

Note: check out the following generator for sources.list : http://debgen.simplylinux.ch/generate.php

Update the package list and install the required packages:

apt-get update
# Standard stuff:
apt-get install vim less sysklogd sudo lsof nmap wget curl psmisc
# Development:
apt-get install gcc autoconf automake subversion git uuid-dev uuid-runtime make

Create a user (working as root constantly is dangerous):

adduser user

Install SSH:

apt-get install openssh-server openssh-client
/etc/init.d/ssh start

Reencode videos to watch them on an Android device with ffmpeg

Here is the script I use to convert videos so I can watch them on a Samsung Galaxy Ace. You can probably use it for any Android device; just modify the maximum resolution.


#!/usr/bin/perl

use strict;
use Data::Dumper;
use Getopt::Long;

sub getFileResolution {
    my ($file) = @_;
    my $stdout = `ffmpeg -i "$file" 2>&1`;
    # Stream #0.0: Video: mpeg4, yuv420p, 720x528 [PAR 1:1 DAR 15:11], 25 tbr, 25 tbn, 25 tbc
    $stdout =~ /Stream.*Video:.*?([\d]+)x([\d]+)/;
    my ($w, $h) = ($1, $2);
    die "Failed to get original resolution of \"$file\" ($w,$h)" if ! ( $w && $h);
    return ($w, $h);
}

sub getNewResolution {
    my ($width, $height) = my ($newwidth, $newheight) = @_;
    my $ratio = $width * 1.0 / $height;
    my $maxwidth = 480;
    my $maxheight = 320;
    if ( $newheight > $maxheight ) {
        $newheight = $maxheight;
        $newwidth = $ratio * $newheight;
    }
    if ( $newwidth > $maxwidth ) {
        $newwidth = $maxwidth;
        $newheight = $newwidth / $ratio;
    }
    return (int($newwidth), int($newheight));
}

sub getNewName {
    my ($file) = @_;
    $file =~ /(.*)\./;
    my $basename = $1;

    my $newname = $basename . ".mp4";
    return $newname if $newname ne $file;

    return $basename . "-converted.mp4";
}

sub printUsage {
    print STDERR "Usage: $0 --input INVIDEO [--output OUTVIDEO] [--volume N]\n";
    print STDERR "  Converts the input video into a MP4 video readable on mobile devices\n";
    print STDERR "\n";
    print STDERR "  --input    : the input video file\n";
    print STDERR "  --output   : the output video file\n";
    print STDERR "  --volume N : increase or decrease volume ; nominal volume is N=256\n";
    print STDERR "\n";
    print STDERR "Example:\n";
    print STDERR "  $0 --input file.avi --output file.mp4 --vol 512\n";
}

my ($src, $dst, $volume);
my $gocode = GetOptions (
    "input=s"  => \$src,
    "output=s" => \$dst,
    "volume=i" => \$volume
);

if ( ! $gocode || scalar(@ARGV) ) {
    printUsage();
    exit(1);
}

my ($w, $h) = getNewResolution(getFileResolution($src));
$dst = getNewName($src) if ! $dst;

my @args = ("ffmpeg");
push @args, "-i", $src;
push @args, "-s", "${w}x${h}";
push @args, "-b", "600k";
push @args, "-ab", "96k";
# Apply the volume adjustment if requested ("-vol" is an option of the ffmpeg
# versions of that time; 256 is the nominal volume). Note: the original script
# parsed --volume but never used it, so this line is an assumed fix.
push @args, "-vol", $volume if defined $volume;
push @args, $dst;
system(@args);

git: failed to lock

Recently I got the error message “failed to lock” while trying to push some changes to a remote Git repository.

After some fruitless googling and then troubleshooting, I realised I was trying to push to a branch called “a/b” while a branch “a” already existed. Git cannot support this: a branch “a” requires “.git/refs/heads/a” to be a file (containing the current branch head), while a branch “a/b” requires “.git/refs/heads/a” to be a directory.
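
A minimal way to reproduce the conflict (the repository path and branch names are just placeholders):

git init /tmp/demo && cd /tmp/demo
git commit --allow-empty -m "init"
git branch a      # creates the file .git/refs/heads/a
git branch a/b    # refused: "a" would have to be a directory at the same time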

rsync under Windows, permission problem

When calling rsync from Windows (e.g. with Cygwin) to sync to Unix, I had the bad surprise that all directories created on the Linux side had permission 000 (that is, no permissions at all). After rsync created the top-level directory, it would pitifully fail to create anything underneath it (mkdir: permission denied).

The solution is to call it with the chmod option :

$ rsync -rtxv --chmod=ugo=rwX -e ssh localdir/ user@host:/remotedir
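
The capital X is what makes this safe: unlike x, it grants execute only on directories (and on files that are already executable), so directories become traversable without marking every regular file executable. The same spec works with plain chmod:

$ chmod -R ugo=rwX localdir/   # dirs end up rwxrwxrwx, non-executable files rw-rw-rw-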

Expression Media crashing when opening a catalog

If Expression Media is crashing when you open a specific catalog, you can try the hints described in this thread :

http://www.eggheadcafe.com/software/aspnet/31633965/catalog-corrupt-crashes-mem.aspx

For the record, they are :

  1. Self repair : hold down the Alt key while clicking Open in the open dialog (didn’t work at all for me)
  2. Create a new catalog and import from the corrupted one (didn’t work for me : the progress bar showed my media files being imported, but at the end the new catalog was empty)
  3. Open the catalog on another computer, if possible under a different OS (I could open the catalog with EM2 under Windows vs. MacOS X; unfortunately, saving a copy and opening it under MacOS X again still led to a crash)

In my case, I had to combine 3. and then 2. to get back to a working catalog under MacOS X. That’s a relief, because this catalog contains a hell of a lot of manually entered data, probably more than 500 hours of work.

Random delays while accessing a web page

Some parts of a web server I administer were showing poor perceived performance. Using Firebug, I narrowed the problem down : there were random delays in the delivery of the pages. Sometimes they hit PHP scripts, sometimes CSS or Javascript files. Since CSS and JS files are static, that ruled out a problem with PHP or the database. The load on the server was fine, so it wasn’t that either.
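
A quick way to see such delays without Firebug is to time repeated requests for a single static file; random outliers then clearly point at the delivery rather than at the application (the URL is a placeholder):

for i in $(seq 1 20); do
  curl -s -o /dev/null -w '%{time_total}\n' http://example.com/style.css
done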

I finally found out that disabling the Keep-Alive feature on the server side made the problem disappear. I still have to investigate what the exact cause is (Keep-Alive is part of the HTTP standard and widely used). It could be a Content-Length that the server reports incorrectly, causing the browser to wait for more data.
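
For reference, a sketch of one way to switch it off, assuming Apache with a Debian-style layout (the post doesn’t name the web server, so the directive location and the reload command are assumptions):

sudo sed -i 's/^KeepAlive On/KeepAlive Off/' /etc/apache2/apache2.conf
sudo apache2ctl graceful   # reload the configuration without dropping connections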

Sony Alpha 100 vs Sony Alpha 550 (noise)

One huge weak point of the Sony Alpha 100 is the noise. I was expecting the 550 and its CMOS sensor to provide a major improvement to this regard, and was not disappointed.

Here is a quick illustration. It’s a typical low-light situation. Even with an aperture of 2.8, if I want to keep a reasonable exposure time of 1/30, I need ISO 1600. Here is how the two cameras handle the challenge. The pictures speak for themselves.

Alpha 100 (notice the horrible noise in the background in the top left corner) :

Alpha 550 :

Alpha 100 :

Alpha 550 :

Personal review of the Garmin Oregon 300

After some time using the Etrex Vista Hcx, I switched a few weeks ago to the Oregon 300.

Here are the pros and cons, including comparisons with the Etrex Vista Hcx.

Pros (common to the Vista Hcx) :

  • Fast boot and GPS fix (typically 15 sec – 1 min)
  • Good dimensions for a hand held device
  • Can be used for turn-by-turn navigation (ie. car) if you have the right map
  • Uses normal AA batteries
  • USB connection and USB mass storage mode (to send the maps or get the logs)
  • Power over USB : very useful in the car, but no battery charging over USB
  • Good battery performance (8h or more)
  • Can save logs (ie. tracks) to the memory card
  • Customizable, ie. :
    • on all pages (map, stats, …) the displayed fields can be defined very precisely (ie. average speed, max speed, ETA…)

Pros compared to the Vista Hcx :

  • Thanks to the touch screen, the controls are much more intuitive. The menu structure is very clean (but also requires more steps to use; see cons)
  • Scrolling the map is faster and more intuitive
  • It’s possible to have several map files on the card, e.g. if you temporarily need a new map area, just add the corresponding file to the memory card (no need to regenerate one huge file). When you don’t need it anymore, just delete it
  • Clips directly on the bike mount. No need for a little piece of plastic and metal like on the Vista Hcx.
  • Different profiles possible, eg. bike, car, walk… (not tested yet)
  • Better screen resolution
  • OpenStreetMap maps look great (see the instructions here)

Cons (common to the Vista Hcx) :

  • Detailed maps are very expensive, especially for France
  • Batteries can not be charged over USB
  • The altitude displayed and recorded in the tracks is unusable in a (pressurized) airplane cabin. The device relies far too much on the barometric altitude, and even the auto-calibration doesn’t help. The algorithm should be changed so that the GPS altitude takes over whenever there is a reasonable fix and the difference from the barometric altitude exceeds e.g. 100 m
  • Garmin tools do not support MacOS X
  • For the frequency of track points stored in the log, there are 5 settings, but even “Most often” is quite coarse, so I have to force it to “Every second”; but then the track becomes huge very fast. There is no good compromise.
  • Does not record HDOP, VDOP or similar dilution-of-precision information in the track logs
  • Very limited memory for the current track log, even if you have a big SD card
  • The device has to be held perfectly horizontal for the compass to be accurate. If you hold it naturally, i.e. at chest level with the screen perpendicular to your line of sight, you hold it at a 10°+ angle, and that’s enough to cause an error of 20° or more in the compass reading
  • You can’t decide (and at the beginning it’s not clear) which altitude is displayed or stored in the log (the device can have up to 3 sources : GPS altitude, barometric altitude, map)
  • I really miss a function that would reset the statistics after X hours of inactivity, or ask at startup to do so

Cons compared to the Vista Hcx :

  • The screen is not as bright. Actually, even with the full brightness, it’s not bright enough. On a sunny day, you will have to find/make some shadow to see well
  • The Vista Hcx can save GPX files automatically to the memory card, one per day. The Oregon 300 doesn’t have this feature. A big loss !
  • The Oregon displays the time of day only with hours and minutes, not seconds. So if you used to sync the clock of a digital camera by taking a picture of the Garmin’s screen (e.g. for geotagging, OpenStreetMap), that won’t work anymore.
  • The Vista Hcx has a key to access the settings of the current mode (e.g. map, compass…). With the Oregon 300, you have to go back to the root page, navigate to Settings, choose the right category, change what you need, then go back to the root page and return to the mode you were in. Very inefficient !


Personal review of the Garmin eTrex Vista Hcx

After several months using my first GPS hand held device, the Garmin eTrex Vista Hcx, here are the pros and cons.

Pros:

  • Fast boot and GPS fix (typically 15 sec – 1 min)
  • Nice and readable screen
  • Good dimensions for a hand held device
  • Can be used for turn-by-turn navigation (ie. car) if you have the right map
  • Uses normal AA batteries
  • USB connection and USB mass storage mode (to send the maps or get the logs)
  • Power over USB : very useful in the car, but no battery charging over USB
  • Good battery performance (8h or more)
  • Can save logs (ie. tracks) to the memory card
  • Customizable, ie. :
    • the main pages (displayed in a round-robin manner, ie. map, compass…) can be switched off or on at wish
    • on all pages (map, stats, …) the displayed fields can be defined very precisely (ie. average speed, max speed, ETA…)

Cons:

  • The controls need getting used to
  • There is only one main page for the statistics. I miss a second one, because I don’t need the same fields in my car as on my bike, so I have to change the fields manually when changing activity
  • Screen resolution could be better
  • Detailed maps are very expensive, especially for France
  • Detailed maps are slow to navigate in when zoomed out
  • Batteries can not be charged over USB
  • The map is one big file at a hard-coded location, so there is no way to have 2 map files and choose which one to use in the field (for instance, Garmin maps vs. OpenStreetMap)
  • The map file takes a loooooong time to generate.
  • The map file must be completely regenerated if you want to add a piece of map
  • No MacOS X support for the map file
  • Limited to 2GB cards
  • For the frequency of track points stored in the log, there are 5 settings, but even “Most often” is quite coarse, so I have to force it to “Every second”; but then the track becomes huge very fast. There is no good compromise.
  • Does not record HDOP, VDOP or similar dilution-of-precision information in the track logs
  • The device has to be held perfectly horizontal for the compass to be accurate. If you hold it naturally, i.e. at chest level with the screen perpendicular to your line of sight, you hold it at a 10°+ angle, and that’s enough to cause an error of 20° or more in the compass reading
  • You can’t decide (and at the beginning it’s not clear) which altitude is displayed or stored in the log (the device can have up to 3 sources : GPS altitude, barometric altitude, map)
  • I really miss a function that would reset the statistics after X hours of inactivity, or ask at startup to do so.
  • When the bike clip is fixed, the device doesn’t fit in its case anymore
  • Therefore, you have to remove the clip when not using the device on the bike, but the little thing is easy to lose, and I couldn’t even find it for sale on Garmin’s page 😦

Windows: Copy file path to the clipboard

I often want to send the path of a file by email, either of a local file or of one on a network share. Sadly, Microsoft didn’t think of this use case, at least up to Windows XP, so I wrote this little VBS script that does exactly that (under Vista, just press SHIFT+right click on the file, then Copy as path).

If the path is within a drive that is mapped to a network resource, the script will also replace the drive letter by the corresponding absolute location. The reason is that your colleagues may not use the same mapping as you do, which would make a link to “X:\…” useless.

Installation instructions :

  • Download this file
  • Save it as C:\Documents and Settings\USERNAME\SendTo\CopyLocation.vbs (replacing USERNAME by your Windows login)

Usage :

  • Right click on a file, select Send To, then CopyLocation.vbs
  • Wait a few seconds (especially the first time after a reboot)
  • Paste anywhere you want

Notes for the installation :

  • the SendTo directory is hidden by default in Windows, so enter the path either manually, or change your settings to show hidden files and dirs
  • It’s important to change the file name extension to .vbs

Troubleshooting :

  • If the path is not copied to the clipboard, open the file with Notepad and replace CopyToClipboardIe by CopyToClipboardWord.
    I have no idea which Internet Explorer setting prevents copying to the clipboard on some systems. Of course, the workaround requires MS Word to be installed

Keywords : windows explorer copy absolute path clipboard vbs script unc letter drive network

WordPress : insert all images into post

Like other users (see for example here and here), I was quite frustrated by the number of clicks it takes to insert all images attached to a post into its content.

Therefore I have implemented an “Insert all” function. It’s a button in the gallery tab that, when clicked, does exactly that.

Instructions :

  • Rename your wp-admin/includes/media.php as media.php.org
  • Put this file in place of the old one (see the shell equivalent after this list)
    (the .doc extension is just because of my blog hoster, it’s a php file)
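
As shell commands, assuming a standard WordPress layout and that you already renamed the downloaded .doc back to media.php (paths are placeholders):

cd /path/to/wordpress                              # adjust to your install
cp wp-admin/includes/media.php wp-admin/includes/media.php.org
cp /path/to/downloaded/media.php wp-admin/includes/media.php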

Tested with : WordPress 2.7.1

Audio quality of digital cameras

Here are sample videos of the Panasonic DMC-FX33 and the Canon IXUS 860 IS, also known as the SD 870 IS Digital (Canon’s naming scheme is really confusing).
The main purpose is to compare the audio quality. I was very disappointed by the FX33 in this regard. Even if far from perfect, the Canon is much better. It even has a sound-recording function, i.e. without video.

Too bad that online reviews and even magazines never say a word about the audio (and often not even the video) quality of cameras, since many, many people use their digital camera as a simple camcorder too nowadays. And too bad that it’s so hard to find sample movies ! There is really no good source of information on this criterion.

Canon IXUS 860 IS / SD 870 IS Digital sample movie

Panasonic DMC-FX33 sample movie

Sorry, I didn’t have any good source of loud noise, but it already gives you an idea.

Keywords : audio quality compact digital camera movie sample video samples sound

Simple solutions to capture DV under MacOS

I was looking for a simple (and cheap) solution to copy my DV tapes to my hard disk. Here is what I found :

  • Vidi : free
  • AVCVideoCap : free GUI included by Apple in the FireWire SDK, which can be downloaded here
  • Live Capture Plus : commercial, about €49
  • iMovie : commercial

iMovie doesn’t support the use case I am interested in, i.e. just saving one big .dv file. Live Capture Plus is too expensive for this purpose.

I was pretty happy with AVCVideoCap, until it recently decided not to capture anything anymore. After selecting an output file name, it says “File opened”, and if I try to capture, “Could not open device”, although iMovie and Vidi have no problem with the same hardware/software setup.

So I had to find another solution, and I came upon Vidi, which is now the tool I use. It can do much more than what I need, but it’s free and even provided with the source code.


Keywords: rip capture dv raw camcorder video movie mac macos

Mac/Linux/Unix: Record internet radios to listen to them later with streamripper

I wanted to record Internet radios to listen to them later (say, at work), with fixed-length files and the name of the radio in the file names. I was not totally convinced by Radio Lover.

Here is my solution based on streamripper, an open-source tool.

You will need to save the following script as “ripstream-fake-trackinfo”:

#!/bin/bash

DELAY=1200 # length of each recorded slice, in seconds


START=`date +%s`
LASTCOUNT=-1

# Force a track change at the beginning
echo "TITLE=BEGINNING"
echo "ARTIST=BEGINNING"
echo "."
sleep 1

# Emit a fake track change every DELAY seconds; streamripper starts
# a new file whenever the title changes
while true ; do
        NOW=`date +%s`
        COUNT=$(((NOW-START)/DELAY))
        if [[ $COUNT -ne $LASTCOUNT ]] ; then
                DATESTR=`date +%Y-%m-%d-%H-%M-%S`
                echo "ARTIST=$1"
                echo "TITLE=$DATESTR"
                echo "."
                LASTCOUNT=$COUNT
        fi
        sleep 1
done

The following script is “ripstream” :

#!/bin/sh

TRACKINFO=~/bin/ripstream-fake-trackinfo

cd ~/Desktop/Downloads/Radio || exit 1

# -D: output file name pattern; -E: external program providing the track info
streamripper "$1" -D "%S-%d" -E "$TRACKINFO $2"

And here is how to call it :

ripstream URL Name
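
For example (the stream URL and the name are placeholders):

ripstream http://streams.example.org/radio.mp3 MyRadio

Recordings then end up under ~/Desktop/Downloads/Radio, cut into 20-minute tracks (DELAY=1200) named after the radio and a timestamp.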

Keywords : record mp3 stream fixed length internet radio

How NOT to kill a brand new hard disk

Some time ago, I bought a new hard disk for my Macbook Pro, namely a Samsung Spinpoint M6 HM320HI. I was pretty happy with it, until I noticed a weird and annoying noise when the drive was idle. It was like a fan noise, 10 seconds long, then 5 s quiet, and again.

I forced some disk activity to investigate:

(while true ; do echo test > /tmp/test ; sleep 4 ; done)

The noise disappeared.

After a short search on the web, I found Unix users reporting similar problems related to power saving. The disk parks its heads much too quickly, and unparks them at the first access.

THE DANGER:

Not only is the noise annoying, this constant movement of the heads is also very bad for the disk itself. Using smartctl from MacPorts, I could see how fast the Load_Cycle_Count was increasing :

225 Load_Cycle_Count 0x0032 100 100 000 Old_age Always - 293

After 2h of using my computer, the counter was already at 250 (that’s 25 load cycles more than before).
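
For reference, this is how the attribute can be read with smartmontools (the device name is an assumption; on my machine the internal disk is disk0):

$ sudo smartctl -A /dev/disk0 | grep Load_Cycle_Count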

At this point, I had only circumvented the problem with the write loop above; I still had to solve it. Under Linux, there is a standard tool to change hard disk settings, hdparm, but it has not been ported to MacOS X yet. However, I found another tool, hdapm, distributed with its source code.

First try:

$ sudo ./hdapm disk0 max
disk0: SAMSUNG HM320JI
Setting APM level to 0xfe: FAILED: APM not supported

Supposedly, my disk does not support APM, which would be very surprising in 2008.
To investigate further, I booted Linux (a Knoppix 5.3 live CD) and called hdparm (values from 128 to 254 disallow spin-down, with higher values meaning less aggressive power management; 255 disables APM entirely):

hdparm -B 254 /dev/sda

It worked !

I restarted my Mac under MacOS X, and the noise had disappeared ! Even better, hdapm now works :

$ ./hdapm disk0 254
disk0: SAMSUNG HM320JI
Setting APM level to 0xfe: Success

Mission accomplished.

I don’t know what the cause of the problem is (MacOS X ? the SATA firmware ? the Samsung drive ?), but at least my disk is now quiet, and will live (I hope) for a long time.
hdapm’s author couldn’t understand either why hdapm refused to work in the first place.

I hope this article will help other people with the same problem.

Keywords : hard drive hard disk clicking noise regular mac hdparm hdapm apm samsung power saving


spip2mediawiki : convert SPIP syntax to Mediawiki

This is a Perl script I wrote when we had to switch MFC from the CMS SPIP to MediaWiki. In this form, it’s intended to be put on a web server as an online tool.

It’s far from perfect, but it already does 80+% of the job. Some things you will still have to do by hand:

  • conversion of internal links
  • handling of images, attachments and medias

Instructions :

  • Download spip2mediawikipl.doc
  • Rename the file to spip2mediawiki.pl (WordPress.com doesn’t allow uploading .pl files)
  • Upload it to your server and use it, or hack it a bit to use it from the command line

Keywords : mediawiki spip conversion convert migration migrate syntax pages