Sunday, December 3, 2017

Why Love Linux



Why love Linux? Or any FOSS project? Because it's all in the community and culture surrounding it. https://whylovelinux.wordpress.com/

Thursday, November 28, 2013

A Psalm to Linux

A little humour in celebration of a Thursday morning victory.

Oh holy Linux, holy Linux.
Thou art forever my only love.
My heart be filled with joy at your Enter.

Oh holy Linux, holy Linux.
We bring you praise.
We give you all the glory.

Oh holy Linux, holy Linux.
May you forever be,
Ctrl, Alt and Delete free.

Friday, March 18, 2011

Linux iBurst Connection Script

I'm always doing things to make my life easier, and one of the ways I achieve this is writing scripts to automate tasks or make them more informative and easier to use.

One of these tasks is connecting to the internet on my iBurst connection at home. This is for the USB modem, which uses PPPoE over an Ethernet device emulated by the driver.

Originally I had to use "pon iburst" to connect, then check /var/log/messages and ifconfig to see if and when I was connected. Over time I scripted more and more of this, and this post presents the resulting script.

It has matured quite a bit over the past few years. A list of features can be summarized as follows:

  1. When instructed to connect, it will invoke "pon <peer>" for the configured peer.
  2. Shows connection progress using a zenity dialog.
  3. All important feedback like errors are reported via zenity message boxes.
  4. A successful connect is reported via Gnome's notifications using notify-send. This only happens if the notify-send command is available and you are logged into a Gnome session; otherwise it falls back to a zenity message box (see the sketch after this list).
  5. The assigned IP address will be displayed in the message when connected successfully.
  6. If pppd dies it will fail the connection attempt immediately and report this.
  7. If the connection attempt runs for too long without getting an IP address, it is timed out after a configured period.
  8. If the connection attempt fails for any reason it will ensure pppd exits and kill it if necessary.
  9. Connects and disconnects can be cancelled. The script will ensure pppd exits, killing it if necessary.
  10. When killing pppd the script will use increasing levels of aggressiveness to work around being interrupted by blocking processes.
  11. If a connection existed previously it will disconnect unless otherwise instructed with command line switches.
  12. Can reliably detect if an existing iBurst connection is already active.
  13. Can be instructed to connect quietly.
  14. If the iBurst device isn't plugged in, it will display an error.
  15. This error can be suppressed with a command line switch (useful in automation, e.g. when executed at login).
  16. Supports hooks for pre and post connect, successful disconnect and any error.
  17. Thorough logging to a configured log file.
  18. When executed as a non-root user it will wrap itself inside a sudo session. This way, if sudo is configured to allow execution of the script without a password, you can run the script as any of the allowed users.
  19. Automatically controls the up/down state of the iBurst driver interface (usually ib0). This was discovered through experience and experimentation, and noticeably improves the stability and success rate of connection attempts.
  20. Some misc other stability and reliability tweaks in the way connections are managed.
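To illustrate feature 4, the notification logic amounts to something like the following sketch. This is not the script's actual code; the $IP variable and the session check are placeholders and assumptions:

# Sketch of the notify-send/zenity fallback; $IP stands for the assigned address.
if command -v notify-send >/dev/null 2>&1 && [ -n "$DBUS_SESSION_BUS_ADDRESS" ]; then
    notify-send "iBurst" "Connected successfully with IP $IP"
else
    zenity --info --text="Connected successfully with IP $IP"
fi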
This is quite an extensive list of features. I have found the script to be very reliable these days, and can probably say that I connect successfully 100% of the time. The only times I have problems are when the ISP is slow authenticating me. In the past few months I haven't had any problems that required manually playing around with plugging/unplugging the device, upping/downing the interface, etc.

So I decided to share it.

How to Use
Requirements to use it:
  1. Have zenity installed: apt-get install zenity
  2. If using Gnome, it's recommended to also have notify-send installed: apt-get install libnotify-bin
  3. Have a pppd peer set up that can be connected using: /usr/bin/pon peername
  4. Download the script at: http://sites.google.com/site/qbeukesblog/connect.iburst.sh
With the above requirements satisfied, you can edit the script and change the configuration options at the beginning. The most important options are the IB_IFACE and PEER variables. PEER should be the name of the peer supplied to /usr/bin/pon. IB_IFACE should be the network device name created by the iBurst driver, usually ib0.
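For example, using the peer from the "pon iburst" example earlier, the configuration section might look like this (values are illustrative):

PEER="iburst"      # name of the pppd peer passed to /usr/bin/pon
IB_IFACE="ib0"     # network interface created by the iBurst driver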

Then, if you're going to execute this command as a non-root user, add the following to your /etc/sudoers file. Remember to substitute your username and the full path to the connect script.
your-username ALL=(root) NOPASSWD: /opt/iburst/connect.iburst.sh

After this you can drive the script with the commands below. These assume the script is on your command path.

Connect to the configured peer. Will disconnect if there is already an active connection:
connect.iburst.sh

Connect to the configured peer. If already connected, nothing will be done.
connect.iburst.sh --no-disconnect

Only attempt a connection if the iBurst device is plugged in. This is detected by checking if the IB_IFACE interface is available.
connect.iburst.sh --conditional

Only attempt a connection if the iBurst device is plugged in. Also, if already connected, nothing will be done.
connect.iburst.sh --no-disconnect --conditional

Hooks

If you want to use the hooks, you need to create a directory called connect.iburst.hooks in the same location the script is executed from. So if you execute the script from /opt/iburst, you need to create the directory at /opt/iburst/connect.iburst.hooks. If you want to put the directory in a different location, you can do so by editing the HOOKS_DIR variable at the beginning of the script.

From here you can create hooks for 4 types of events:
  1. before: Pre Connect
  2. disconnect: Successful disconnect. Hooks receive one argument: the trigger for the disconnect.
  3. success: Post successful connect. Hooks receive one argument: the assigned IP address.
  4. error: Error condition
The files inside the hooks directory need to be named as follows:
hookid.eventname.sh

Here hookid is simply a name identifying the hook. You can choose this name yourself; it may be anything valid in a filename.

The eventname is the name of one of the 4 hook events, as listed above (before, disconnect, success or error).

Finally you just need the .sh extension.

As a note, for any given event these hooks are executed in no specific order. They are also executed synchronously, so any command that won't exit needs to be backgrounded manually.

Then, for the hook to be executed the file needs to be executable. 

For example, if we wanted a hook that performs a dynamic DNS update whenever we connect successfully, we would hook into the success event, so we'd call the script dynamicdns.success.sh.

The success event supplies one argument to the script: the IP address we were assigned. We can then use this IP address when doing the DNS update.
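A minimal sketch of such a hook, assuming a made-up dynamic DNS endpoint (substitute whatever your provider actually expects):

#!/bin/bash
# dynamicdns.success.sh - hypothetical success hook.
# The success event passes the assigned IP address as the first argument.
IP="$1"
# Hypothetical update URL; replace with your provider's actual endpoint.
curl -s "https://dyndns.example.com/update?hostname=myhost.example.com&myip=${IP}"

Remember to make it executable with chmod +x, otherwise it will be skipped.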

Example Setup

In my setup, I have the script and its hooks installed at /usr/zbin/connect.iburst.sh and /usr/zbin/connect.iburst.hooks.

Then I have the script configured in my sudoers file so I can execute it without a password.

When I log into Gnome, I have a shortcut icon on my panel that I can click to connect explicitly. This executes /usr/zbin/connect.iburst.sh without any switches, so it will first disconnect if already connected.

Finally, I also added a command to my Gnome startup, so that whenever I log in and the iBurst device is plugged in, a connection is created. Furthermore, if a connection already exists it won't disconnect and reconnect; this covers the case where I, for example, log out and back in for whatever reason. This is achieved by adding the following command line to startup:
connect.iburst.sh --no-disconnect --conditional

Download

The script can be downloaded from here: http://sites.google.com/site/qbeukesblog/connect.iburst.sh

Friday, August 27, 2010

Winning at Pick the Mug

Everyone knows the classical Pick the Mug game: you have 3 mugs and you have to guess which mug has the stone under it. Every time you play you have a 33% chance of guessing correctly.

But what if, after you make your first guess, one of the other 2 mugs is revealed (specifically, one that doesn't have the stone) and you get the option of changing your guess? In other words, if you pick mug A and mug B has the stone, then mug C is revealed and you can choose to either stay with mug A or change to mug B. Or, if you choose mug B and it has the stone, then either mug A or C is revealed and you can switch between your first choice and the one that wasn't revealed.

If you stay with your choice, you still have only a 33% chance of winning. But if you change your choice, your chance of winning increases to 66%. The reason: switching wins exactly when your first guess was wrong, which happens 2 out of 3 times. So under these rules, you double your chances of winning by consistently changing your choice.

When I first heard this, I could follow the math, but that's all I thought it was: a bit of math. I didn't think it would reflect the real world; in other words, I didn't think applying it would actually improve my chances of winning. After all, whether you stay or change, the mug with the stone doesn't move. So I decided to put it to the test and wrote a little program that plays the game a given number of rounds using each of the 2 strategies, then reports how effective each strategy was at winning.
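The program itself was written in Java (download link below), but the core logic amounts to something like this shell sketch, which uses the fact that switching wins exactly when the first guess was wrong:

#!/bin/bash
# Minimal sketch of the simulation; not the original Java program.
# Switching wins exactly when the first guess misses, so counting the
# "keep" wins is enough to derive both strategies' results.
GAMES=100000
keep=0
for ((i = 0; i < GAMES; i++)); do
    prize=$((RANDOM % 3))    # mug hiding the stone
    guess=$((RANDOM % 3))    # player's first pick
    if ((guess == prize)); then
        ((keep++))           # staying would have won this round
    fi
done
echo "Keep Guess:   $keep / $GAMES"
echo "Change Guess: $((GAMES - keep)) / $GAMES"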

After doing this, playing each strategy 1 million times, I found that the game was won 33% of the time when staying with the initial choice, and 66% of the time when changing. Here is the output of 5 runs, each run being reseeded with the current time, and all random number generation within a run coming from the same java.security.SecureRandom instance.
Playing 1000000 games with seed: 1282940127868
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.2673 |  332673 |
| Change Guess  | 66.6309 |  666309 |
+ ------------- + ------- + ------- +
Playing 1000000 games with seed: 1282940139406
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.3591 |  333591 |
| Change Guess  | 66.5868 |  665868 |
+ ------------- + ------- + ------- +
Playing 1000000 games with seed: 1282940150963
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.3038 |  333038 |
| Change Guess  | 66.6906 |  666906 |
+ ------------- + ------- + ------- +
Playing 1000000 games with seed: 1282940162498
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.3286 |  333286 |
| Change Guess  | 66.6777 |  666777 |
+ ------------- + ------- + ------- +
Playing 1000000 games with seed: 1282940174009
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.3406 |  333406 |
| Change Guess  | 66.6462 |  666462 |
+ ------------- + ------- + ------- +

There you have it: the number of wins consistently matches the statistical probabilities. I'm still battling to completely believe this, but the numbers show it works.

Just as a fun test comparing java.security.SecureRandom with the standard java.util.Random, I ran 5 runs with the standard Random, and the results seem to be pretty much the same.

Here is the output of 5 runs using java.util.Random:
Playing 1000000 games with seed: 1282943227982
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.3512 |  333512 |
| Change Guess  | 66.6876 |  666876 |
+ ------------- + ------- + ------- +
Playing 1000000 games with seed: 1282943228344
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.3279 |  333279 |
| Change Guess  | 66.7352 |  667352 |
+ ------------- + ------- + ------- +
Playing 1000000 games with seed: 1282943228758
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.4019 |  334019 |
| Change Guess  | 66.6114 |  666114 |
+ ------------- + ------- + ------- +
Playing 1000000 games with seed: 1282943229166
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.3664 |  333664 |
| Change Guess  | 66.6776 |  666776 |
+ ------------- + ------- + ------- +
Playing 1000000 games with seed: 1282943229573
+ ------------- + ------- + ------- +
| Description   | Perc    | Count   |
+ ------------- + ------- + ------- +
| Keep Guess    | 33.3731 |  333731 |
| Change Guess  | 66.6791 |  666791 |
+ ------------- + ------- + ------- +

Random, as you can see from the seeds, was MUCH faster as well: all runs took about 2 seconds in total, compared to 57 seconds with SecureRandom.

To test whether SecureRandom gives results generally closer to the theoretical probabilities, I did 30 runs of 1,000,000 games each and looked at the deviations from 33.3% and 66.6%. Each run was reseeded with the current time. The results:
Playing 30 rounds with 1000000 games each.
+ --------------------- + ------- + ------- + ------- +
| Description           | Min     | Max     | Range   |
+ --------------------- + ------- + ------- + ------- +
| Secure Keep Guess     | 33.2524 | 33.4422 |  0.1898 |
| Secure Change Guess   | 66.5545 | 66.7486 |  0.1941 |
| Standard Keep Guess   | 33.2437 | 33.4309 |  0.1872 |
| Standard Change Guess | 66.5841 | 66.7979 |  0.2138 |
+ --------------------- + ------- + ------- + ------- +

So the results are pretty much the same; we're only picking among 2 or 3 options, after all. There is no real conclusion here, mostly just satisfied curiosity.

You can download the application's code at: https://sites.google.com/site/qbeukesblog/MugsGame.tar.bz2. This is only the code for the initial tests. The experiments with the generators aren't included.

Wednesday, August 11, 2010

Cross Compiling OpenVPN for Windows on Linux

I went through quite a struggle to build OpenVPN and a custom installer for Windows using my Linux machine. This post describes how to achieve this. To make it easier I packaged all the actual build steps into a script.

This post builds on the cross compiling environment prepared in my previous post, Building a Cross Compiler on Linux for MinGW32.

If you want to build the installer exe for OpenVPN as well, you will need NSIS (Nullsoft Installer System) installed. Download and compile it from http://nsis.sourceforge.net/.

You will still be able to build the exes without NSIS. NSIS is only needed for packaging them into an installer.

Also note that this doesn't build the TAP driver; it just copies the prebuilt one. To build it, you would have to install the Microsoft DDK, create an amd64 cross compile environment, and modify my script to build the driver instead of copying it.

First download the following:
  1. Prebuilt packages from http://openvpn.net/prebuilt/. Choose the latest -prebuilt .tbz file.
  2. Download the latest OpenVPN source code tar.gz archive.
  3. Download the build scripts from here.
Then extract the build scripts to your home directory. This will then create a directory ~/openvpn-src which contains ~/openvpn-src/archive. Copy the prebuilt and OpenVPN source code packages into ~/openvpn-src/archive.

Modify the ~/openvpn-src/env.sh script to reflect your cross-compiler environment. The variables have the following purposes (a hypothetical example follows the list):
  • PREFIX - Where to install the compiled OpenVPN files
  • TARGET - The build environment you're targeting, for example i686-mingw32msvc
  • TOOLCHAIN - The root location the cross compiler binaries are accessed from.
  • MAKENSIS - The full path to your makensis binary. Leave this empty if you don't want the installer created or don't have NSIS installed.
  • The rest of them are standard autoconf environment variables.
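For reference, a filled-in env.sh might look something like this. All of the values below are assumptions based on my cross compiler post; adjust them to your own toolchain:

# Hypothetical env.sh values; adjust paths and target to your toolchain.
PREFIX=$HOME/openvpn-dist
TARGET=i686-mingw32msvc
TOOLCHAIN=$HOME/mingw32
MAKENSIS=/usr/local/bin/makensis   # leave empty to skip building the installer
export CC="$TOOLCHAIN/bin/$TARGET-gcc"
export CFLAGS="-I$TOOLCHAIN/include"
export LDFLAGS="-L$TOOLCHAIN/lib"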
When you're ready you can kick off build-openvpn.sh.

The resulting OpenVPN .exe files will be located in the directory your PREFIX variable points to, which by default would be ~/openvpn-dist.

This script was tested with openvpn-2.1.1 and 2.1_rc22-prebuilt.tgz.

Building a Cross Compiler on Linux for MinGW32

I was installing OpenVPN the other day and after getting everything up and running wanted to build an installer for Windows, to make it easier for the average person to be able to connect to the company VPN.

The basic idea was to prompt for a P12 cert and its private key password during installation, and then install these along with the OpenVPN client. OpenVPN has an option, "askpass", which allows you to store the private key password for a CRT or P12 cert. The build provided on openvpn.net, however, has this feature disabled. Due to this I had to recompile OpenVPN.

Not having Windows installed, I knew I would have to figure out another way to compile it. I could either try to get MinGW32 running in Wine, or set up a cross compiler environment. I thought the latter would be the easier option (as I was quite familiar with the existing Linux build environment). Boy, was I wrong.

Either way, during the process of building the cross compiler and sorting out problem after problem, I came across a script written by Paul Millar which extracts, patches and compiles MinGW32 for you as a cross compiler. This was a life saver.

To honor his hard work, even though the script is still a bit tricky to use, I'm documenting it here.

Firstly, download and extract this script into ~/mingw-src. You can get it here. Inside this directory also create 2 other directories called Archive and Patches.

Then, get hold of the following source packages from http://www.sourceforge.net/projects/mingw/files/:
  • gcc
  • binutils
  • w32api
  • mingw-runtime
At the time of writing this, they had the following names:
  • gcc-3.4.4-3-msys-1.0.13-src.tar.lzma
  • binutils-2.19.51-3-msys-1.0.13-src.tar.lzma
  • w32api-3.14-3-msys-1.0.12-src.tar.gz
  • mingw-runtime-3.14-src.tar.gz
Then extract these archives into ~/mingw-src/Archive. The lzma archives first have to be decompressed with lzma in a temporary location, and the resulting tar archives extracted into ~/mingw-src/Archive.
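For example, for the gcc archive the extraction might look like this (run from wherever you downloaded the archives):

lzma -d gcc-3.4.4-3-msys-1.0.13-src.tar.lzma
tar -xf gcc-3.4.4-3-msys-1.0.13-src.tar -C ~/mingw-src/Archive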

Extract them one at a time, because some of them also contain patches. As you extract each one, copy its patches into the ~/mingw-src/Patches directory, naming them {prefix}_name.patch, where {prefix} is gcc, binutils, w32api or mingw-runtime, depending on which archive the patch originated from.

For example, the binutils package has 2 patches, nl.
  1. 01-scriptdir.patch
  2. binutils-2.19.51-1-msys.patch
These would respectively be placed in ~/mingw-src/Patches with these names:
  1. binutils_01-scriptdir.patch
  2. binutils_binutils-2.19.51-1-msys.patch.
The important part is the prefix and underscore; what comes after it doesn't actually matter. So you could name them binutils_1.patch and binutils_2.patch if you prefer.

When you're done with this, edit the parameters file, applying the following changes (a hypothetical excerpt follows the list):
  1. Comment the LANGUAGES variable.
  2. There are 2 definitions of the DEFAULT_CC variable. The first assigns it the value gcc-4.0 and the second prefixes it with a distcc invocation. Change the value of the first to only gcc and comment the second.
  3. Comment the DISTCC_LOG and DISTCC_HOSTS variables.
  4. Change MAKE_PROCESS_COUNT to equal the number of CPU cores you have available, multiplied by 2, plus 1. So if you have 4 cores, this would be (4*2) + 1, which is 9.
  5. Comment the ROOT_TARGET variable.
  6. Comment the CLUSTER_DEPLOY variable.
  7. Check what the extensions are for your binutils, w32api and mingw-runtime archives in the ~/mingw-src/Archive directory. If it's not tar.gz for any of these, update the BINUTILS, W32API and RUNTIME variables to reflect the correct extension.
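Put together, the edited section of parameters might look roughly like the excerpt below. The stock values in the file differ; only the shape of the edits is shown, with numbers referencing the list above:

#LANGUAGES="..."                   # 1. commented out
DEFAULT_CC="gcc"                   # 2. first definition, was gcc-4.0
#DEFAULT_CC="distcc $DEFAULT_CC"   # 2. second (distcc) definition commented out
#DISTCC_LOG="..."                  # 3. commented out
#DISTCC_HOSTS="..."                # 3. commented out
MAKE_PROCESS_COUNT=9               # 4. (4 cores * 2) + 1
#ROOT_TARGET="..."                 # 5. commented out
#CLUSTER_DEPLOY="..."              # 6. commented out
BINUTILS="tar.lzma"                # 7. match your archives' actual extensions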
After the script has finished building the packages, change to ~/mingw32 where the packages were deployed and run the following commands:
cd ~/mingw32
cp -Rp i686-mingw32msvc/include/ i686-mingw32msvc/lib/ ./
rm -rf i686-mingw32msvc/include/ i686-mingw32msvc/lib/
ln -s ../include i686-mingw32msvc/include
ln -s ../lib i686-mingw32msvc/lib
Then you should have a working cross compiler environment in ~/mingw32.

To use it, load the following environment:
PREFIX=$HOME/openvpn                 # where the compiled package will be installed
TARGET=i686-mingw32msvc              # the cross compiler's target triplet
TOOLCHAIN=$HOME/mingw32              # where the toolchain was deployed
BIN="$TOOLCHAIN/bin"
export PATH="$BIN:$TOOLCHAIN/$TARGET/bin:$PATH"
export CC="$BIN/$TARGET-gcc"         # use the cross gcc instead of the native one
export LDFLAGS="-L$TOOLCHAIN/lib"
export CFLAGS="-I$TOOLCHAIN/include"
export CROSS_COMPILE="$TARGET"
Remember to update the value of PREFIX to where you want to install the resulting package, which in the example is ~/openvpn.

Once loaded, you can compile a package (like OpenVPN) with the following command, run from the directory where its source code is extracted.
./configure --prefix=$PREFIX --build=$TARGET && make && sudo make install
And that's all there is to it. The resulting package will be located in the directory where PREFIX points.

Sunday, January 17, 2010

Upgrading with apt-get replaces packages built from source

Sometimes you need to apply your own flavor to a distribution package. With apt this is a dream, as you simply (using grub2 as the example):
  1. Instruct apt to install all the dependencies needed to rebuild a specific package:
    sudo apt-get build-dep grub2
  2. Fetch, extract and patch the source code with the distro patches:
    apt-get source grub2
  3. Make your modifications
  4. Rebuild (from inside the extracted source tree):
    dpkg-buildpackage -j2 -rfakeroot -b
  5. Then install the generated .debs
This is all fine until you get to upgrading your packages to newer versions. The package selection policies always treat self-installed debs as lower priority than those coming from the repositories.

So your package will always be replaced with its original distro version. To give a clearer example: assume you have linux-image-2.6.31-17-generic installed. You fetch its source, fix a bug, rebuild and install it. Then, the next time you upgrade, the repository's version of linux-image-2.6.31-17-generic will be installed again, turning the bug into a "zombie".

I had a look around, and it seems the solutions floating around are to either
  1. increase the version of the package so it's higher than the one from the repository, or
  2. to "hold" the package.
The problem with the first approach is that you need to pick the version carefully, so a legitimate upgrade doesn't go unnoticed. If you were to increase version 1.0 to 1.1 and a version 1.0.1 gets released, you'll miss it, because your version is still higher.

The problem with the second approach is that you'll never have the package upgraded.

So in both cases you'll probably end up monitoring these packages for upgrades yourself, to ensure you don't miss any. There is also the option of reviewing which packages are proposed for an upgrade, and skipping those you built yourself unless their versions have increased.

This was inefficient for me, since my list of self-built packages was growing increasingly large, and I had more than one machine to do all this maintenance on.

So I set out to find a solution. In the process of applying a possible one, I found that it's actually very simple; halfway through, the problem was already solved.

So here it is:
  1. Create your own repository
  2. Load your packages into the repository
  3. Install them from the repository
  4. Done
That's all there is to it.

So, to give slightly more detailed instructions:
  1. When you're done testing your changes and are ready to load the package into the repository, build it with the dpkg-buildpackage command. This is necessary to generate the .changes file.
  2. Set up a .deb repository and import the packages. See this link for instructions.
  3. Add your repository to the end of your /etc/apt/sources.list file.
  4. Install the package
Furthermore, when setting up my repository, I gave a custom name as the repository's "Suite". This way I could more easily pin the repository. Doing this, however, results in a warning when importing the packages with the reprepro utility. You can tell reprepro to ignore the warning and continue, or you can build your packages prepared for this repository.
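As an illustration, a conf/distributions entry with a custom Suite could look something like this. The names here are made up; reprepro requires at least the Codename, Architectures and Components fields:

Origin: Self-built
Label: Self-built
Suite: mycustom
Codename: mycustom
Architectures: i386 source
Components: main
Description: Locally rebuilt packages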

To build packages prepared for the repository, do the following from inside the package's source tree (a worked example follows the list):
  1. Open the 'debian/changelog' file
  2. The first line would read something like:
    gnutls26 (2.8.3-2) unstable; urgency=low
    OR
    grub2 (1.97~beta4-1ubuntu4.1) karmic-security; urgency=low
  3. The part after the package version in parentheses and before the semicolon is the target "suite"; in these cases 'unstable' and 'karmic-security' respectively.
  4. Change this to whatever your configured 'Suite' value is, and build the package.
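Using the grub2 line above as the example, and assuming your repository's Suite is named mycustom, the edited line would read:

grub2 (1.97~beta4-1ubuntu4.1) mycustom; urgency=low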
Another problem I ran into when trying to import my modified kernel image was a warning:
Cannot put file 'virtio-modules-2.6.31-17-generic-di_2.6.31-17.54_i386.udeb' into component 'main', as it is not listed in UDebComponents!
This was because I didn't include a UDebComponents entry in my repository configuration; it has to be listed AFTER the Components entry. So if you're going to rebuild and import the kernel, you need something similar to this in your 'conf/distributions' file:
Components: main non-free contrib
UDebComponents: main
And that's about all there was to it. I suspect it might also depend on the order of the repositories, so if this doesn't seem to help, try swapping the order your repositories are listed in your sources.list, or try pinning your custom repository at a higher priority.
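If you go the pinning route, a hypothetical /etc/apt/preferences entry for the custom suite above could look like this (a priority above 1000 makes apt prefer the pinned packages even over higher versions):

Package: *
Pin: release a=mycustom
Pin-Priority: 1001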