HowTo: Swap the window gadgets back to the right side of the window in Ubuntu Lucid.

The release of Ubuntu’s brand new look in Ubuntu 10.04 Lucid Lynx Alpha 3 brought mixed reactions, but probably none more so than the decision to move the window minimise, maximise and close gadgets from their traditional placement in the upper-right corner of the window to the upper-left, à la the Apple Mac.

Gadgets on the left side

Many people, myself included, do not like this. To fix it and make it look like this:

Gadgets on the right side

…is very easy to do. Read on.

Simply open up a terminal and type in the following at the $ prompt:

$ gconftool -s /apps/metacity/general/button_layout -t string ":maximize,minimize,close"

Voila! Instant fix! But how does it work?

Gnome is highly customisable. One of its configuration options tells Metacity where, and in what order, to render elements on a window. In this case the string “:maximize,minimize,close” means to render the “maximise”, “minimise” and “close” gadgets in that order, and the colon at the start means to render them on the right side of the window: gadgets listed after the colon go on the right, and gadgets listed before it go on the left. If you move the colon to the far right of the string, your window gadgets will appear on the left of the window.

Don’t like how the Ubuntu team have also changed the maximise and minimise button order around? Be a rebel! Change it back by replacing the configuration string above with “:minimize,maximize,close”.
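If you’re curious how the colon does its job, the split is easy to see with a little pure-shell string surgery (this is just an illustration of the parsing rule; Metacity does the real work internally):

```shell
# The gadget layout string: names after the colon are drawn on the
# right edge of the window, names before it on the left edge.
layout=":maximize,minimize,close"
left="${layout%%:*}"    # everything before the colon
right="${layout#*:}"    # everything after the colon
echo "left side : ${left:-(none)}"
echo "right side: ${right:-(none)}"
```

Swap the colon to the other end of the string and re-run it to see the two sides trade places.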

Play around with it and enjoy. :)

HowTo: Fix Virtualbox not allowing you to attach USB devices to your virtual machines.

Virtualbox is a great desktop virtualisation tool, but one of its annoying installation niggles is that when you set up and run a virtual machine, you can’t attach any USB devices to it at all because all your USB options in Virtualbox are greyed out.

There is a raft of different solutions to this problem out there, ranging from adding an extra line to the /etc/fstab file to modifying your udev rules, but the real cause of the problem is simply that your login name does not have permission to access Virtualbox’s USB driver, which sits between the VM’s virtual USB hardware and your real USB stack.

During the initial installation process, Virtualbox sets up a new group called vboxusers, but it doesn’t put your login name into it. Since using USB in Virtualbox occurs at the device level, your normal user permissions that allow you to run up virtual machines in general are not enough to manipulate Virtualbox’s USB driver. As a result, you cannot tell Virtualbox to attach a given USB device to your virtual machine.

Like most things, this is easily fixed of course.

  1. If your login name is johndoe, all you need to do is jump into a terminal and type in:

    $ sudo adduser johndoe vboxusers

    …which will add the user johndoe to the vboxusers group.
  2. Now close all applications and windows, and log yourself out of Ubuntu. You don’t need to reboot, but you can if you’re the kind of person who enjoys the subtle pleasures of watching your PC start up.
  3. Log yourself back in again as normal. This will read in your new group membership.
  4. Fire up Virtualbox and start your virtual machine(s) as normal. You will now find that you can attach USB devices to all your VMs via the Virtualbox Devices menu without any further ado.
  5. Pat yourself on the back – you’re done. :)

Until Sun Microsystems modify the deb installer to add the current login to the vboxusers group during install, these instructions should apply to just about any version of Virtualbox sporting the problem, on any Linux distro. 
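A quick way to confirm the group change took effect after logging back in (whoami just picks up your current login; the group name is the vboxusers group from above):

```shell
# List the groups your current login belongs to; vboxusers should now
# be among them. If it isn't, you probably haven't logged out and
# back in yet.
if id -nG "$(whoami)" | tr ' ' '\n' | grep -qx vboxusers; then
    echo "vboxusers: OK"
else
    echo "vboxusers: missing - log out and back in, or re-run adduser"
fi
```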

HowTo: Fix being unable to click in Flash applications in Ubuntu 64-bit

Ubuntu 9.10 (Karmic Koala) has a curious bug on the 64-bit Intel/AMD version whereby on some systems you can play Flash perfectly, but the Flash application does not recognise any mouse clicks in it. This means in sites such as YouTube, you can’t click the mouse to play and pause, or seek in a video – you’re forced to use the keyboard.

This is a known bug with the flashplugin-installer package and is currently being worked on by Canonical. In the meantime, if you wish to fix the problem yourself now rather than wait for the official fix, just follow these instructions…

  1. After you have installed Flash in the usual manner, open a terminal and type in the following:

    $ sudo gedit /usr/lib/nspluginwrapper/i386/linux/npviewer
  2. Once the GEdit text editor (or substitute your favourite) opens, insert the following line just before the last line (should appear in most installations as the fourth line out of a total of five lines):

    export GDK_NATIVE_WINDOWS=1
  3. Save your changes and exit your text editor.
  4. Now restart any applications that use Flash, such as Firefox.
  5. In the case of Firefox, go and visit a page that uses Flash. You should find that you can now click in Flash without a problem.
  6. Pat yourself on the back. You’re done.
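For reference, here is a hypothetical sketch of what the edited npviewer wrapper might end up looking like. The surrounding lines are made up for illustration and will differ on your installation; the only part that matters is that the export line sits before the final exec:

```shell
#!/bin/sh
# npviewer (sketch only - your file's other lines will differ)
export GDK_NATIVE_WINDOWS=1    # the one-line fix for the click bug
exec "$(dirname "$0")/npviewer.bin" "$@"
```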

HowTo: Remotely collaborate with another user in a terminal

You do remote tech support for clients. One client calls you up needing assistance. You SSH into their machine as usual to check out the problem. You probably also have them on the phone so you can walk them through what you are doing or ask them questions, but long support calls can be expensive over a mobile phone or internationally, and it’s tiresome to keep switching to an IM client window to write comments, especially if the client is not running a graphical session and only has a text console to look at.

Sometimes actions speak much louder than words, and it would be great for the client to be able to see what you are doing without cumbersome and bandwidth-hogging remote screen tools like VNC. Is there an easy way to collaborate in a terminal?

There certainly is…

  1. First up, log in to the client’s remote machine in question using their login, eg: log in to the PC at 192.168.0.27 with the username “fred”:

    $ ssh fred@192.168.0.27
  2. Once logged in, we need to create a screen session. It needs a name, so I’ll call mine “blah”, but you can make it any name you want. Type in the following (note that the “-S” parameter is uppercase):

    $ screen -S blah
  3. Now instruct the client to open a terminal locally and attach themselves to your screen session by typing in the following command (note that the “-x” parameter is lowercase):

    $ screen -x blah
  4. You are now both looking at a common screen session. Anything that either of you type along with any command output will be automatically and immediately seen by the other person in real-time!
  5. Once you’ve finished sorting out the client’s problem, terminate the screen session with:

    $ exit
  6. You and the client will both be returned to your regular local terminal sessions, which you can now close with the “exit” command again.

If you don’t have another machine to try this with, you can try it using two terminal windows on your own local machine. You don’t need to SSH in since you’re already logged in, just run both screen commands in their own respective windows and watch as any new information entered, including command output, appears in both terminals simultaneously.

You are not limited to only having two terminals sharing a screen session – you can have an unlimited number of terminals, remote or local, share one screen session.

Note that the shared screen session only works with the same user login. You cannot have two separate users share a screen, hence the need to log in using the client’s username before setting up the screen session. If the client’s username does not have sudo rights, then once inside the screen session, simply su to your admin login and do the administrative work you require, all while your client watches on in amazement. Of course, be aware that the client can also type commands while you are su’ed into your admin login, so don’t leave your terminal unattended.

Enjoy. :)

HowTo: Quickly transfer files from an Ubuntu box to another PC over a network without installing Samba, SSH or FTP.

Let’s say you have an Ubuntu PC and a second Windows PC or Mac. You need to do a quick transfer of a file or two from the Ubuntu box, but you really don’t want to go through the hassle of installing and configuring Samba or FTP just for the sake of transferring a couple of files.

Of course you could use a USB flash drive, but it takes twice as long to copy a file that way because you have to copy it to the flash drive and then copy it again from the flash drive to the destination PC. Besides that, what if you don’t have a flash drive big enough to transfer the files you want? Is there a quick and dirty way to transfer some files over a network without the need to install additional software to bridge the compatibility divide?

Indeed there is…

NOTE: This method is not suitable for transferring entire directories of files. While it is possible to transfer multiple files this way, the method is primarily intended for very small numbers of files, because you have to initiate the transfer of each file manually; you cannot multi-select files for transfer unless you archive them into a tarball first.

On the Ubuntu PC, open a terminal and type in the following at the $ prompt:

$ python -m SimpleHTTPServer

If this returns an error when you hit Enter, you are probably using an old version of Python, in which case use the following command instead:

$ python -c "import SimpleHTTPServer;SimpleHTTPServer.test()"

When you hit Enter, you should see a message similar to the following:

Serving HTTP on 0.0.0.0 port 8000 ...

What we have done is start a basic mini web server using Python on port 8000, which will now happily serve files from the directory you started the Python command in! Now open up a web browser on the other PC and, assuming your Ubuntu PC’s IP address is 10.0.0.27, surf to the following web address:

http://10.0.0.27:8000

Voila! A full directory listing of the Ubuntu PC is presented, which you can now navigate and download files from without needing to install any other software to effect a transfer. Just right-click and save like any normal download link on any ordinary website.

If you started the Python command from your Home directory, then the root of the site starts from your Home directory. If you change to another directory before launching the Python command, the web server will serve files from that directory instead. Standard security rules apply – whatever access your Ubuntu user has will be applied to the Python web server. It is not recommended that you run this command as root.

When you’re done, simply press CTRL+C to stop the Python web server on the Ubuntu PC.
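Incidentally, on systems where Python 3 is the only Python, the module was renamed, so the equivalent command is python3 -m http.server. The whole round trip can be sketched end-to-end in one self-contained snippet (port 8123 and the throwaway directory are arbitrary choices for this example):

```shell
# Create a throwaway directory with one file in it, serve it, fetch
# the file the way the other PC's browser would, then stop the server.
demo_dir=$(mktemp -d)
echo "transferred over HTTP" > "$demo_dir/hello.txt"
cd "$demo_dir"
python3 -m http.server 8123 >/dev/null 2>&1 &   # the mini web server
server_pid=$!
sleep 1                                         # give it a moment to start
fetched=$(python3 -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:8123/hello.txt').read().decode().strip())")
kill "$server_pid"
echo "$fetched"
```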

Happy file transfers! :)

HowTo: Migrate an Apt-Mirror-generated Ubuntu archive to another mirror source or merge a foreign Apt-Mirror archive into yours

So, you’ve gone and created your very own local Ubuntu mirror using Apt-Mirror, and you’ve come across a situation similar to:

  • You’ve decided to change where you update your Apt-Mirror archive from (eg: you’ve changed ISPs, or feel that another source is more reliable than your current one), or
  • You’re adding another large repository to your Apt-Mirror archive (such as the next version of Ubuntu) and don’t have the quota to download it, so you’re getting a friend to download it for you from their free server using Apt-Mirror (eg: iiNet and Internode customers can access their respective Ubuntu mirrors for free). You then need to merge it with your own Apt-Mirror archive and have it update from your preferred source afterwards.

So how do you do this? Read on.

Migrating your Apt-Mirror archive to update from a new source

This one is really easy. Let’s say you are updating your Ubuntu mirror from Internode, but now want to get your updates from iiNet. To make this happen you need to change the following files:

  • Your /etc/apt/mirror.list file needs to be updated to point to the new source, and
  • the Apt-Mirror record of downloaded files needs to be updated so that it doesn’t waste time trying to re-download the entire mirror, not realising that it already has 99% of the files. This is because Apt-Mirror tracks the files it has downloaded by the source URL and filename, not just the filenames themselves.

So let’s go through this.

  1. Open a terminal and load your /etc/apt/mirror.list file into your favourite text editor. In this case I will use the Nano text editor:

    $ sudo nano /etc/apt/mirror.list
  2. In your mirror.list file, the lines for updating the Ubuntu 32 and 64-bit versions plus source code from Internode will look similar to this:

    # Ubuntu 9.10 Karmic Koala 32-bit
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

    # Ubuntu 9.10 Karmic Koala 64-bit
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

    # Ubuntu 9.10 Karmic Koala Source
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse
  3. We need to change the Internode URL to the iiNet URL, so bring up Nano’s search and replace function by pressing CTRL+Backslash (“\”).
  4. Now type in the text to replace, in this case:

    http://mirror.internode.on.net/pub/ubuntu/ubuntu
  5. Press Enter and you’ll be prompted for the text to replace this with. In this case it’s:

    http://ftp.iinet.net.au/pub/ubuntu
  6. Press Enter and Nano will find the first occurrence of the Internode text string and highlight it for you. If the selection is correct, press “A” on the keyboard to automatically replace “all” occurrences.
  7. Once the update is done, manually go back and visually verify that all the entries were changed correctly.
  8. When you’re happy, save your changes by pressing CTRL+X, then “Y” and then Enter.
  9. Now we need to update the Apt-Mirror record of downloaded files. First, let’s take a backup of the index in case you stuff up. Type in:

    $ sudo cp /var/spool/apt-mirror/var/ALL /var/spool/apt-mirror/var/ALL_Backup

    NOTE: the filename “ALL” must be in uppercase
  10. Now let’s bring up the original file into the Nano text editor.

    $ sudo nano /var/spool/apt-mirror/var/ALL
  11. Depending on how large your index file is, there may be a brief delay while Nano opens it. Once it appears, do the same search and replace as you did in steps 3-6. Note: If the editor comes up blank, then you have not opened the index file – check your path spelling in Step 10 and try again.
  12. Save your changes by pressing CTRL+X, then “Y” and then Enter.
  13. Finally, we need to modify Apt-Mirror’s cache of downloaded files so that its directory structure matches that of the new source. In the case of iiNet, you’ll notice its URL has one less “ubuntu” in it compared to Internode’s URL, so we’ll need to move some directories to eliminate the extra “ubuntu” directory.

    At the terminal, move the dists and pool directories of the mirrored files one directory back using the commands:

    $ sudo mv /var/spool/apt-mirror/mirror/mirror.internode.on.net/pub/ubuntu/ubuntu/dists /var/spool/apt-mirror/mirror/mirror.internode.on.net/pub/ubuntu
    $ sudo mv /var/spool/apt-mirror/mirror/mirror.internode.on.net/pub/ubuntu/ubuntu/pool /var/spool/apt-mirror/mirror/mirror.internode.on.net/pub/ubuntu

  14. Now rename the mirror.internode.on.net directory to become the name of the iiNet server:

    $ sudo mv /var/spool/apt-mirror/mirror/mirror.internode.on.net /var/spool/apt-mirror/mirror/ftp.iinet.net.au
  15. The directory structure now matches iiNet’s server and your ALL file is up to date, so now we can test your changes by launching Apt-Mirror. Launch it manually with:

    $ apt-mirror
  16. Watch the output. First Apt-Mirror will download all the repository indexes from the new location and will compare the files presented in those indexes to your local index of downloaded files (the modified ALL file). It will skip all files already listed as being present and will only download new files not listed in your local mirror. You should find Apt-Mirror advises only a small subset of data to download, perhaps only a few megabytes or no more than a gigabyte or two since your last update under the old setup. If you see that Apt-Mirror wants to download some 30GB or more, then you have made an error in changing the URL in the ALL index file or you incorrectly renamed the mirror directories. Press CTRL+C to stop Apt-Mirror, and go check your configuration from Step 10.

    $ apt-mirror
    Downloading 1080 index files using 5 threads...
    Begin time: Wed Dec  9 15:59:23 2009
    [5]... [4]... [3]... [2]... [1]... [0]...
    End time: Wed Dec  9 16:00:45 2009

    Proceed indexes: [SSSSSSSSSSPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP]

    1.7 GiB will be downloaded into archive.
    Downloading 998 archive files using 5 threads...
    Begin time: Wed Dec  9 16:02:31 2009
    [5]... [4]... [3]... [2]... [1]... [0]...
    End time: Wed Dec  9 16:54:15 2009

    207.4 MiB in 256 files and 1 directories can be freed.
    Run /var/spool/apt-mirror/var/clean.sh for this purpose.
    $

  17. You’re done! Pat yourself on the back. :)
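Incidentally, the Nano search-and-replace in steps 3-6 and 10-11 can also be done non-interactively with sed. Here is a hedged sketch, demonstrated on a tiny stand-in file rather than your real mirror.list and ALL files; if you adapt the paths to the real files, keep the backup from Step 9 handy in case something goes wrong:

```shell
OLD="http://mirror.internode.on.net/pub/ubuntu/ubuntu"
NEW="http://ftp.iinet.net.au/pub/ubuntu"
demo=$(mktemp)                       # stand-in for the real ALL file
echo "$OLD/dists/karmic/Release" > "$demo"
sed -i "s|$OLD|$NEW|g" "$demo"       # the same replacement Nano performed
cat "$demo"
```

The single line in the stand-in file comes out rewritten as http://ftp.iinet.net.au/pub/ubuntu/dists/karmic/Release, exactly as the interactive replace would have done.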

Inserting a foreign Apt-Mirror archive into your own archive

This one is slightly more involved, but is not difficult. In the case of a full Ubuntu Mirror, let’s say you were adding an Ubuntu Karmic mirror archive taken from iiNet’s mirror servers into your own local Apt-Mirror archive that featured only Intrepid and Jaunty, both of which you are updating from Internode’s mirror servers. There are some obstacles we need to overcome such as:

  • Continuing to perform future updates for the Karmic repository from Internode rather than iiNet.
  • The foreign iiNet Karmic archive contains lots of files that you already have in your own archive – files that are common between all releases of Ubuntu. How do you filter those ones out and only copy the new files?
  • Finally, how do you update the Apt-Mirror index file with the potentially thousands of new entries from the foreign archive? How do you avoid duplicate lines potentially confusing Apt-Mirror?

Follow these steps:

  1. First ensure that you have the full copy of the foreign Apt-Mirror archive supplied on a suitable storage medium. Aside from the mirror directory itself (usually under /var/spool/apt-mirror/mirror), you must have a copy of its /var/spool/apt-mirror/var/ALL file. It does not matter if the foreign mirror is not completely up to date, as Apt-Mirror will catch up with what is missing when you run the next update.
  2. Let’s prepare your local Apt-Mirror installation for grabbing Ubuntu Karmic from our preferred source first. We need to load up the /etc/apt/mirror.list file into your favourite text editor and add the entries relevant to our new repository that we are mirroring. I will use the Nano text editor for this, but you can use any text editor you like:

    $ sudo nano /etc/apt/mirror.list
  3. Now we add the entries relevant to Ubuntu Karmic for Apt-Mirror to use. In this case, I am going to update Ubuntu Karmic from Internode and I will be grabbing both the 32-bit and 64-bit versions plus the source code (reflecting what is already included in the foreign archive on my storage medium, or Apt-Mirror will be doing a LOT of downloading the next time you run it), so I need to add the following entries:

    # Ubuntu 9.10 Karmic Koala 32-bit
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
    deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

    # Ubuntu 9.10 Karmic Koala 64-bit
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
    deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

    # Ubuntu 9.10 Karmic Koala Source
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
    deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

  4. Save your changes and exit the editor using CTRL+X, then “Y” and then Enter.
  5. Make a backup copy of the foreign mirror’s /var/spool/apt-mirror/var/ALL file, so you can revert to it if you make a mistake. Call the copy something like ALL_Backup.
  6. Now open the foreign mirror’s original /var/spool/apt-mirror/var/ALL file into your favourite text editor.
  7. Use your text editor’s search and replace function (in Nano, press CTRL + Backslash “\”) to replace the URL of each entry in the foreign mirror’s ALL file to the URL of the mirror you will be performing your future updates from. In the case of changing iiNet URLs to Internode URLs, you would replace any occurrence of the text string:

    http://ftp.iinet.net.au/pub/ubuntu
    …with…
    http://mirror.internode.on.net/pub/ubuntu/ubuntu
  8. Once updated, save your changes and close your text editor.
  9. Now we need to merge the modified foreign ALL file into the ALL file from your local Apt-Mirror setup. First up, rename the modified foreign ALL file so we don’t confuse it. For this tutorial, I will assume your foreign mirror is supplied on an external USB hard-drive called “myhdd” and is simply a copy of the foreign system’s /var directory in its entirety. The following will rename the file from ALL to ALL_modified in a terminal:

    $ mv /media/myhdd/var/spool/apt-mirror/var/ALL /media/myhdd/var/spool/apt-mirror/var/ALL_modified
  10. Now concatenate the original ALL file and the modified foreign mirror’s ALL_modified file into one new file called ALL_new in your local Apt-Mirror’s var directory. Concatenating alone would leave duplicate lines, so we also sort the combined data to bring the duplicates together and then strip them out, all in one hit:

    $ cat /var/spool/apt-mirror/var/ALL /media/myhdd/var/spool/apt-mirror/var/ALL_modified | sort | uniq | sudo tee /var/spool/apt-mirror/var/ALL_new > /dev/null

    The cat part of the command joins the content of /var/spool/apt-mirror/var/ALL and /media/myhdd/var/spool/apt-mirror/var/ALL_modified into one big stream, but before it is written to a physical file, the data is “piped” using the pipe symbol “|” into the sort command, which sorts it into alphabetical order and so groups duplicate lines together. The sorted data is then piped again into the uniq command, which automagically removes all duplicate lines, leaving one unique copy of each line. Finally, we pipe the output from uniq into sudo tee, which writes it to our physical destination file /var/spool/apt-mirror/var/ALL_new (the trailing “> /dev/null” just stops tee echoing every line to your screen). We use sudo tee for the final write rather than a plain “>” redirect because a redirect would be performed by your unprivileged shell, and only the root and apt-mirror users can actually write to the /var/spool/apt-mirror/var directory.

    Alternatively, we can replace the “| sort | uniq” part with “| sort -u”, which does the exact same thing, since the sort command has its own “unique” functionality as well. I’ll leave it up to you which way you’d like to go.
  11. Check your new /var/spool/apt-mirror/var/ALL_new file and you will find it now contains all your local and foreign mirror’s entries in alphabetical order and with no duplicate lines. If you’d like to see how this worked, re-work Step 10 without the sort and uniq commands or the pipe characters and see how it affects the output file. Try adding just the sort or just the uniq command too.
  12. Now rename your local mirror’s original ALL file because we’re about to replace it with the new one:

    $ sudo mv /var/spool/apt-mirror/var/ALL /var/spool/apt-mirror/var/ALL_old
  13. Now rename the new ALL_new file to take the place of the old one:

    $ sudo mv /var/spool/apt-mirror/var/ALL_new /var/spool/apt-mirror/var/ALL
  14. Right, that’s the index taken care of. We’re nearly done! Now we only have to merge the foreign mirror’s actual files into your local mirror. Once again, for the purposes of this tutorial I’m going to assume the foreign mirror is stored on an external USB hard-drive called “myhdd” as a copy of the foreign system’s entire /var directory, so the path to the foreign mirror’s files will be /media/myhdd/var/spool/apt-mirror/mirror – got that? Let’s change to that directory now in a terminal to save us having to type so much:

    $ cd /media/myhdd/var/spool/apt-mirror/mirror
  15. Now, the observant of you may have noticed that Apt-Mirror stores its mirrored files using a directory structure that follows the path of the URL the data is obtained from, so in the case of a mirror from iiNet, there is a directory here called ftp.iinet.net.au. You can see it by using the ls command to list the directory contents:

    $ ls -l
    -rw-r--r--  1 apt-mirror apt-mirror   198599 2009-12-09 10:19 access.log
    -rw-r--r--  1 apt-mirror apt-mirror   544373 2009-12-01 06:45 access.log.1
    -rw-r--r--  1 apt-mirror apt-mirror  1863467 2009-11-03 06:44 access.log.2
    -rw-r--r--  1 apt-mirror apt-mirror  1865334 2009-10-01 06:28 access.log.3
    -rw-r--r--  1 apt-mirror apt-mirror 18152891 2009-09-01 06:42 access.log.4
    -rw-r--r--  1 apt-mirror apt-mirror     6135 2009-12-09 06:46 error.log
    -rw-r--r--  1 apt-mirror apt-mirror    33898 2009-12-01 06:45 error.log.1
    -rw-r--r--  1 apt-mirror apt-mirror   124512 2009-11-03 06:44 error.log.2
    -rw-r--r--  1 apt-mirror apt-mirror   554851 2009-10-01 06:28 error.log.3
    -rw-r--r--  1 apt-mirror apt-mirror   831227 2009-09-01 06:42 error.log.4
    drwxr-xr-x  3 apt-mirror apt-mirror     4096 2008-09-11 02:00 ftp.iinet.net.au
    $
  16. We need to modify the foreign directory names and structure to exactly match that of the URL path your local mirror updates from. Starting with the obvious, we need to rename the ftp.iinet.net.au directory to be mirror.internode.on.net with:

    $ sudo mv ftp.iinet.net.au mirror.internode.on.net
  17. Next we need to create an extra subdirectory called “ubuntu” because Internode’s URL path is mirror.internode.on.net/pub/ubuntu/ubuntu/ and iiNet’s path is ftp.iinet.net.au/pub/ubuntu/ only:

    $ sudo mkdir mirror.internode.on.net/pub/ubuntu/ubuntu
  18. Now we need to move the “dists” and “pool” directories under the first “ubuntu” directory to be under the second “ubuntu” directory:

    $ sudo mv mirror.internode.on.net/pub/ubuntu/dists mirror.internode.on.net/pub/ubuntu/ubuntu
    $ sudo mv mirror.internode.on.net/pub/ubuntu/pool mirror.internode.on.net/pub/ubuntu/ubuntu

  19. With the directory structure and directory names all amended, we are now ready to merge the foreign mirror’s files into your local mirror. We will do this using rsync. This tool is traditionally used to make backups, and is indeed used to keep the official worldwide Ubuntu mirrors 1:1 with the master archive, but in our case we are using it to add the files missing from the local mirror using the files from the foreign mirror, whilst skipping the files that are already present. This means that instead of copying around 60GB worth of data from the foreign mirror, we’ll only copy a fraction of that, saving us time and drive space:

    $ sudo rsync -avz --progress /media/myhdd/var/spool/apt-mirror/mirror/mirror.internode.on.net /var/spool/apt-mirror/mirror/
  20. The “--progress” parameter allows you to see which file is being copied over. You may see a large number of directory names whizz past, because those directories don’t have any files that differ between your current Ubuntu Intrepid and Jaunty mirror and the Karmic mirror you are merging. Unfortunately rsync does not provide an overall progress figure; it only shows progress for the file it is currently working on. This process can take several hours to complete, depending on how much data needs to be copied and the speed of the storage medium containing the foreign mirror (which, if on a USB HDD, can take a looooong time).
  21. Once rsync has finished, it will give a summary of what was copied. If you were to run the rsync command from Step 19 again, you would see it finish rather quickly because there is no longer any data that is changed or missing.
  22. Now we just quickly ensure that all the merged foreign files belong to the Apt-Mirror user with:

    $ sudo chown apt-mirror:apt-mirror -R /var/spool/apt-mirror
  23. And now we are ready to try a manual update to see if it all worked. If you now execute the Apt-Mirror application manually, you should now see that it reads in the new repository entries you added into your /etc/apt/mirror.list file in Step 3 and will compare the files presented in those indexes to your local index of downloaded files (the newly modified ALL file). It will skip all files already present and will only download new files not present in your local mirror. You should find Apt-Mirror advises only a small subset of data to download, perhaps only a few megabytes or a gigabyte or two since your last update under the old setup and depending on how old the foreign archive was. If you see that Apt-Mirror wants to download about 30GB or more, then you have made an error in changing the URL in the ALL index file or the renaming of mirror directories. Press CTRL+C to stop Apt-Mirror, and go check your configuration from Step 5.

    $ apt-mirror
    Downloading 1080 index files using 5 threads...
    Begin time: Wed Dec  9 15:59:23 2009
    [5]... [4]... [3]... [2]... [1]... [0]...
    End time: Wed Dec  9 16:00:45 2009

    Proceed indexes: [SSSSSSSSSSPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPPP]

    1.7 GiB will be downloaded into archive.
    Downloading 998 archive files using 5 threads...
    Begin time: Wed Dec  9 16:02:31 2009
    [5]... [4]... [3]... [2]... [1]... [0]...
    End time: Wed Dec  9 16:54:15 2009

    207.4 MiB in 256 files and 1 directories can be freed.
    Run /var/spool/apt-mirror/var/clean.sh for this purpose.
    $

  24. If all is good, then pat yourself on the back. You’ve successfully merged the foreign repository and it will now update from your preferred ISP’s mirror from now on. Smilie: :)

HowTo: Fix a missing eth0 adapter after moving Ubuntu Server from one box to another.

Scenario: You have a box running Ubuntu Server. Something happens to the box and you decide to move the hard-drive to another physical machine to get the server back up and running. The hardware is identical on the other machine, so there shouldn’t be any issues at all, right?

The machine starts up fine, but when you try to hit the network, you can’t. Closer inspection using the ifconfig command reveals that there is no “eth0” adapter configured. Why?

Here’s how to fix it.

Ubuntu Server keeps tabs on the MAC address of the configured ethernet adapter. Unlike Ubuntu Desktop, you can’t simply change network cards willy nilly – while Ubuntu Server does detect and automatically setup new cards, it won’t automatically replace any adapter already configured as eth0 with another one, so you need to tell Ubuntu Server that you no longer need the old adapter.

This problem can also appear if you have a virtual machine such as one from Virtualbox, and you move or copy it from one host to another without ensuring that the MAC address configured for that VM’s ethernet adapter is 100% identical to the previous one.
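When diagnosing either case, it helps to see what MAC addresses the machine actually has right now, so you can compare them against what udev recorded. A quick way (assuming a Linux system with sysfs mounted, which is standard) is to read each interface's address out of /sys:

```shell
#!/bin/sh
# List every network interface the kernel currently knows about together
# with its MAC address, by reading sysfs directly. Compare these against
# the addresses recorded in /etc/udev/rules.d/70-persistent-net.rules.
macs=$(for iface in /sys/class/net/*; do
    printf '%s %s\n' "${iface##*/}" "$(cat "$iface/address")"
done)
echo "$macs"
```

If none of the listed addresses match the one pinned to “eth0” in the udev rules file, you have found your culprit.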

These instructions were done with Ubuntu Server 9.04 Jaunty Jackalope in mind, but should apply to just about any release.

  1. Since you can’t SSH in, you will need to login directly on the Ubuntu Server console as an appropriate user with sudo rights.
  2. Once logged in, type in the following and hit Enter:

    $ sudo nano /etc/udev/rules.d/70-persistent-net.rules
  3. You are now presented with the Nano text editor and some info that looks similar to the following:

    # This file was automatically generated by the /lib/udev/write_net_rules
    # program, run by the persistent-net-generator.rules rules file.
    #
    # You can modify it, as long as you keep each rule on a single
    # line, and change only the value of the NAME= key.
    # PCI device 0x8086:0x1004 (e1000)
    SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="0a:03:27:c2:b4:eb", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

  4. Delete the last two lines, or simply comment out the SUBSYSTEM line at the end by putting a “#” in front of it. This is the rule defining which MAC address should be explicitly assigned to “eth0”. Since you no longer have an ethernet card with the specified MAC address in this machine (it’s in the old PC, remember), Ubuntu Server effectively ignores your new ethernet adapter because its MAC address does not match the defined rule for “eth0”.
  5. Once you’ve made your changes, press CTRL + X and then Y and then Enter to save your changes.
  6. Now reboot your box with:

    $ sudo reboot
  7. Upon reboot, Ubuntu Server will detect the “new” ethernet adapter in your PC and will automatically write a new rule into the /etc/udev/rules.d/70-persistent-net.rules file, thus enabling networking over eth0 for your server.
  8. To verify that the new adapter is working, type in:

    $ ifconfig

    …and you should see eth0 now listed with your defined IP address.
  9. Test remote connectivity to the server and if all is well, then pat yourself on the back. You’re done.
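If you'd rather not open an editor at all, the edit in Step 4 can be done with a one-line sed command. The sketch below demonstrates it against a throwaway copy of the rules file rather than the real /etc/udev/rules.d/70-persistent-net.rules:

```shell
#!/bin/sh
# Non-interactive version of Step 4: comment out the stale eth0 rule with
# sed instead of editing in nano. A temp file stands in for the real
# /etc/udev/rules.d/70-persistent-net.rules.
set -e
rules=$(mktemp)
cat > "$rules" <<'EOF'
# PCI device 0x8086:0x1004 (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="0a:03:27:c2:b4:eb", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
EOF

# Put a '#' in front of the line that pins eth0 to the old MAC address.
sed -i '/NAME="eth0"/s/^/#/' "$rules"

commented=$(grep -c '^#SUBSYSTEM' "$rules")
echo "commented rules: $commented"
rm -f "$rules"
```

On the real file you would run the same sed expression with sudo against /etc/udev/rules.d/70-persistent-net.rules, then reboot as in Step 6.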

HowTo: Restore the Windows Master Boot Record (without using a Windows CD) using Ubuntu Karmic.

You know how it is – you take a client’s Windows based machine, do a dual-boot installation of Ubuntu (which replaces the Windows Master Boot Record, or MBR, with GRUB and sets up an option to boot Ubuntu or Windows) so the client can evaluate Ubuntu, but then later on for whatever reason, Ubuntu is no longer wanted. It’s removed and you need to restore the system’s ability to natively boot Windows directly without a GRUB menu.

You’re probably thinking “why the hell would anyone want to do that?!”… well, the fact of the matter is you sometimes come across a client who is just too set in their ways and refuses to use anything but Windows, so yes – sometimes you need to restore the Windows MBR. But how do you do that when you don’t have a Windows CD handy?

Well, here’s how to do it using nothing but an Ubuntu 9.10 (or later) LiveCD.

It’s a little-known fact that the Windows bootloader is nothing special. In fact it contains nothing proprietary to Windows at all. All the Windows bootloader does is look for the partition marked as “bootable” or “active” and transfer control of the boot process to it.

And would you know it? The Ubuntu LiveCD has a binary image of a generic open source bootloader that does just that!

  1. Boot your soon-to-be-Windows-only machine using the Ubuntu 9.10 (or later) LiveCD. Doesn’t matter if it’s the 32-bit or 64-bit version.
  2. Once booted on the LiveCD, open a terminal by going to the Applications menu and then choose Accessories and then Terminal.
  3. Find out what the designation of the Windows drive is (generally it will be the first drive, eg: /dev/sda or /dev/hda). If you are not sure, issue the command:

    $ sudo fdisk -l

    …and review the output, looking for your NTFS Windows partition. Make note of the drive that partition resides on (not the partition itself), eg: “/dev/sda”, not “/dev/sda1”.
  4. Now type in the following (remembering to substitute the correct drive device name for your setup in place of “/dev/sda”):

    $ sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sda

    …which will write the image of a standard MBR contained in the /usr/lib/syslinux directory of the LiveCD environment to the first hard-drive, overwriting GRUB.

    WARNING: Do NOT use a partition designation, eg: “sda1” or “sda2”, etc. Doing so will overwrite the start of that partition, effectively destroying data. The MBR exists at the start of the drive only, so specify “sda” with no number on the end.
  5. Shutdown and reboot. Windows should now start “natively” without GRUB appearing at all.
  6. Normally I’d say “pat yourself on the back” here, but it’s Windows… ;-)
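If you want to convince yourself the write worked before rebooting, you can compare the start of the drive against mbr.bin with cmp. The sketch below is a toy version of the dd in Step 4: an image file stands in for /dev/sda, and a random 440-byte file stands in for /usr/lib/syslinux/mbr.bin (which really is 440 bytes, i.e. exactly the boot-code area of the MBR, leaving the partition table in bytes 440-511 untouched):

```shell
#!/bin/sh
# Toy version of Step 4's dd, using an image file as a stand-in for
# /dev/sda so nothing real gets overwritten. The fake "mbr.bin" stands in
# for /usr/lib/syslinux/mbr.bin.
set -e
work=$(mktemp -d)
dd if=/dev/urandom of="$work/mbr.bin"  bs=440 count=1 2>/dev/null
dd if=/dev/zero    of="$work/disk.img" bs=1024 count=10240 2>/dev/null

# Write the boot code into the start of the "disk"; conv=notrunc keeps the
# rest of the image (on a real drive: partition table and data) intact.
dd if="$work/mbr.bin" of="$work/disk.img" conv=notrunc 2>/dev/null

# Verify by comparing the first 440 bytes.
cmp -n 440 "$work/mbr.bin" "$work/disk.img" && msg="boot code written"
echo "$msg"
rm -rf "$work"
```

On the real machine, `sudo cmp -n 440 /usr/lib/syslinux/mbr.bin /dev/sda` should likewise report no differences after the dd.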

HowTo: Configure Ubuntu to be able to use and respond to NetBIOS hostname queries like Windows does

Users in the Windows world are very used to referencing PCs by their NetBIOS names instead of their IP addresses. If your PC has a dynamic (DHCP-assigned) IP address of 192.168.0.12 and its hostname (computer name) is “gordon”, Windows users can happily jump into a command line or an Explorer window and ping the name “gordon”, which will magically resolve to 192.168.0.12.

If that host has no entry in your local PC’s Hosts file and no DNS entry associating its name with an IP address, Ubuntu can only communicate with it via its IP address, which means you have to remember what that IP address is with the feeble grey-matter in your head. Likewise, Ubuntu will not respond to a Windows PC pinging its NetBIOS name, because Ubuntu does not use NetBIOS at all by default and so ignores such requests.

So how do we get Ubuntu to resolve NetBIOS names like Windows? And how can we allow Windows to ping Ubuntu like another Windows PC? Read on…

Let’s illustrate the problem first. You’ll need a Windows PC on your network to test this. For this article, the Ubuntu PC will be called “gordon” and the Windows PC will be called “alyx”.

On either PC, if you open a terminal or Command Line window and ping the opposing machine, eg:

$ ping alyx

or

C:\> ping gordon

You get an error stating that the host cannot be found. In the case of Windows, though, if you were to ping another Windows PC instead of an Ubuntu PC, the name would resolve with no problem.

Let’s sort this out, shall we?

Allowing Ubuntu to ping Windows NetBIOS names

Ubuntu is setup for Linux use, not Windows use, so we need to install a package that will allow Ubuntu to more readily mix in with Windows networks and use NetBIOS. This package is called “winbind”.

  1. Open a terminal and type in the following at the terminal prompt:

    $ sudo apt-get install winbind
  2. Once installed, we need to tell Ubuntu to use WINS (as provided by winbind) to resolve host names. Type in:

    $ sudo gedit /etc/nsswitch.conf

    …which will open the file into the Gnome Editor.
  3. Scroll down to the line that starts with “hosts:”. In Ubuntu Jaunty, it looks similar to this:

    hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4
  4. Add the word “wins” to the end of this line so that it now looks like:

    hosts:          files mdns4_minimal [NOTFOUND=return] dns mdns4 wins
  5. Save and exit the editor.
  6. Now let’s ping the name of our Windows box again.

    $ ping alyx

    …and it now resolves!
  7. Pat yourself on the back.
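To confirm the new “wins” entry is actually being consulted, use getent rather than nslookup or dig: getent resolves names through the nsswitch.conf “hosts” line (the same path ping uses), whereas the DNS tools talk to the DNS server directly and bypass WINS entirely. “alyx” below is just this article’s example Windows box; substitute your own machine’s name:

```shell
#!/bin/sh
# `getent hosts` resolves names via the nsswitch.conf "hosts" line, so it
# exercises the same resolver path that ping uses.
sanity=$(getent hosts localhost)   # sanity check: the resolver itself works
echo "$sanity"
# This only succeeds once winbind is installed and "wins" has been added:
getent hosts alyx || echo "alyx not resolvable (winbind not configured here)"
```

If `getent hosts alyx` returns an address but `ping alyx` still fails, re-check that you saved /etc/nsswitch.conf correctly.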

Allowing Windows to ping Ubuntu NetBIOS names

This is just one half of the equation. We now need to allow Windows to ping Ubuntu PCs using their NetBIOS names, which requires Ubuntu to recognise and respond to those requests. We need to set up a server daemon to do this, and in Ubuntu that particular daemon is called Samba.

  1. Installing Samba is simplicity itself. Open a terminal and type in:

    $ sudo apt-get install samba
  2. Once that has finished, your Ubuntu PC will automagically respond to all NetBIOS queries for its hostname straight away, and that’s not just from Windows machines, but other Ubuntu machines (configured with the “winbind” package) as well.
  3. Pat yourself on the back again. Smilie: :)

HowTo: Deal with BD+ copy protection when ripping Blu-ray titles using Ubuntu

A fair while back now, I wrote an article detailing how to decode Blu-ray titles using Ubuntu and an LG GGC-H20L Blu-ray optical drive.

This article detailed how to decrypt just about every movie under the sun except for a newer type of protection called “BD+” which I never got around to supplementing my original article with.

What is “BD+” protection? In short, it’s the deliberate corruption of random parts of the video track of the movie (OK – that is a highly simplified definition, as BD+ can do a lot more than that, but the end result is the same: to prevent unauthorised playback, which includes ripping). The idea of BD+ is that when you rip the title you can still watch the movie, but with some or all of the screen corrupted at various stages, which well and truly ruins the movie-watching experience – especially since you paid good money for it and should not be forced to buy a dedicated consumer Blu-ray player when you’ve got a perfectly good PC that can do the same task.

But hang on, if the movie is deliberately corrupt, then how come it plays fine in a stand-alone consumer Blu-ray player or PlayStation3 console?

Well, let me tell you about that and how to get around it yourself.

I have to give credit to the movie studios for this one. It’s a simple, and annoying, method of protection. But as with anything, it was eventually reverse-engineered and broken, and neat little tools were developed to allow us consumer types to backup, or watch in our preferred way, our movies bought with our hard-earned cash.

So what’s this BD+ thing all about? Basically, after the movie is mastered and just before it is pressed to discs, an extra step is taken whereby random parts of the movie data stream are deliberately exchanged with random data or removed altogether, thus corrupting the video stream. A record is kept, however, of what has been changed – a table listing where, when and what data needs to be put back into the movie stream in order to watch the movie in its original uncorrupted form. This table is called a “conversion table”, and it is processed by your Blu-ray player while you watch the movie, with the correct data substituted back into the video stream before the image hits your screen, resulting in a proper uncorrupted picture.
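The conversion-table idea can be illustrated with a deliberately simplified toy: corrupt a couple of known offsets in a "video stream", keep a table of (offset, original byte) pairs, then walk the table to repair the stream. This is purely an illustration of the concept – the real BD+ table format and VM are vastly more involved:

```shell
#!/bin/sh
# Toy illustration of BD+: corrupt known offsets of a "stream", record the
# originals in a conversion table, then repair the stream from the table.
set -e
stream=$(mktemp)
printf 'THE QUICK BROWN FOX JUMPS OVER THE LAZY DOG' > "$stream"
cp "$stream" "$stream.orig"

# "Mastering" step: clobber bytes at offsets 4 and 16, keeping a table of
# "offset original-byte" pairs (offset 4 holds 'Q', offset 16 holds 'F').
table="4 Q
16 F"
printf 'X' | dd of="$stream" bs=1 seek=4  conv=notrunc 2>/dev/null
printf 'X' | dd of="$stream" bs=1 seek=16 conv=notrunc 2>/dev/null

# "Playback" step: walk the table and put the original bytes back.
echo "$table" | while read -r off byte; do
    printf '%s' "$byte" | dd of="$stream" bs=1 seek="$off" conv=notrunc 2>/dev/null
done

cmp -s "$stream" "$stream.orig" && repaired=yes || repaired=no
echo "repaired: $repaired"
rm -f "$stream" "$stream.orig"
```

A licensed player effectively performs the "playback" half of this on the fly; the tools described below do the same thing during ripping so the dumped files come out clean.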

An example of a corrupted video stream showing the BD+ Protection in full effect.
An example of the repaired video stream using the Conversion Table.

So how do we get around BD+? Well, all we have to do is follow this conversion table ourselves and correct the corrupted data as the title is decrypted.

As I showed in my previous article, the DumpHD application is brilliant, and it has been extended by the author KenD00 to allow the “plugging in” of another program called the “BD VM Debugger”. What this program does is simple – it executes the virtual machine that processes the conversion table (just as happens when the disc is played in a normal BD player) in concert with the usual decryption process, patching up the stream as it goes. The end result is a clean decryption with no corrupt video stream.

This tutorial was written using Ubuntu Jaunty but should work with Intrepid and should definitely work with Karmic and beyond as well.

DISCLAIMER: This article describes decrypting BD titles using an Intel or AMD based PC with Ubuntu Linux. At this time of writing you cannot use Ubuntu installed on a PlayStation3 console to deal with BD+ copy protection because the BD VM Debugger and AACS Keys applications are not available for the PPC processor used by the PS3.

So let’s set this up. But first – since my last article, DumpHD has been updated to 0.61, so let’s upgrade it. Go and download yourself a copy.

  1. Extract the archive out by either double-clicking on it or via the terminal. You should get a “dumphd-0.61″ directory.
  2. If you are upgrading from an older version of DumpHD, copy over the “KEYDB.cfg” file, overwriting the archive copy. No point losing your collection of keys accumulated thus far. Smilie: :)
  3. You’re done for this bit.

The AACSKeys program (which extracts the decryption key for the Blu-ray title and can automatically update your “KEYDB.cfg” file for you when you insert a new Blu-ray title) has also been updated to 0.4.0c since my last article, so go download yourself a copy of that as well.

  1. Extract the archive out by either double-clicking on it or via a terminal. You should get an “aacskeys-0.4.0c” directory.
  2. Copy the “ProcessingDeviceKeysSimple.txt” and “HostKeyCertificate.txt” into the “dumphd-0.61″ directory.
  3. Copy over the “libaacskeys.so” file located in the “/lib/linux32/” OR “/lib/linux64/” directories (depending on which architecture you’re using) to the “dumphd-0.61″ directory. Do NOT copy or create the “/lib/linux32″ or “/lib/linux64″ directories themselves. Copy the library file only.
  4. You’re done for this bit.
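The copy steps above (Steps 2 and 3) can be sketched as a single script. The version below runs against stand-in directories created with mktemp so it can be tested anywhere; swap the scaffolding for the directories you actually extracted, and note it assumes the 64-bit library as the example:

```shell
#!/bin/sh
# Sketch of Steps 2-3: the two key files plus the bare library (not its
# lib/ directories) go into the DumpHD directory. Stand-in directories are
# created here so the script is self-contained.
set -e
top=$(mktemp -d)
cd "$top"
mkdir -p dumphd-0.61 aacskeys-0.4.0c/lib/linux64
touch aacskeys-0.4.0c/ProcessingDeviceKeysSimple.txt \
      aacskeys-0.4.0c/HostKeyCertificate.txt \
      aacskeys-0.4.0c/lib/linux64/libaacskeys.so

# Copy the library file itself, never the lib/linux32 or lib/linux64
# directories, into dumphd-0.61.
cp aacskeys-0.4.0c/ProcessingDeviceKeysSimple.txt \
   aacskeys-0.4.0c/HostKeyCertificate.txt \
   aacskeys-0.4.0c/lib/linux64/libaacskeys.so \
   dumphd-0.61/

ls dumphd-0.61
```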

Right, let’s get the BD VM Debugger installed. As of this writing, the current version is 0.1.5. Go and download yourself a copy.

  1. This archive is provided as a 7zip file. Ubuntu does not support this archive format out of the box, so install support for it first with:

    $ sudo apt-get install p7zip-full
  2. Once installed, extract the archive either by double-clicking on it like any normal archive, or via the terminal as follows:

    $ 7z e bdvmdbg-0.1.5.7z
  3. Copy everything into the “dumphd-0.61” directory. You can skip the “changelog.txt”, “readme.txt” and “debugger.sh” files since you don’t really need them, but there’s no harm in copying them anyway.
  4. That’s it!

You should now have a total of at least 17 files and two directories inside the “dumphd-0.61” directory (if you are setting up these tools for the first time, you will only have 15 files instead, as two of them – conv_tab.bin & hash_db.bin – are generated by DumpHD in conjunction with the BD VM Debugger).

The prepared DumpHD folder with the tools we need.

Now let’s try decrypting a BD+ protected Blu-ray title. In this example, I will use the Australian release of “Day Watch”, the sequel to the Russian epic “Night Watch”.

The BD+ Protected “Day Watch” Blu-ray title I am ripping.

NOTE: Your ability to decrypt a given Blu-ray title, BD+ protected or not, will ultimately depend on the MKB version of the disc. As of this writing, DumpHD can only decrypt up to MKB version 10. Newer discs using version 11 or later can only be decrypted once suitable decryption keys are uncovered and added to the “ProcessingDeviceKeysSimple.txt” file in the “dumphd-0.61″ directory.

Obtaining the decryption key of the Blu-ray title also requires the player authentication mechanism of your Blu-ray drive to be bypassed, or the use of a drive that deliberately does not have this feature, such as some imported drives from China. In the case of my LG GGC-H20L drive, I used a modified firmware so that the drive always gave up the disc’s decryption key regardless of which player certificate I used – blacklisted or not.

  1. Start the DumpHD program by double-clicking on the “dumphd.sh” icon. You will be asked if you want to run the script file. Click on the “Run” button.
Starting the DumpHD application.
  2. When the DumpHD GUI appears, make a note of the messages in the bottom pane to ensure that AACSKeys and the BD VM Debugger were found and loaded OK. You should see the following information:

    DumpHD 0.61 by KenD00
    Opening Key Data File… OK
    Initializing AACS… OK
    Loading aacskeys library… OK
    aacskeys library 0.4.0 by arnezami, KenD00
    Loading BDVM… OK
    BDVM 0.1.5

The DumpHD Interface
  3. Insert the Blu-ray title into your Blu-ray drive.
  4. Next to the “Source” section at the top-right of the DumpHD window is a “Browse” button. Click on it.
  5. Navigate to the path of your Blu-ray drive (generally “/media/cdrom” will work fine) and hit the OK button.
Choosing a source to rip from.
Setting up the ripping source
  6. DumpHD will read the disc and pass it through AACSKeys to identify the title’s decryption key. If it is successful, it will output some data about the disc in the lower pane. In the case of my Day Watch title, it shows the following:

    Initializing source…
    Disc type found: Blu-Ray BDMV
    Collecting input files…
    Source initialized
    Identifying disc… OK
    DiscID : 73886D08811073F45AD8C75012689097E17EBD3C
    Searching disc in key database…
    Disc found in key database

Identifying the disc and getting the decryption keys to rip with
  7. This is good. We can decrypt this. If the title is not one you have ripped before, you have the option to click on the “Title” button at the top-left of the DumpHD window to give the movie a name in your Key Database.
  8. In the “Destination” section on the right, click on the “Browse” button.
  9. Choose a place to dump the decrypted disc to. Note that most titles will dump at least 20GB worth of data and in some cases 50GB. Ensure that you have enough hard-drive space in the location you choose to dump to.
  10. We’re ready to rock and/or roll. Click on the “Dump” button and decryption will begin, automatically executing the BD VM and applying the Conversion Table to correct the deliberate corruption in the video stream. Here’s a small extract of what you will see in the lower pane of the DumpHD window:

    AACS data processed
    Initializing the BDVM… OK
    Executing the BDVM… OK
    Parsing the Conversion Table… OK
    Processing: BDMV/BACKUP/CLIPINF/00000.clpi
    Processing: BDMV/BACKUP/CLIPINF/00001.clpi
    Processing: BDMV/BACKUP/CLIPINF/00002.clpi
    etc…

Beginning the ripping process
  11. And after a while it will finish with something like:

    Processing: BDMV/STREAM/00211.m2ts
    Searching CPS Unit Key… #1
    0x0000000000 Decryption enabled
    Processing: BDMV/STREAM/00212.m2ts
    Searching CPS Unit Key… #1
    0x0000000000 Decryption enabled
    Processing: BDMV/index.bdmv
    Disc set processed

Finished decrypting the Blu-ray title.
  12. That’s it! You’ve successfully decrypted the disc and fixed up the corrupted video track. Identify and play back the actual movie M2TS file using a player like MPlayer or VLC, and you should find that it contains no corruption whatsoever. In the case of Day Watch, the movie file was BDMV/STREAM/00012.m2ts, identifiable simply because it was the largest file in the directory. Using MPlayer, you can play this file with:

    $ mplayer -fs BDMV/STREAM/00012.m2ts

    Thankfully this title does not have the movie broken up into multiple files (I’ll be writing another article soon showing you how to deal with multi-part movies).