The release of Ubuntu’s brand new look in Ubuntu 10.04 Lucid Lynx Alpha 3 brought mixed reactions, but probably none more so than the decision to move the window minimise, maximise and close gadgets from their traditional placement in the upper-right corner of the window to the upper-left, à la Apple’s Mac OS X.
Many people, myself included, do not like this. Fortunately, moving the gadgets back to the right-hand side is very easy to do. Read on.
Simply open up a terminal and type in the following at the $ prompt:
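$ gconftool-2 --set /apps/metacity/general/button_layout --type string ":maximize,minimize,close"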
Gnome is highly customisable. One of its configuration options tells Metacity where and in what order to render elements on a window. In this case, the string “:maximize,minimize,close” means to render the “maximise”, “minimise” and “close” gadgets in that order, and the colon at the start means to render them on the right side of the window – anything listed before the colon is drawn on the left side, and anything after it on the right. If you move the colon to the far right of the string, your window gadgets will appear on the left of the window.
Don’t like how the Ubuntu team have also changed the maximise and minimise button order around? Be a rebel! Change it back by replacing the configuration string above with “:minimize,maximize,close”.
I regularly deal with external hard-drives, be it for data backup or if I’m rescuing a client’s hard-drive from uncertain death.
Since the idea of opening my PC on a regular basis to connect a drive is a bit of a turn off, I used to use an external USB drive enclosure. This works fine, but it’s a bit slow (well, at least until USB 3.0 makes its debut). The eSATA standard allows you to connect external drives at full SATA speed, but it’s not cost-effective to buy an enclosure for every external drive you have.
Enter the Docking Bay. This is a simple weighted base that allows you to connect a hard-drive in a similar way to how you used to plug game cartridges into a classic game console like the Atari 2600. You can then eject the hard-drive and plug another one in, all without restarting the PC.
This is a review of one such Docking Bay and how it works with Ubuntu, including the wonders of hot-swapping.
I came across this generic eSATA Docking Bay whilst browsing my local PC store. eSATA Docking Bays have been around for a while now, but I never got around to getting one, so I figured I may as well try this one and see how it went under Ubuntu.
The unit I bought was branded “A-Power”, but I’ve seen several of these docks sold under various brand names, so this one is as generic as they come. It comes in one of three variants:
eSATA and USB Docking Bay
eSATA and USB Docking Bay with in-built USB card-reader
USB-only Docking Bay with in-built USB card-reader
In my case, I got the first variant as I already have a separate card-reader.
Hooking Up
The Docking Bay is very easy to hook up. The package comes with the following components:
The Docking Bay unit
Power Supply
eSATA cable
USB cable
After connecting power, connect the Docking Bay to a spare eSATA port on the back of your PC using the eSATA cable. You then insert the hard-drive into the slot on the top of the unit – it caters for both 3.5″ desktop hard-drives and 2.5″ notebook hard-drives. Once inserted, power on the drive using the power button at the back of the unit. The power light on the top of the Docking Bay will light up and you can now switch on your PC.
Configuration
eSATA Docking Bays don’t actually need any configuration as such. If you wish to make use of SATA’s ability to hot-swap, you will need to enable the Advanced Host Controller Interface (AHCI) in your PC’s BIOS. Not every motherboard has AHCI, but if your machine is reasonably recent, you should have AHCI capabilities. If you do not enable AHCI, you can still use your Docking Bay; however, you will not be able to hot-swap a new drive without shutting down your PC first.
Using the Docking Bay
Drives inserted into the Docking bay appear like any ordinary permanently installed hard-drive inside your PC. You can format them, partition them, read and write data to them and see their SMART status like any other drive.
Doing some unscientific benchmarks using the dd utility with a 7200rpm Seagate 1TB HDD, I was able to write straight zeros to the drive at about 116MB/s and read at about 120MB/s.
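For the record, the tests were something along these lines, assuming the docked drive shows up as /dev/sdb (check with sudo fdisk -l first):

$ sudo dd if=/dev/zero of=/dev/sdb bs=1M count=4096   # write test – wipes the drive!
$ sudo dd if=/dev/sdb of=/dev/null bs=1M count=4096   # read test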
Real-world file copying transferred data at about 86MB/s which is consistent with normal single-drive copy speeds.
I also did a fresh installation of Ubuntu Karmic 9.10 on the hard-drive and booted my system from the docking bay, then repeated the boot test with the drive attached directly to the internal SATA connection as normal. Ubuntu booted in precisely the same amount of time, as one would expect. I was also able to dual-boot Ubuntu with Windows 7 without any issue.
Hot-swapping works well too. While Ubuntu is running, insert your hard-drive into the dock, power on the drive and wait a few seconds. The drive appears in the Places menu; choose it, enter your sudo password to mount the drive, and the drive appears on your desktop. When you are done with the drive, simply right-click the drive’s icon, choose “Unmount” and wait for any remaining data to be written to the drive. Once the drive icon disappears off the desktop, you can power off the drive in the docking bay, then press the eject button to remove the drive.
Dealing with differently sized drives, I tried a half-height Seagate 500GB drive I have (see photos). The spring-loaded flap on the top of the unit was able to hold the drive in place without a problem. Trying a 2.5″ notebook HDD, the docking bay provides a cut-out section that allows you to insert the 2.5″ HDD, though the flap does not press directly against the drive.
Conclusion
The convenience of a hard-drive docking station cannot be overstated. This unit provides a simple, effective interface. For AUD$25 it’s cheap, and in the couple of months I’ve been using this unit, it has proven to be very reliable.
While this unit is not exactly the most elegant-looking of devices, it does the job and does it well.
Virtualbox is a great desktop virtualisation tool, but one of its annoying installation niggles is that when you set up and run a virtual machine, you can’t attach any USB devices to it at all because all the USB options in Virtualbox are greyed out.
There is a raft of different solutions to this problem out there, ranging from adding an extra line to the /etc/fstab file to modifying your udev rules, but the real cause of this problem is simply that your login name does not have permission to access Virtualbox’s USB driver, which interfaces between the VM’s virtual USB hardware and your real USB stack.
During the initial installation process, Virtualbox sets up a new group called vboxusers, but it doesn’t put your login name into it. Since using USB in Virtualbox occurs at the device level, your normal user permissions that allow you to run up virtual machines in general are not enough to manipulate Virtualbox’s USB driver. As a result, you cannot tell Virtualbox to attach a given USB device to your virtual machine.
Like most things, this is easily fixed of course.
If your login name is johndoe, all you need to do is jump into a terminal and type in:
$ sudo adduser johndoe vboxusers
…which will add the user johndoe to the vboxusers group.
Now close all applications and windows, and log yourself out of Ubuntu. You don’t need to reboot, but you can if you’re the kind of person who enjoys the subtle pleasures of watching your PC start up.
Log yourself back in again as normal. This will read in your new group membership.
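If you want to double-check that the group change has taken effect, type:

$ groups

…and look for vboxusers in the list.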
Fire up Virtualbox and start your virtual machine(s) as normal. You will now find that you can attach USB devices to all your VMs via the Virtualbox Devices menu without any further ado.
Pat yourself on the back – you’re done.
Until Sun Microsystems modify the deb installer to add the current login to the vboxusers group during install, these instructions should apply to just about any version of Virtualbox sporting the problem, on any Linux distro.
Ubuntu 9.10 (Karmic Koala) has a curious bug in the 64-bit Intel/AMD version whereby on some systems Flash plays perfectly, but the Flash application does not recognise any mouse clicks in it. This means on sites such as YouTube, you can’t click the mouse to play, pause or seek in a video – you’re forced to use the keyboard.
This is a known bug with the flashplugin-installer package and is currently being worked on by Canonical. In the meantime, if you wish to fix the problem yourself now rather than wait for the official fix, just follow these instructions…
After you have installed Flash in the usual manner, open a terminal and type in the following:
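$ gksudo gedit /usr/lib/nspluginwrapper/i386/linux/npviewer

(That path is nspluginwrapper’s npviewer launcher script, which is what runs the 32-bit Flash plugin on 64-bit installations – adjust it if your installation differs.)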
Once the GEdit text editor (or substitute your favourite) opens, insert the following line just before the last line (should appear in most installations as the fourth line out of a total of five lines):
export GDK_NATIVE_WINDOWS=1
Save your changes and exit your text editor.
Now restart any applications that use Flash, such as Firefox.
In the case of Firefox, go and visit a page that uses Flash. You should now find that you can click in Flash without a problem.
You do remote tech support for clients. One client calls you up needing assistance. You SSH into their machine as usual to check out the problem. You probably also have them on the phone so you can walk them through what you are doing or ask them questions. But long support phone calls can be expensive if you’re on a mobile phone or calling internationally, and it’s tiresome to switch to an IM client window all the time to write comments – especially if the client is not running a graphical session and only has a text server console to look at.
Sometimes actions speak much louder than words, and it would be great for the client to be able to see what you are doing without cumbersome and bandwidth-hogging remote screen tools like VNC. Is there an easy way to collaborate in a terminal?
There certainly is…
First up, log in to the client’s remote machine in question using their login, eg: to log in to the PC at 192.168.0.27 with the username “fred”:
$ ssh fred@192.168.0.27
Once logged in, we need to create a screen session. It needs a name, so I’ll call mine “blah”, but you can make it any name you want. Type in the following (note that the “-S” parameter is uppercase):
$ screen -S blah
Now instruct the client to open a terminal locally and attach themselves to your screen session by typing in the following command (note that the “-x” parameter is lowercase):
$ screen -x blah
You are now both looking at a common screen session. Anything that either of you type along with any command output will be automatically and immediately seen by the other person in real-time!
Once you’ve finished sorting out the client’s problem, terminate the screen session with:
$ exit
You and the client will be both returned to your regular local terminal sessions which you can now close with the “exit” command again.
If you don’t have another machine to try this with, you can try it using two terminal windows on your own local machine. You don’t need to SSH in since you’re already logged in, just run both screen commands in their own respective windows and watch as any new information entered, including command output, appears in both terminals simultaneously.
You are not limited to only having two terminals sharing a screen session – you can have an unlimited number of terminals, remote or local, share one screen session.
Note that the shared screen session only works for the same user login – you cannot have two separate users share a screen, hence the need to log in using the client’s username before setting up the screen session. If the client’s username does not have sudo rights, once inside the screen session simply su to your admin login and then do the administrative work you require, all while your client watches on in amazement. Of course, be aware that the client can also start typing commands whilst you are su’ed into your admin login, so don’t leave your terminal unattended.
Let’s say you have an Ubuntu PC and a second Windows PC or Mac. You need to do a quick transfer of a file or two from the Ubuntu box, but you really don’t want to go through the hassle of installing and configuring Samba or FTP just for the sake of transferring a couple of files.
Of course you could use a USB flash drive, but it takes twice as long to copy a file that way because you have to copy it to the flash drive and then copy it again from the flash drive to the destination PC. Besides that, what if you don’t have a flash drive big enough to transfer the files you want? Is there a quick and dirty way to transfer some files over a network without the need to install additional software to bridge the compatibility divide?
Indeed there is…
NOTE: This method is not suitable for transferring entire directories of files. While it is possible to transfer multiple files this way, it is primarily intended for very small numbers of files, because you have to initiate the transfer of each file manually – you cannot multi-select files for transfer unless you archive those files into a tarball first.
On the Ubuntu PC, open a terminal and type in the following at the $ prompt:
$ python -m SimpleHTTPServer
If this returns an error when you hit Enter, you are probably using an old version of Python, in which case use the following command instead:
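$ python -c "import SimpleHTTPServer; SimpleHTTPServer.test()"

(This runs the module’s built-in test server directly, which should achieve the same thing on older Python 2 releases.)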
When you hit Enter, you should see a message similar to the following:
Serving HTTP on 0.0.0.0 port 8000 ...
What we have done is started a basic mini web server using Python on port 8000 which will now happily serve files from the current directory you started the Python command from! Now open up a web browser on the other PC and, assuming your Ubuntu PC’s IP address is 10.0.0.27, surf to the following web address:
http://10.0.0.27:8000
Voila! A full directory listing of the Ubuntu PC is presented, which you can now navigate and download files from without needing to install any other software to effect a transfer. Just right-click and save like any normal download link on any ordinary website.
If you started the Python command from your Home directory, then the root of the site starts from your Home directory. If you change to another directory before launching the Python command, the web server will serve files from that directory instead. Standard security rules apply – whatever access your Ubuntu user has will be applied to the Python web server. It is not recommended that you run this command as root.
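Incidentally, if port 8000 is already in use (or you simply want a different one), you can append a port number to the command, eg:

$ python -m SimpleHTTPServer 8080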
When you’re done, simply press CTRL+C to stop the Python web server on the Ubuntu PC.
You’ve decided to change where you update your Apt-Mirror archive from (eg: you’ve changed ISPs, or feel that another source is more reliable than your current one to update from).
You’re adding another large repository to your Apt-Mirror archive (such as the next version of Ubuntu) and don’t have the quota to download it, so you’re getting a friend to download it for you from their free server using Apt-Mirror (eg: iiNet and Internode customers can access their respective Ubuntu mirrors for free). You then need to merge it with your own Apt-Mirror archive and have it update from your preferred source afterwards.
So how do you do this? Read on.
Migrating your Apt-Mirror archive to update from a new source
This one is really easy. Let’s say you are updating your Ubuntu mirror from Internode, but now want to get your updates from iiNet. To make this happen you need to change the following files:
Your /etc/apt/mirror.list file needs to be updated to point to the new source, and
the Apt-Mirror record of downloaded files needs to be updated so that it doesn’t waste time trying to re-download the entire mirror, not realising that it already has 99% of the files. This is because Apt-Mirror tracks the files it has downloaded by source URL and filename, not just the filenames themselves.
So let’s go through this.
Open a terminal and load your /etc/apt/mirror.list file into your favourite text editor. In this case I will use the Nano text editor:
$ sudo nano /etc/apt/mirror.list
In your mirror.list file, the lines for updating the Ubuntu 32-bit and 64-bit versions plus source code from Internode will look similar to this:
# Ubuntu 9.10 Karmic Koala 32-bit
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

# Ubuntu 9.10 Karmic Koala 64-bit
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

# Ubuntu 9.10 Karmic Koala Source
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse
We need to change the Internode URL to the iiNet URL, so bring up Nano’s search and replace function by pressing CTRL+Backslash (“\”).
Now type in the text to replace, in this case:
http://mirror.internode.on.net/pub/ubuntu/ubuntu
Press Enter and you’ll be prompted for the text to replace this with. In this case it’s:
http://ftp.iinet.net.au/pub/ubuntu
Press Enter and Nano will find the first occurrence of the Internode text string and highlight it for you. If the selection is correct, press “A” on the keyboard to automatically replace “all” occurrences.
Once the update is done, manually go back and visually verify that all the entries were changed correctly.
When you’re happy, save your changes by pressing CTRL+X, then “Y” and then Enter.
Now we need to update the Apt-Mirror record of downloaded files. First, let’s take a backup of the index in case you stuff up. Type in:
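$ sudo cp /var/spool/apt-mirror/var/ALL /var/spool/apt-mirror/var/ALL_Backup

(The name of the backup copy is arbitrary.)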
Now let’s bring up the original file into the Nano text editor.
$ sudo nano /var/spool/apt-mirror/var/ALL
Depending on how large your index file is, there may be a brief delay while Nano opens it. Once it appears, do the same search and replace as you did in steps 3-6. Note: if the editor comes up blank, then you have not opened the index file – check your path spelling in Step 9 and try again.
Save your changes by pressing CTRL+X, then “Y” and then Enter.
Finally, we need to modify Apt-Mirror’s cache of downloaded files so that its directory structure matches that of the new source. In the case of iiNet, you’ll notice its URL has one less “ubuntu” in it compared to Internode’s URL, so we’ll need to move some directories to eliminate the extra ubuntu directory.
At the terminal, rename the top-level mirror directory to match iiNet’s hostname and move the dists and pool directories of the mirrored files back one directory, using the following commands:
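$ cd /var/spool/apt-mirror/mirror
$ sudo mv mirror.internode.on.net ftp.iinet.net.au
$ cd ftp.iinet.net.au/pub/ubuntu
$ sudo mv ubuntu/dists ubuntu/pool .
$ sudo rmdir ubuntu

(This assumes the default /var/spool/apt-mirror location – adjust the paths if your mirror lives elsewhere.)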
The directory structure now matches iiNet’s server and your ALL file is up to date, so we can now test the changes by launching Apt-Mirror. Launch it manually with:
$ apt-mirror
Watch the output. First, Apt-Mirror will download all the repository indexes from the new location and compare the files listed in those indexes against your local index of downloaded files (the modified ALL file). It will skip all files already listed as present and only download new files not in your local mirror. You should find Apt-Mirror advises only a small subset of data to download – perhaps a few megabytes, or no more than a gigabyte or two, depending on when your last update under the old setup was. If you see that Apt-Mirror wants to download some 30GB or more, then you have made an error in changing the URL in the ALL index file, or you incorrectly renamed the mirror directories. Press CTRL+C to stop Apt-Mirror, and go check your configuration from Step 10.
$ apt-mirror
Downloading 1080 index files using 5 threads...
Begin time: Wed Dec 9 15:59:23 2009
[5]... [4]... [3]... [2]... [1]... [0]...
End time: Wed Dec 9 16:00:45 2009

1.7 GiB will be downloaded into archive.
Downloading 998 archive files using 5 threads...
Begin time: Wed Dec 9 16:02:31 2009
[5]... [4]... [3]... [2]... [1]... [0]...
End time: Wed Dec 9 16:54:15 2009

207.4 MiB in 256 files and 1 directories can be freed.
Run /var/spool/apt-mirror/var/clean.sh for this purpose.
$
You’re done! Pat yourself on the back.
Inserting a foreign Apt-Mirror archive into your own archive
This one is slightly more involved, but it’s not difficult. In the case of a full Ubuntu mirror, let’s say you are adding an Ubuntu Karmic mirror archive taken from iiNet’s mirror servers into your own local Apt-Mirror archive, which features only Intrepid and Jaunty, both of which you update from Internode’s mirror servers. There are some obstacles we need to overcome, such as:
Continuing to perform future updates for the Karmic repository from Internode rather than iiNet.
The foreign iiNet Karmic archive contains lots of files that you already have in your own archive – files that are common between all releases of Ubuntu. How do you filter those ones out and only copy the new files?
Finally, how do you update the Apt-Mirror index file with the potentially thousands of new entries from the foreign archive? How do you avoid duplicate lines potentially confusing Apt-Mirror?
Follow these steps:
First ensure that you have the full copy of the foreign Apt-Mirror archive supplied on a suitable storage medium. Aside from the mirror directory itself (usually under /var/spool/apt-mirror/mirror), you must have a copy of its /var/spool/apt-mirror/var/ALL file. It does not matter if the foreign mirror is not completely up to date, as Apt-Mirror will catch up with what is missing when you run the next update.
Let’s prepare your local Apt-Mirror installation for grabbing Ubuntu Karmic from our preferred source first. We need to load the /etc/apt/mirror.list file into your favourite text editor and add the entries relevant to the new repository we are mirroring. I will use the Nano text editor for this, but you can use any text editor you like:
$ sudo nano /etc/apt/mirror.list
Now we add the entries relevant to Ubuntu Karmic for Apt-Mirror to use. In this case, I am going to update Ubuntu Karmic from Internode, and I will be grabbing both the 32-bit and 64-bit versions plus the source code, reflecting what is already included in the foreign archive on my storage medium (otherwise Apt-Mirror will be doing a LOT of downloading the next time you run it), so I need to add the following entries:
# Ubuntu 9.10 Karmic Koala 32-bit
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
deb-i386 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

# Ubuntu 9.10 Karmic Koala 64-bit
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
deb-amd64 http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse

# Ubuntu 9.10 Karmic Koala Source
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic main restricted universe multiverse
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-updates main restricted universe multiverse
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-backports main restricted universe multiverse
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-security main restricted universe multiverse
deb-src http://mirror.internode.on.net/pub/ubuntu/ubuntu karmic-proposed main restricted universe multiverse
Save your changes and exit the editor using CTRL+X, then “Y” and then Enter.
Make a backup copy of the foreign mirror’s /var/spool/apt-mirror/var/ALL file, so you can revert to it if you make a mistake. Call the copy something like ALL_Backup.
Now open the foreign mirror’s original /var/spool/apt-mirror/var/ALL file into your favourite text editor.
Use your text editor’s search and replace function (in Nano, press CTRL + Backslash “\”) to replace the URL of each entry in the foreign mirror’s ALL file to the URL of the mirror you will be performing your future updates from. In the case of changing iiNet URLs to Internode URLs, you would replace any occurrence of the text string:
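http://ftp.iinet.net.au/pub/ubuntu

…with the string:

http://mirror.internode.on.net/pub/ubuntu/ubuntu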
Once updated, save your changes and close your text editor.
Now we need to merge the modified foreign ALL file into the ALL file from your local Apt-Mirror setup. First up, rename the modified foreign ALL file so we don’t confuse it. For this tutorial, I will assume your foreign mirror is supplied on an external USB hard-drive called “myhdd” and is simply a copy of the foreign system’s /var directory in its entirety. The following will rename the file from ALL to ALL_modified in a terminal:
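$ sudo mv /media/myhdd/var/spool/apt-mirror/var/ALL /media/myhdd/var/spool/apt-mirror/var/ALL_modified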
Now concatenate the original ALL file and the modified foreign mirror’s ALL_modified file into one new file called ALL_new in your local Apt-Mirror’s var directory. Concatenating alone would leave duplicate lines, so we also need to sort the file so that any lines appearing in both the local and foreign ALL files are brought together, then remove the duplicates. We can do all of this in one hit with:

$ cat /var/spool/apt-mirror/var/ALL /media/myhdd/var/spool/apt-mirror/var/ALL_modified | sort | uniq | sudo tee /var/spool/apt-mirror/var/ALL_new > /dev/null

The cat part of the command simply joins the content of /var/spool/apt-mirror/var/ALL and /media/myhdd/var/spool/apt-mirror/var/ALL_modified into one big stream, but before it’s written to a physical file, the concatenated data is “piped” using the pipe symbol “|” into the sort command, which sorts it into alphabetical order and groups any duplicate lines together. Before that resultant output is written anywhere, the sorted data is piped again into the uniq command, which automagically removes all duplicate lines, leaving one unique copy of each line. Finally, sudo tee writes the result to our physical destination file at /var/spool/apt-mirror/var/ALL_new. We use sudo tee at the end rather than a plain “>” redirection because only the root and apt-mirror users can actually write to the /var/spool/apt-mirror/var directory, and a “>” redirection would be performed by your own (unprivileged) user.
Alternatively, we can replace the “| sort | uniq” part with “| sort -u”, which does the exact same thing, since the sort command has its own “unique” functionality as well. I’ll leave it up to you which way you’d like to go.
Check your new /var/spool/apt-mirror/var/ALL_new file and you will find it now contains all your local and foreign mirror’s entries in alphabetical order and with no duplicate lines. If you’d like to see how this worked, re-work Step 10 without the sort and uniq commands or the pipe characters and see how it affects the output file. Try adding just the sort or just the uniq command too.
Now rename your local mirror’s original ALL file because we’re about to replace it with the new one:
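$ sudo mv /var/spool/apt-mirror/var/ALL /var/spool/apt-mirror/var/ALL_old
$ sudo mv /var/spool/apt-mirror/var/ALL_new /var/spool/apt-mirror/var/ALL

(Again, the name given to the old file is arbitrary.)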
Right, that’s the index taken care of. We’re nearly done! Now we only have to merge the foreign mirror’s actual files into your local mirror. Once again, for the purposes of this tutorial, I’m going to assume they are stored on an external USB hard-drive called “myhdd” as a copy of the foreign system’s entire /var directory, so the path to the foreign mirror’s files will be /media/myhdd/var/spool/apt-mirror/mirror – got that? Let’s change to that directory now in a terminal to save us having to type so much:
$ cd /media/myhdd/var/spool/apt-mirror/mirror
Now, the observant of you may have noticed that Apt-Mirror stores its mirrored files using a directory structure that follows the path of the URL the data is obtained from, so in the case of a mirror from iiNet, there is a directory here called ftp.iinet.net.au. You can see it by using the ls command to list the directory contents:
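$ ls
ftp.iinet.net.au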
We need to modify the foreign directory names and structure to exactly match that of the URL path your local mirror updates from. Starting with the obvious, we need to rename the ftp.iinet.net.au directory to be mirror.internode.on.net with:
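$ sudo mv ftp.iinet.net.au mirror.internode.on.net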
Next we need to create an extra subdirectory called “ubuntu” because Internode’s URL path is mirror.internode.on.net/pub/ubuntu/ubuntu/ and iiNet’s path is ftp.iinet.net.au/pub/ubuntu/ only:
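$ cd mirror.internode.on.net/pub/ubuntu
$ sudo mkdir ubuntu
$ sudo mv dists pool ubuntu/

(We’re still inside the foreign mirror’s directory from the previous step; the mv pulls dists and pool down into the new ubuntu subdirectory.)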
With the directory structure and directory names all amended, we are now ready to merge the foreign mirror’s files into your local mirror. We will do this using rsync. This tool is traditionally used to make backups, and is indeed used to keep the official worldwide Ubuntu mirrors 1:1 with the master archive, but in our case we are using it to add the “missing” files from the foreign mirror to the local mirror whilst skipping the files that are already present. That means instead of copying around 60GB worth of data from the foreign mirror, we’ll only copy a fraction of that, saving us time and drive space:
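$ sudo rsync -a --progress /media/myhdd/var/spool/apt-mirror/mirror/ /var/spool/apt-mirror/mirror/

(As before, this assumes the foreign mirror is mounted at /media/myhdd.)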
The “--progress” parameter allows you to see which file is being copied over. You may see a large number of directory names whizz past because those directories don’t have any files that differ between your current Ubuntu Intrepid and Jaunty mirror and the Karmic mirror you are merging. Unfortunately, rsync does not provide an overall progress indicator – it only shows progress for the file it is currently working on. This process can take several hours to complete depending on how much data needs to be copied and the speed of the storage medium containing the foreign mirror (which if on a USB HDD can take a looooong time).
Once RSync has finished, it will give a summary of what was copied. If you were to run the rsync command in Step 16 again, you will see it finish rather quickly because there is no data that has changed or is missing anymore.
Now we just quickly ensure that all the merged foreign files belong to the Apt-Mirror user with:
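$ sudo chown -R apt-mirror:apt-mirror /var/spool/apt-mirror/mirror

(This assumes the standard apt-mirror user and group names.)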
And now we are ready to try a manual update to see if it all worked. Execute the Apt-Mirror application manually and you should see that it reads in the new repository entries you added to your /etc/apt/mirror.list file in Step 3 and compares the files listed in those indexes against your local index of downloaded files (the newly modified ALL file). It will skip all files already present and only download new files missing from your local mirror. You should find Apt-Mirror advises only a small subset of data to download – perhaps a few megabytes, or a gigabyte or two, depending on when your last update under the old setup was and how old the foreign archive is. If you see that Apt-Mirror wants to download about 30GB or more, then you have made an error in changing the URL in the ALL index file or in renaming the mirror directories. Press CTRL+C to stop Apt-Mirror, and go check your configuration from Step 5.
$ apt-mirror
Downloading 1080 index files using 5 threads...
Begin time: Wed Dec 9 15:59:23 2009
[5]... [4]... [3]... [2]... [1]... [0]...
End time: Wed Dec 9 16:00:45 2009

1.7 GiB will be downloaded into archive.
Downloading 998 archive files using 5 threads...
Begin time: Wed Dec 9 16:02:31 2009
[5]... [4]... [3]... [2]... [1]... [0]...
End time: Wed Dec 9 16:54:15 2009

207.4 MiB in 256 files and 1 directories can be freed.
Run /var/spool/apt-mirror/var/clean.sh for this purpose.
$
If all is good, then pat yourself on the back. You’ve successfully merged the foreign repository and it will now update from your preferred ISP’s mirror from now on.
Scenario: You have a box running Ubuntu Server. Something happens to the box and you decide to move the hard-drive to another physical machine to get the server back up and running. The hardware is identical on the other machine, so there shouldn’t be any issues at all, right?
The machine starts up fine, but when you try and hit the network, you can’t. Closer inspection using the ifconfig command reveals that there is no “eth0” adapter configured. Why?
Here’s how to fix it.
Ubuntu Server keeps tabs on the MAC address of the configured ethernet adapter. Unlike Ubuntu Desktop, you can’t simply change network cards willy-nilly – while Ubuntu Server does detect and automatically set up new cards, it won’t automatically replace an adapter already configured as eth0 with another one, so you need to tell Ubuntu Server that you no longer need the old adapter.
This problem can also appear if you have a virtual machine such as one from Virtualbox, and you move or copy it from one host to another without ensuring that the MAC address configured for that VM’s ethernet adapter is 100% identical to the previous one.
These instructions were done with Ubuntu Server 9.04 Jaunty Jackalope in mind, but should apply to just about any release.
Since you can’t SSH in, you will need to login directly on the Ubuntu Server console as an appropriate user with sudo rights.
Once logged in, type in the following and hit Enter:
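$ sudo nano /etc/udev/rules.d/70-persistent-net.rules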
You are now presented with the Nano text editor and some info that looks similar to the following:
# This file was automatically generated by the /lib/udev/write_net_rules
# program, run by the persistent-net-generator.rules rules file.
#
# You can modify it, as long as you keep each rule on a single
# line, and change only the value of the NAME= key.

# PCI device 0x8086:0x1004 (e1000)
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="0a:03:27:c2:b4:eb", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"
Delete the last two lines, or simply comment out the SUBSYSTEM line at the end. This is a rule defining what MAC address should be explicitly assigned to “eth0”. Since you no longer have an ethernet card with the specified MAC address in this machine (it’s on the old PC, remember), Ubuntu Server effectively ignores your new ethernet adapter because its MAC address does not match the defined rule for “eth0”.
Once you’ve made your changes, press CTRL + X and then Y and then Enter to save your changes.
Now reboot your box with:
$ sudo reboot
Upon reboot, Ubuntu Server will detect the “new” ethernet adapter in your PC and will automatically write a new rule into the /etc/udev/rules.d/70-persistent-net.rules file, thus enabling networking over eth0 for your server.
To verify that the new adapter is working, type in:
$ ifconfig
…and you should see eth0 now listed with your defined IP address.
Test remote connectivity to the server and if all is well, then pat yourself on the back. You’re done.
You know how it is – you take a client’s Windows-based machine and do a dual-boot installation of Ubuntu (which replaces the Windows Master Boot Record, or MBR, with GRUB and sets up an option to boot Ubuntu or Windows) so the client can evaluate Ubuntu. Then later on, for whatever reason, Ubuntu is no longer wanted. It’s removed, and you need to restore the system’s ability to natively boot Windows directly, without a GRUB menu.
You’re probably thinking “why the hell would anyone want to do that?!”… well, the fact of the matter is you sometimes come across a client who is just too set in their ways and refuses to use anything but Windows. So yes – sometimes you need to restore the Windows MBR. But how do you do that when you don’t have a Windows CD handy?
Well, here’s how to do it using nothing but an Ubuntu 9.10 (or later) LiveCD.
It’s a little-known fact that the Windows bootloader is nothing special. In fact, it contains nothing proprietary to Windows at all. All the Windows bootloader does is look for the partition marked as “bootable” or “active” and transfer control of the boot process to it.
And would you know it? The Ubuntu LiveCD has a binary image of a generic open source bootloader that does just that!
Boot your soon-to-be-Windows-only machine using the Ubuntu 9.10 (or later) LiveCD. Doesn’t matter if it’s the 32-bit or 64-bit version.
Once booted on the LiveCD, open a terminal by going to the Applications menu and then choose Accessories and then Terminal.
Find out what the designation of the Windows drive is (generally it will be the first drive, eg: /dev/sda or /dev/hda). If you are not sure, issue the command:
$ sudo fdisk -l
…and review the output, looking for your NTFS Windows partition. Make note of the drive that partition resides on (not the partition itself), eg: “/dev/sda”, not “/dev/sda1”.
Now type in the following (remembering to substitute the correct drive device name for your setup in place of “/dev/sda”):
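$ sudo dd if=/usr/lib/syslinux/mbr.bin of=/dev/sda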
…which will write the image of a standard MBR contained in the /usr/lib/syslinux directory of the LiveCD environment to the first hard-drive, overwriting GRUB.
WARNING: Do NOT use a partition designation, eg: “sda1” or “sda2”, etc. This will overwrite the start of that partition, which will effectively destroy data. The MBR exists at the start of the drive only, so only specify “sda” with no number on the end.
Shutdown and reboot. Windows should now start “natively” without GRUB appearing at all.
Normally I’d say “pat yourself on the back” here, but it’s Windows… ;-)
I finally got around to upgrading the server that serves this very page you’re reading to Ubuntu Jaunty today, up from Ubuntu Hardy. Yes, I know – maybe I should have waited for Karmic, or even Lucid – but the biggest reason I did this was that I’ve migrated this server from the little Pentium 4 Shuttle XPC that was in use before onto a Virtualbox 3.0.6 headless VM hosted on an Ubuntu Jaunty box running on top of an Intel E5200 CPU.
You’re probably wondering why I’d use an E5200 when it doesn’t have hardware virtualisation features built in. Well, the server consumes very little juice compared to the Pentium 4 (26% less, in fact), it’s more powerful, it’s cheap and cheerful, produces less ambient heat, is a heck of a lot quieter, and there’s loads of CPU time left over to do other things outside of the VM on the host side.
CPU-wise, when the server gets really busy I’ve seen spikes as high as 50%, but it never exceeds that, so as far as I’m concerned, it’s fine. If I ever need this box to do anything more significant, I’ll upgrade the CPU to something that does have VT-x later on.
If you see anything unusual/missing/dead from today onwards, please let me know in a comment!