Archive for the ‘PC’ Category

Kindle Touch landscape mode – Ubuntu / Linux fix (my solution…)

December 5, 2011

I got my Kindle Touch Wi-Fi with special offers last week. More info on this nice device at the Amazon Kindle Touch page, the Kindle Wikipedia article page, and of course Google searches will do you lots of good.

Granted, I’m busy, so I haven’t been able to maximize its use yet, but I’ve already been able to play with it and organize my books and personal documents. Overall I like the look, feel, and functionality of the Kindle Touch (or KT), and you can search for reviews with Google, including this good Kindle Touch review by CNET. For reading novels in the native formats accepted by Kindle (e.g. .mobi, .azw files), the pinch-and-zoom and the swipe (or just tap) to go to the next/previous page, among other KT features, are great.

As a researcher though I have lots of PDFs to read, many of which contain text in mathematical notation (such as those typeset with LaTeX). Of course the KT can handle PDFs, and even though you can pinch-and-zoom as well as swipe to pan through a page or between pages, reading a PDF file this way can be really cumbersome, especially since the KT’s default viewing orientation is portrait mode. It would be nice if we could rotate the KT to landscape orientation to better read PDF files. However, according to an official statement from a Kindle Customer Support representative, landscape mode is not available on the KT. Bummer. I immediately emailed Amazon (all of you should! 🙂 ) asking for a future software/firmware update to automatically change orientation on the KT, and when this might arrive.

Right now, a sort of “hack” is possible that allows Ubuntu (and other GNU/Linux distro) users like myself to read our PDF books and files in landscape mode on the KT. One answer is the pdftk commandline tool, which I made a post about some time ago. You can also refer to the man page on your GNU/Linux distribution after installing pdftk. In Ubuntu, a quick apt-get or Synaptic installation should do the job for you (check my pdftk post above, or search this blog). The “hack” goes like so:

Say you have a PDF file named mydoc.pdf. To rotate the entire PDF file (assuming it is in portrait orientation by default) 90 degrees counterclockwise (so that mydoc.pdf ends up in landscape orientation), fire up a terminal and type:

$ pdftk mydoc.pdf cat 1-endW output tmp.pdf

Here tmp.pdf is the desired output filename of the re-oriented (now landscape) version of mydoc.pdf. Now you can copy or email tmp.pdf to your KT and read your PDF file in landscape mode. 🙂 I’ve yet to check whether pdftk works on Mac OS (I won’t be surprised if it does), though I believe the solution there might turn out to be more “graphical” or point-and-click in nature than my commandline solution above. 🙂 A user on the Amazon KT support page above mentioned using a professional version of Adobe Reader to do this graphically, perhaps on Windows and on Mac OS as well. I’d appreciate it if somebody would post a link on how to do this graphically in Ubuntu (GNU/Linux), Mac OS, and even in….Windows… 😉 🙂

Happy hacking and Kindling. 🙂

Find files before/after/in between specific dates

February 14, 2011

Hello folks. \\//,

Well here I am, reading a technical research paper on membrane computing and Petri nets, when I decide to download some of the reference materials (conference/journal technical papers) of the paper I’m currently reading. So I download the references from IEEE Xplore, plus other non-reference materials that are still related to the topic I’m reading.

Eventually, I download around 10 PDF files, with filenames made up purely of integers plus, of course, the .pdf extension. Problem is, I’m too lazy (not that much really, I just don’t want to keep doing this manually over and over) to specify each PDF filename, add them to a compressed archive (in this case a RAR file), and then send them to another machine, person, or account of mine for archiving. Essentially, I just want that: to get the PDF files I recently downloaded for this specific topic, exclude the other PDF files in the same directory, and compress them into a single RAR file.

How to go about doing this?

Luckily there are for loops and the nifty find command in *nixes like my Ubuntu box. 🙂 So what I simply do is

for i in `find . -type f -cnewer startdate ! -cnewer enddate`; do rar a myrarfile.rar $i; done

which means: given 2 reference points, the files startdate and enddate, I loop over all files whose timestamps fall between those of the two reference files and add them to the RAR file myrarfile.rar. (Strictly speaking, -cnewer compares inode change times; use -newer if you want modification times.)
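
If your GNU find is recent enough (findutils 4.3.3+), you can skip the reference files entirely with -newermt, which compares against a date string, and use -print0/xargs -0 to survive odd filenames. Here is a small self-contained sketch; the dates, filenames, and the /tmp demo directory are all made up for illustration, and tar stands in for rar so it runs anywhere:

```shell
# Demo in a scratch directory: archive only files modified between two dates.
mkdir -p /tmp/datefind-demo && cd /tmp/datefind-demo && rm -f papers.tar
touch -d "2011-01-01" old.pdf       # outside the date window
touch -d "2011-02-05" wanted.pdf    # inside the date window
# GNU find can compare against date strings via -newermt, and
# -print0 | xargs -0 keeps filenames with spaces intact.
find . -type f -newermt "2011-02-01" ! -newermt "2011-02-14" -print0 \
    | xargs -0 tar -cf papers.tar   # swap tar for 'rar a myrarfile.rar'
tar -tf papers.tar                  # lists only ./wanted.pdf
```

The same find pipeline feeds rar just as well; only the archiver invocation changes.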

Presto. Problem solved, learned something new. Now, back to reading research papers. 🙂



Virtualbox shared folder access: Mac OS X host with Ubuntu 10.04 as guest

November 13, 2010

Whew. It’s been a while since I’ve done anything here. 🙂 Now time to do some geeky blogging (and so much more soon) once again mis amigos y amigas. 🙂

Tech specs of the setup


Mac OS X Version 10.5.8

$ uname -a

Darwin theorylabs-P-System-iMac.local 9.8.0 Darwin Kernel Version 9.8.0: Wed Jul 15 16:55:01 PDT 2009; root:xnu-1228.15.4~1/RELEASE_I386 i386

VirtualBox (non-OSE version, but still free) Version 3.2.10 r66523


Ubuntu Lucid Lynx 10.04 32bit

Setting it up

Essentially just add a shared folder using VirtualBox, whether a VM is running or not. In the guest OS, create a directory where you want your host OS’s files to be mounted (with R or R/W permissions).

Then make sure the guest additions are successfully installed in the guest OS. This step is easily and quickly done by mounting the Guest Additions ISO into the guest OS, then letting Ubuntu 10.04 detect the autorun script. It will warn you that running certain scripts can pose a threat to your system; we go ahead knowing that the ISO is from Oracle. Alternatively, you can run the script by double-clicking on it or from a terminal.

Once the guest additions have been successfully installed, the following command mounts the host OS’s folder onto the directory we just created in the guest OS:

sudo mount -t vboxsf virtualbox_shared_folder_name guest_os_directory_path

Here virtualbox_shared_folder_name is the share name you entered in the VirtualBox shared folder setting, which is not necessarily the actual directory name of the Mac OS X folder you are sharing, and guest_os_directory_path is the mount-point directory created earlier.
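
If you’d rather not run mount by hand after every boot, an /etc/fstab entry in the guest along these lines also works. This is only a sketch: the share name, mount point, and uid/gid values below are assumptions you should adjust for your own setup (uid/gid 1000 is typically the first user created on Ubuntu):

```
# /etc/fstab entry in the Ubuntu guest: mount the VirtualBox share at boot,
# owned by uid/gid 1000 so a normal user can read and write it
virtualbox_shared_folder_name  /home/user/hostshare  vboxsf  uid=1000,gid=1000  0  0
```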

On the VirtualBox forums, several users note that changing one’s current directory in the guest OS to / (the root directory of the filesystem) before mounting helps, although this wasn’t the case for me.

Hope that helps ladies and gents. Questions are very much welcome. 🙂


Short review on ‘The Big Bang Theory’ episode ‘The Einstein Approximation’

February 3, 2010

Warning: For those who haven’t seen this episode, spoiler alert!

This is the first, and hopefully won’t be the last, of a series of short reviews I’ll try doing each week for ‘The Big Bang Theory’.

This week’s The Big Bang Theory (TBBT) episode, ‘The Einstein Approximation’, came out, and it is the 14th episode of the show’s 3rd season.
Let me just start this quick and short review of the episode by stating what the guys there and I have in common, apart from the quite obvious fact that we’re all geeks/nerds at heart.
Even before TBBT, I’ve admired and idolized Einstein myself, because of his great mental feats (which were, of course, backed up by the physical theories and experiments of his time). Great because, by the sheer power of his mind, Einstein was able to revolutionize our lives and the 20th century, paving the way for faster transportation, not to mention the telecommunication and computing that drove, and still drive, today’s information revolution. And of course many more benefits that we more or less take for granted in our daily lives. In fact, ‘Einstein’ is oftentimes synonymous with the word ‘genius’.
Einstein was also very much interested in philosophy and politics, not just physics. He wrote several books, articles, and letters to people outside the scientific community. He also had a quirky sense of humor, as seen in this picture of him. At first I thought this photo of Einstein was edited, but as it turns out it was really him, tongue hanging out and all. 🙂 It was taken while he was making fun of people taking pictures of him. Great stuff.

Silly Einstein
Of course Einstein is not without criticism. Great and accomplished a scientist he may be, history tells us he left much to be desired as a father and a husband.

Now, back to the episode review. At this point I shall establish a partially objective, partially subjective point system for each episode, calibrated against the first 2 seasons (which I have watched at least 2 times…).
Let me just start off by saying this is a classic Sheldon episode, which is great in itself. Again we expected lots of ‘weird’ humor: Sheldon’s ability to complicate relatively simple things, as well as him belittling his friends, most noticeably Penny. Hilarious stuff once again. Bravo to TBBT production team.
Not a lot of sci-fi or comic book references were made, though. But lines such as:

Howard: How long has he been stuck? (referring to Sheldon)
Leonard: Umm…intellectually about 30 hours, emotionally about 29 years.


Howard: Have you tried rebooting him? (referring to Sheldon)
Leonard: No I think it’s a firmware problem.

are classics. 🙂

The part where Leonard and Sheldon argue inside the ‘ball play room’, with Sheldon going ‘bazinga’ every time, was also hilarious.

Sheldon, and of course the rest of ‘the guys’, are fans of Einstein, no doubt. Sheldon thinks he’s on the same level as Einstein, so he tries to do what Einstein did in order to arrive at an epiphany like the special theory of relativity: take a menial job that occupies his basal ganglia with a routine task, apparently freeing his prefrontal cortex to solve his physics problem.

Another classic moment in this episode is the guest appearance of Yeardley Smith, the not-so-well-known voice actor behind the famous cartoon character Lisa Simpson (yes, of The Simpsons fame). An absolutely entertaining piece of the episode.

Another classic dialog is again with Sheldon and Penny:

Penny: What are you doing here?
Sheldon: A reasonable question. I asked myself, what is the most mind-numbing, pedestrian job conceivable? And 3 answers came to mind: toll booth attendant, an Apple Store “Genius”, and “what Penny does”. Now, since I don’t like touching other people’s coins, and I refuse to contribute to the devaluation of the word “genius”, here I am (meaning at The Cheesecake Factory).

Lines like these make me think of the real meaning and application of LOL. 🙂

I suppose those guys and I, as well as the show’s production team, can’t help cracking jokes at Apple. 😀

Overall I’d give this episode the following scores:

* reference to sci-fi, comic books, and other geek/nerd pop culture: 6/10

* reference to physics and other fields of science: 9/10

* dialog humor factor: 9/10

* techie/technology factor: 8/10

which gives an overall score of: 8/10


Thoughts on Ubuntu/Kubuntu 9.10

December 2, 2009

After more than a month since Ubuntu/Kubuntu 9.10 codename Karmic Koala was released, here are some thoughts and noteworthy things about it:


Ubuntu 9.10, among other recent Linux distros, now uses Grub2. I didn’t read all the release notes, and when I got curious and went looking at my menu.lst to see what Grub2 had in store for me, I was in for a surprise: Grub2 doesn’t use menu.lst anymore. It seems menu.lst belongs to legacy Grub. Initially I disliked this, having used menu.lst since I started with Linux (6 years ago). The menu.lst has now been superseded by the /boot/grub/grub.cfg file. However, grub.cfg is preferably not edited by the user, as it is automatically generated by scripts such as grub-mkconfig. On reflection, I think it’s a good idea to have this boot file generated by a script, which enforces rigid syntax and rules in order to produce a proper boot config file. Of course, nobody is really stopping the hacker inside any of us from editing the file, creating our own menu.lst, or even reverting to legacy Grub versions.



Ubuntu 9.10 also uses an updated version of Xorg (as of this writing I’m on version 1.6.4) that doesn’t use an xorg.conf anymore. As Ben Grimm/The Thing puts it, “What a revoltin’ development!”. Again I was initially irked by this, but later realized it was for the better, since xorg.conf was becoming too cryptic for newer Linux users. The tasks xorg.conf handled are now spread across several other config files and scripts.

Which got me thinking: since xorg.conf has been deprecated, what of the ctrl+alt+backspace shortcut that we (or at least I) have grown to love for restarting X? It turns out the ‘dontzap’ option doesn’t work anymore; I tried it. The ‘dontzap’ directive worked in 9.04, but apparently not in 9.10 onwards. One way to turn ctrl+alt+backspace back on in Kubuntu via a graphical method is as follows:

Enabling Ctrl-Alt-Backspace for Kubuntu

  • Click on the Application launcher and select “System Settings”
  • Click on “Regional & Language”.
  • Select “Keyboard Layout”.
  • Click on “Enable keyboard layouts” (in the Layout tab).
  • Select the “Advanced” tab. Then select “Key sequence to kill the X server” and enable “Control + Alt + Backspace”.

Or, for command line folks like me, by doing

setxkbmap -option terminate:ctrl_alt_bksp
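
Note that setxkbmap only lasts for the current session. One way to persist it (an assumption about your setup: that your session sources ~/.xprofile at login; some setups use ~/.xsessionrc instead) is:

```shell
# Append the zap option to ~/.xprofile so it is applied at every login.
echo 'setxkbmap -option terminate:ctrl_alt_bksp' >> "$HOME/.xprofile"
tail -n 1 "$HOME/.xprofile"   # show what we just added
```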


Deprecation of hal

Several replacements, including devkit-power and devkit-disks, are being used in place of hal. So far no issues here, and I think power as well as disk management in 9.10 is doing great. I do still see hald running on my machine; I wonder why that is.

Some caveats and surprises

I’ve had several issues with my initial install of the Kubuntu 9.10 32-bit desktop. One was that Dolphin crashed a few times while I was dragging and dropping files into VLC’s playlist window. These crashes happened in the first 2 to 3 weeks after installation, despite my constant updating of the system. The great thing, however, is that KDE Wallet, coupled with the revamped bug reporting of KDE 4.3.X/Kubuntu 9.10, makes it so much easier to report bugs nowadays. Eat your heart out, Micro$oft. 🙂 I do wish they’d fix and close the bug soon. 🙂

I haven’t really been using Konqueror a lot lately, since Kubuntu 9.10 now has an ‘Install Mozilla Firefox browser’ option included, even in the live version of the OS, which is nice. In the KDE 3.5.X era I used Konqueror a lot, not as a web browser but as a very useful file, ssh, ftp, samba, what-have-you browser. Nowadays Dolphin is all that except a web browser. I could still use Konqueror as my main file browser, and I might try that some time, but I’ve slowly grown to accept Dolphin in my day-to-day computing tasks. The previews, zoom in/out sliders, different panels, and overall widget makeup of Dolphin make it a delight to use. So far, no issues with Konqueror in 9.10 either.

Dolphin also makes management of drives a breeze, whether they be internal (IDE, SATA etc.) or external (USB drives, media players etc.). The Oxygen theme also looks very sleek and futuristic.

As with my installation of Ubuntu 9.04, ext4 was a marvel to behold, more so since 9.10 uses ext4 by default. Even with apache2 and mysql running at boot, and on older single-core procs, my boot time is well under 30 seconds. Tools like ureadahead make booting much faster.

Conclusion – for now

Other than perhaps some minor setbacks I forgot to mention, plus the introduction of the new technologies I listed above, the 9.10 version of Ubuntu/Kubuntu is a marvelous piece of work: stability-, dependability-, and usability-wise, in my opinion. So far. Can’t wait for 10.04/Lucid Lynx.

IPCop Linux, route command, and network routing

September 16, 2009

This short post is about the dilemma a coworker of mine just had this morning regarding network packets, and a not fully functional IPCop Linux installation.

The Dilemma

The server runs IPCop, which lets a PC act as a firewall appliance. The IPCop server has 2 NICs, eth0 and eth1: eth0 is connected to a Class A private LAN, while eth1 uses a Class C address to connect to the public Internet. The problem, however, was that the Internet was accessible (Google, Yahoo! etc.) but the private LAN machines and addresses were not. The private LAN’s gateway returned ping replies, but the DNS server did not.

Detective Work (i.e. Troubleshooting)

What I did was check all possible causes for this problem: restarted the network, checked the logs for error messages, and so on. Some of these had already been done, but I wanted to be doubly sure myself. I next checked the firewall using the iptables command. There were tens of lines of firewall rules, along with numerous chains. Since I was in a hurry at the time, I decided to skip the detailed checking of the firewall rules for the moment, even though I have experience dealing directly with iptables rather than with the higher-level application firewalls that just modify it. Next I tried to ping the DNS server again. Adding -v to the ping command for more verbose output, I noticed that packets were being successfully sent to the DNS server, but no packets were coming back. I thought to myself that the iptables firewall was one good suspect, but I tried a few more checks before going into the nitty-gritty of the firewall rules. I did ifconfig ethX down and then up (replace X with the number of the NIC you wish to bring down/up), but to no avail.

The Fix

I next checked the routing table using the very useful route command. The static routes looked fine, but the table was rather incomplete, given that the box has 2 NICs. What I mean by incomplete is that the public Class C network had routes for traffic going in and out of its destination network and host, but the private LAN had no route for traffic going into the IPCop server; it only had a route for traffic leaving via the Class A private LAN NIC. Bingo was its name-o. 🙂 Apparently the reason ping packets weren’t making their way back was that they weren’t being routed correctly back to the IPCop server itself. This was further supported by the traceroute command: I traceroute-ed the private LAN DNS server and, as expected, the routing of the packets was all messed up. The traceroute packets for the private LAN DNS server were exiting through eth1, out to the public Internet. No wonder there was no private LAN connectivity! 🙂

So the fix was to add a correct route to the routing table using the route command. The new route should, well, route packets from the private LAN correctly back to the IPCop server, and make sure that Class A private LAN traffic enters/exits via the eth0 NIC. To do this, the command

route add -net NETWORK netmask NETMASK gw GATEWAY

was used. Just replace NETWORK, NETMASK, and GATEWAY with the appropriate values for your network. In our case, NETWORK and NETMASK described the Class A private LAN, and GATEWAY was the gateway of that Class A network.
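
For concreteness, here is what the fix might look like with hypothetical RFC 1918 numbers (not our actual addresses): a Class A private LAN 10.0.0.0/8 reached through gateway 10.0.1.1 on eth0. The guard just makes the sketch a safe no-op on machines where this route makes no sense; the verification step reads the kernel routing table either way:

```shell
# Hypothetical values: Class A private LAN 10.0.0.0/8, gateway 10.0.1.1.
if [ "$(id -u)" -eq 0 ] && command -v route >/dev/null 2>&1; then
    route add -net 10.0.0.0 netmask 255.0.0.0 gw 10.0.1.1 dev eth0 \
        2>/dev/null || echo "route add failed (no such gateway here)"
fi
# Verify: the kernel routing table, straight from procfs.
head -n 3 /proc/net/route
```

On the real IPCop box, `route -n` afterwards should show the new 10.0.0.0/8 entry bound to eth0.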

Sure enough, after adding that static route, the Class A private LAN became accessible. 🙂


In Linux, no cpu-z you see…

June 22, 2009

… which may seem bad at first, but isn’t if you really know how powerful Linux is. In this case you don’t really need to acquire cpu-z-like software, unless of course you’re freaked out by the command line (which we’ll use here). Linux (at least with kernel versions 2.6 and above) has quite an array of commands that give you most of the info cpu-z would on a Window$ box, sometimes less, sometimes more. These commands are especially useful when there’s no graphical interface available, because you’re either doing remote administration or the server simply has no graphical environment installed (this was my case a week ago, which is how I found out about them).

To list information about the CPU enter the command

cat /proc/cpuinfo

To list your PCI devices, type the command

lspci

To acquire information about your installed memory/RAM sticks or modules, one command to do this is

sudo dmidecode --type 17

To check your hard drives, the following commands give you loads of info

cat /proc/diskstats | egrep "^\s+8"
df -hT
ls -lh /dev/disk/by-path/
ls -lh /dev/disk/by-id/
ls -lh /dev/disk/by-uuid/
cat /proc/scsi/scsi

You can then find out disk info by running the following on each node listed (device name in the third column):

sudo fdisk -l /dev/NODE (e.g. sudo fdisk -l /dev/sda, if you have SCSI drives)
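
As a quick sketch, some of the commands above can be combined into a one-shot summary (this assumes a Linux box with /proc and the usual coreutils; dmidecode is skipped here since it needs root):

```shell
# One-screen hardware summary built from /proc and df.
echo "CPU:   $(grep -m1 'model name' /proc/cpuinfo | cut -d: -f2- | sed 's/^ *//')"
echo "Cores: $(grep -c '^processor' /proc/cpuinfo)"
echo "RAM:   $(grep MemTotal /proc/meminfo | awk '{print $2, $3}')"
df -hT | head -n 5   # mounted filesystems, human-readable sizes
```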

There are quite a lot more commands for getting information about the hardware you’re running, without shutting machines down to open them up and inspect the hardware yourself, and without grabbing your hardware’s manual (locally or online) just to get the basics. Especially good for sysadmins like me. 🙂

Ubuntu, Ubuntu 9.04 Jaunty Jackalope, MSI Wind netbook, and everything in between

April 30, 2009

I just downloaded Ubuntu 9.04, codename Jaunty Jackalope (let’s call it Jaunty for brevity’s sake). I’ve been very occupied the past few days with my morning and afternoon/evening work, which is why it took me this long to sit down and check out Jaunty. This post is a quick overview of what it’s like to experience Jaunty, specifically on my MSI Wind U100x.


Jaunty looks much sleeker and more streamlined than previous Ubuntu incarnations, as shown by the loading splash image.

The login screen has also been revamped and kind of feels more like KDE (which isn’t bad in my opinion).

The geeky stuff

Jaunty gives you the option to install your system on the newest ext filesystem, ext4. I tried it out, and though I haven’t done any timing tests, bootup from a clean install seems slightly faster. Of course, ext4 has been released for quite a while now, and one more reason to use it, besides its performance improvements over ext3, is that Ubuntu rarely ships untested software (filesystems not the least of these), so you can be pretty sure ext4 is a safe bet. Plus, there’s support from Canonical.

I’m still plowing through Jaunty, but the news is that ctrl+alt+backspace, used for restarting the X server, doesn’t work by default. To turn it on, edit your xorg.conf and add:

Option "DontZap" "false"

to the ‘ServerFlags’ section, which you should create if it doesn’t already exist. The result should look something like

Section "ServerFlags"
    Option         "DontZap" "false"
EndSection

This can also be quickly found via a Google search. If you want to turn off the restarting effect of ctrl+alt+backspace, change ‘false’ to ‘true’. More info here.

The boys and girls at the MSI Wind forums have been talking about how Jaunty works on the MSI Wind. The MSI Wind wiki even has an entry for Jaunty, found here, though I must say I didn’t need much, or even all, of it to make everything run on my Wind. I’m also quite surprised that the wiki is pretty up to date: last updated April 30 when I last checked. The webcam, wi-fi (which is quite surprising, since wi-fi had been plagued with problems in Hardy and Intrepid, the 2 previous Ubuntu releases before Jaunty), and others work after a fresh install. Of course, for the webcam you’d have to install webcam software such as Cheese, which is readily available in the list of software for download. No config whatsoever, as the links I gave above note. That hassle-free setup is kind of scary (at least for me), since I usually like fiddling with my *nix box via the console, but then again nothing is really stopping me, right?

Ubuntu 8.04.X (Hardy Heron) wi-fi on Wind

As for making wi-fi run on Hardy, I essentially followed what’s written in this part of the MSI Wind forum, particularly this section:

First, you need a proper build environment with the appropriate kernel headers. This is done fairly easily:

sudo apt-get install build-essential linux-headers-`uname -r`

Next, download and unpack the modified driver sources:

tar xvzf rtl8187se_linux_26.1012.0331.2008_modified.tar.gz

Now build them. Note that you’ll need to set an environment variable in order to avoid a certain problem:

cd rtl8187se_linux_26.1012.0331.2008

Now, assuming everything compiled without errors, try starting it all up using the wlan0up script. This will insert the appropriate modules and enable the wireless device. You should then be able to use it with Ubuntu’s network manager.

sudo ./wlan0up

One other thing to note here is that the person who wrote the piece above was using 8.04.1, not just 8.04, so it may not necessarily work for you as-is. Just download (from the same forums) the tar.gz driver appropriate for your Hardy version. I was on 8.04.2 myself, so I used a .1016.0331 driver package instead of the .1012.0331 shown above. Then you can install programs like wifi-radar (to install it, just run ‘apt-get install wifi-radar’) to scope out existing wifi networks around you.

The cool commandline tool I like using is iftop (apt-get install iftop). iftop shows you which networks/hosts/IP addresses you are connecting to (or they to you), along with all the traffic that goes your network card’s way. You run it via

    sudo iftop

which by default lets you view your first NIC, usually your wired connection, so you do a

    sudo iftop -i wlan0

to view the traffic passing through your wireless card (just replace wlan0 with whatever the ifconfig command gives as the device name for your wifi card). Then, assuming you followed the steps above correctly and didn’t encounter errors, you should see arrows pointing to and from your IP address, along with download and upload speeds (in bytes, KB, or even MB, depending on how fast your wireless connection is).


So if you really want to stick with Ubuntu’s current LTS (i.e. Hardy), since it gets software updates till 2011, then try the trick above to make your wifi run. Otherwise, if you don’t really mind updating your system every 12 months or so, go for Jaunty: look, feel, and performance are top-notch. So far 🙂 Knowing Ubuntu’s history of software updates and support, plus the huge community and industry support/help you can get, I think that’s more than enough to make you switch from the ‘other’ popular operating system with 4 colors 🙂

My ordeal with my Seagate Barracuda ST3250310AS and my MSI K8M Neo-V

February 16, 2008

This post is about the trouble I had (not that big in my opinion) in using my new SATA hard disk (hd) on my old motherboard (mobo), how I tried fixing it in my Ubuntu 7.10 installation, and how I finally managed to get it working for me.

I recently bought a spanking new 250 GB Seagate Barracuda ST3250310AS SATA hd, which I decided to install in my old PC, built around the MSI K8M Neo-V mobo shown in Figure 1

MSI k8m neo-v

Figure 1 – MSI K8M Neo-V mobo

I bought this mobo about 3 years ago. I just wanted to expand my storage space, and I’m quite pleased with how this mobo (and the overall performance of the PC it became part of) has performed for me up to today. I’m using Ubuntu Gutsy Gibbon with the latest kernel version:

f@foxhound1:/media/sda2$ uname -a
Linux foxhound1 2.6.22-14-generic #1 SMP Tue Feb 12 07:42:25 UTC 2008 i686 GNU/Linux

And so I took out my old mobo’s manual just to be extra sure I didn’t need any bizarre attachments, jumpers, et al. I’ve had a lot of experience installing and/or debugging SATA hds, both 3.5″ and 2.5″, though the drives I’ve handled have usually been limited to the Seagate brand.

The MSI mobo manual didn’t say I had to do anything special, so the old plug-and-play type of installation should work, or so I thought to myself. After I plugged the Seagate SATA hd into my MSI mobo, I booted the PC and checked that the newly installed hd was at least detected in the BIOS. It turned out the onboard SATA controller of my MSI mobo wasn’t enabled to handle SATA hds (since I had only used PATA hds on this PC previously), so I enabled it. Then I saved my changes and rebooted the PC for the changes to take effect. After the reboot, the post-BIOS messages still didn’t show that I had a new hd (my Seagate ST3250310AS). Wow! Still no SATA hd. I checked the onboard SATA controller option in the BIOS again, and this time I enabled RAID instead of SATA, thinking (as the mobo manual also discusses RAID installation on SATA hds) that might work. I saved and rebooted again, only to find that there was still no indication whatsoever of the new SATA hd in the post-BIOS display messages.

What I did next was re-enable SATA in the BIOS for the meantime and proceed to boot my Ubuntu 7.10 OS. I thought that perhaps, since I had 3 hds + 1 DVD writer and a pretty decent Nvidia graphics card, my new SATA hd needed some time to settle and get powered up. So I got into Ubuntu and checked whether it had detected the new SATA hd. I checked /dev for anything that would resemble a new SATA hd, such as /dev/sd*, /dev/sc*, /dev/hd*, but to no avail. It seemed there was no sign of my new hd in Ubuntu either.

Now I needed to find some traces of SATA hd detection in my Ubuntu OS to see whether the ‘undetection’ problem was hardware- or software-related. I did a

f@foxhound1:/media/sda2$ dmesg |grep -i sata
[ 44.861902] sata_via 0000:00:0f.0: version 2.2
[ 44.861982] sata_via 0000:00:0f.0: routed to hard irq line 11
[ 44.862056] scsi0 : sata_via
[ 44.862099] scsi1 : sata_via
[ 44.862304] ata1: SATA max UDMA/133 cmd 0x0001ec00 ctl 0x0001e802 bmdma 0x0001dc00 irq 16
[ 44.862308] ata2: SATA max UDMA/133 cmd 0x0001e400 ctl 0x0001e002 bmdma 0x0001dc08 irq 16
[ 45.065652] ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
[ 45.469051] ata2: SATA link down 1.5 Gbps (SStatus 0 SControl 300)

which would indicate that my OS detected the SATA controller (notice, though, that ata1’s link is up while ata2’s is down). But why was there no trace in /dev then?? I kept searching for clues to isolate the problem as hardware- or software-related. I went and checked whether the SATA kernel modules were loaded:

f@foxhound1:/media/sda2$ lsmod |grep -i sata
sata_via 12548 0
libata 125168 2 sata_via,ata_generic

which would indicate the sata_via module and the latest libata module were already loaded. I found on the Internet that sata_via and libata are the two kernel modules necessary to access and use SATA hds. So if they’re loaded, where the heck is my new SATA hd???

Searching the Internet to refresh my memory of some long-unused Linux commands, I found and recalled modinfo:

f@foxhound1:/media/sda2$ modinfo sata_via
filename: /lib/modules/2.6.22-14-generic/kernel/drivers/ata/sata_via.ko
version: 2.2
license: GPL
description: SCSI low-level driver for VIA SATA controllers
author: Jeff Garzik
srcversion: 9AEBF233010AB2B667E2553
alias: pci:v00001106d00007372sv*sd*bc*sc*i*
alias: pci:v00001106d00005372sv*sd*bc*sc*i*
alias: pci:v00001106d00005287sv*sd*bc*sc*i*
alias: pci:v00001106d00003249sv*sd*bc*sc*i*
alias: pci:v00001106d00003149sv*sd*bc*sc*i*
alias: pci:v00001106d00000591sv*sd*bc*sc*i*
alias: pci:v00001106d00005337sv*sd*bc*sc*i*
depends: libata
vermagic: 2.6.22-14-generic SMP mod_unload 586

which shows that I had the latest Linux kernel SATA driver available. I then checked whether there were known issues with those SATA drivers, and there weren’t any. WTF??? So why couldn’t I use my hd? Btw, I also rebooted and chose the RAID BIOS option instead of the earlier SATA option, did all the above checks again, and still no new hd for me. Boo hoo.

Finally, I made up my mind that it must not be a software problem after all, but a hardware one. I turned my PC off, then recalled that my mobo manual mentioned a 150 MB/s performance figure for its SATA v1.0 interface. The Seagate SATA hd, on the other hand, can operate in either a 1.5 Gb/s or a 3 Gb/s mode (150 MB/s corresponds to the 1.5 Gb/s signaling rate of SATA v1.0). From my experience handling hds, that must've been it: my mobo only supports the lower-bitrate SATA v1.0 specification, while the drive runs in 3 Gb/s mode by default, so the drive has to be forced into 1.5 Gb/s mode via the jumper settings shown in Figure 2

seagate sata st325 hd jumper config

Figure 2 – Seagate ST3250310AS hd jumper settings

which is printed on my Seagate ST3250310AS hd. So I downloaded the Seagate ST3250310AS manual and verified it. Then I grabbed a jumper from my trusty toolbox (which btw contains some RJ45 connectors, some mobo spacers, a soldering iron, 2 protoboards, several dozen resistors, capacitors, and connecting wires, as well as food coupons and a famous electronics and solid-state devices book, among other things) and placed it on the appropriate pins on my new SATA hd.
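As an aside, newer kernels can impose the same 1.5 Gb/s limit in software via the libata.force boot parameter, sparing you the jumper. A sketch of what the GRUB kernel line might look like (the kernel image name and root= device here are placeholders, adjust to your system):

```
# /boot/grub/menu.lst -- force all SATA links down to 1.5 Gbps in software:
kernel /boot/vmlinuz root=/dev/sda1 ro libata.force=1.5Gbps
```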

I booted my PC up, chose the SATA BIOS option again, and voila! I could see a SATA hd in the post-BIOS messages! (^)__(^) and look! It’s my Seagate ST3250310AS (^)___(^) Hehe

Anyway, before I started celebrating some more, I continued booting into Ubuntu. But before I could successfully load my KDE desktop, I first had a brief problem with my Nvidia drivers that I had to fix. It would seem that Nvidia drivers on Linux aren’t yet as stable as their M$ Window$ counterparts. Tsk tsk tsk. Anyway, I got to my desktop and fired off this command again

f@foxhound1:~$ ls /dev/sd*

And there’s my SATA hd! To further check, I used Gparted, a graphical hard disk partition editor which I was familiar and comfortable with, to partition the hd. I was finally able to use my new SATA hd. To further my pleasure, I checked once again
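For those without X running, the same partitioning can be done from a terminal. A sketch only: /dev/sdb is a hypothetical device name here, so double-check which device is your new disk before running anything destructive:

```shell
# Partition and format the new disk from the command line (DESTRUCTIVE -- verify the device first!):
# sudo fdisk /dev/sdb           # create a partition table and partitions interactively
# sudo mkfs.ext3 /dev/sdb1      # put a filesystem on the new partition
# sudo mount /dev/sdb1 /mnt     # mount it to start using it
# Non-destructive sanity check of which disks are visible:
ls /dev/sd* 2>/dev/null || echo "no SCSI/SATA disks visible"
```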

f@foxhound1:/media/sda2$ cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
Vendor: ATA Model: ST3250310AS Rev: 3.AA
Type: Direct-Access ANSI SCSI revision: 05

And there she (my Seagate ST3250310AS hd) blows! Now, to back up and duplicate my files….

Comments/Questions/Suggestions are welcome, as long as they come in a calm and ruly way (^)__(^)

Linux instant safe (kernel level) reboot and more

December 10, 2007

Finally. Another post after weeks of ‘blog silence’ and constantly watching my ‘views’ count go up and down.

Anyway, this post is about how one can reboot a Linux machine instantly and safely when everything else has stopped responding (even the kernel). The reboot speed is only limited by how fast you type the keys of your keyboard, believe it or not.

Unlike the old-fashioned and not so safe ctrl+alt+del combo, this particular keyboard key combination reboots your machine safely, and more!

Now onto the main topic. First, we need to check if your system already allows this key combination. Issue

$> cat /proc/sys/kernel/sysrq


If the output (just like mine) is a ‘1’, then your system will respond to the combination. Otherwise, write a ‘1’ into the file we just viewed (sysrq), which requires root privileges.
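A sketch of both the check and the fix from a terminal (writing the value needs root, hence the sudo):

```shell
# 0 = disabled, 1 = all SysRq functions enabled, other values are bitmasks:
cat /proc/sys/kernel/sysrq
# Enable everything for the current session (root required):
# echo 1 | sudo tee /proc/sys/kernel/sysrq
```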

Moving on, the key combination is as follows:
hold down

ctrl + alt

then press your

SysRq (usually shared with the Print Screen key)

button and then the following letters/keys (shown with their corresponding meaning/effect):

r - take the keyboard out of raw mode, so the kernel gets key presses back (useful when a frozen X session has grabbed your keyboard)

e - send all processes except init the SIGTERM signal

i - send all processes except init the SIGKILL signal

s - Sync all mounted filesystems (so you won't have to use 'fsck' next time you boot up)

u - Remount all mounted filesystems read-only

b - Reboot immediately (without syncing or unmounting)

Note that you can also use each letter in ‘reisub’ on its own with ctrl+alt+SysRq to effect just that task, without pressing the others: e.g. holding ctrl+alt, pressing the SysRq button, and then the b button immediately reboots without syncing or unmounting your mounted disks.

The point of the key sequence shown above is that, pressed in that order, it is not only the fastest but also the safest way to reboot your machine. Cool huh?

An easy way to remember it, i.e. a mnemonic, is that it’s ‘busier’ spelled backwards. Nice!
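As a bonus, the same magic-SysRq actions can be triggered from a shell, e.g. over SSH when the console keyboard is dead, by writing the corresponding letter into /proc/sysrq-trigger (root required; the reboot line is shown commented out for obvious reasons):

```shell
# Each write triggers one SysRq action, exactly like the keyboard combos:
# echo s > /proc/sysrq-trigger    # sync filesystems (same as SysRq+s)
# echo u > /proc/sysrq-trigger    # remount filesystems read-only (SysRq+u)
# echo b > /proc/sysrq-trigger    # immediate reboot -- do NOT run this casually!
ls -l /proc/sysrq-trigger         # the interface exists whenever SysRq support is compiled in
```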

One final tip: in case you don’t want to reboot and you just want to turn your machine off (either by going through the ‘safe’ key sequence just mentioned, swapping the final letter, or just instantly), you can hold down

ctrl + alt

then press your

SysRq

button followed by

o (the letter ‘o’, for power Off)
Comments and suggestions are always welcome!