****** BT Home Network 1250 doesn't work under Linux Q:: I have been playing around with Linux for about a year, using VMware Workstation on my Windows machine. I know a fair bit about Linux, but I encountered a problem when I decided to install Linux on my real machine. The installation went fine: my problem lies with the network. I have a broadband connection using a BT Home Network 1250 (aka 2Wire Home Portal) connected to a PC, which acts as the router. My machine connects via a BT Home Network PC Adapter (aka 2Wire PC Port) using HomePNA. It works fine on Windows, but there aren't any drivers available for Linux. I have tried NdisWrapper, which just won't work. I am using Fedora. Is there any way I could get my PC port to work with Linux? A:: We had a good look around the usual locations for finding information on USB hardware and came up with nothing. We didn't even find anyone saying it didn't work, or even that they had tried and reached a certain point. Our recommendation would be to go with Ethernet, which can be installed either by using Cat5 cable, or by using a pair of powerline adaptors available from D-Link and other vendors. Back to the list ****** ProFTPD sending email error messages Q:: My Linux server is running ProFTPD, and every day I get these error messages in my mail: --- 'fred.co.uk - notice: 'Freds FTP Server' (x.x.x.x:21) already bound to 'ProFTPD' fred.co.uk - bindings.c:774: notice: unable to create ipbind 'x.x.x.x': Address already in use fred.co.uk (x.x.x.x[x.x.x.x]) - FTP session opened'. ,,, Where can I look to get rid of this? FTP is working fine so I don't think it's a 'real' problem. A:: This is quite a common message and can have several causes - here's the most likely one. ProFTPD can be run in two ways, called standalone and inetd respectively. In standalone mode ProFTPD runs as a daemon and answers incoming FTP requests. 
When run with the inetd option ProFTPD is run by the super server inetd, which invokes ProFTPD when it receives traffic on port 21. It is possible to have ProFTPD running as a daemon and still have inetd/xinetd configured to listen on port 21. You can check the PID of the process listening to this port with the command --- fuser -n tcp 21 ,,, and what process is running under that PID with --- ps -ef | grep xxxxx ,,, where xxxxx is the PID. If you have virtual FTP hosts configured in proftpd.conf you also need to be aware that if ProFTPD is configured to run under inetd then port-based virtual hosts are not supported (it may be possible to play with your inetd configuration and /etc/services to get this to work but this is not something I have tried). The only types of virtual host that you can configure are IP-based. If you are running under inetd and you have a virtual host whose name or IP resolves back to the IP being used by your global ProFTPD configuration (that is actually being used by inetd), you will get the 'unable to bind' error message that you quoted. Each virtual host needs a separate IP. When run as a daemon ProFTPD also supports port-based virtual hosts. So you will need to check what service is listening to port 21, how the ProFTPD service is configured to run and that it does not conflict with your inetd configuration. If you are running under inetd your virtual hosts should be running on different IPs. Back to the list ****** SUSE Linux rebooting without graphical mode - only text mode Q:: Having long wanted to get away from Windows and associated programs I installed SUSE Linux 9.2. Installation went a treat - it was all really easy - up until the reboot, that is. At the end of rebooting Linux asks me for my login and password. Then it says something like 'Have a lot of fun with Linux', then a command line appears: --- lxuser@linux> ,,, or similar. The machine whirrs away for about half a minute and nothing happens. What do I have to do?
A:: It sounds like you have successfully logged in to your new Linux system and are sitting at a shell prompt. One option is to type startx to start up a graphical desktop environment; although the fact that your system isn't booting into a graphical login system needs some investigation. Did you select a server install, or otherwise disable any X packages during the installation? A basic workstation install should keep you well away from any shell prompts. Back to the list ****** How to reset MySQL password Q:: I have a Mandrake system, which I set up with security at the top of my priority list. I chose good strong passwords, disabled a load of services that weren't required, do regular updates and even got an excellent iptables setup off Google to use as a template. I'm now secure, very secure - too secure! I failed to write down the MySQL password, because that's bad security practice, right? And now I can't remember what it is. My organisation is trying to get a database added for our first web application and I can't do it. Is there a way to reset the MySQL password to a default or reinstall MySQL without risking my existing data going to the bit bucket? A:: Fortunately MySQL has thought of just this type of situation! The following should fix you up:
---
/etc/rc.d/init.d/mysql stop
/usr/bin/safe_mysqld --skip-grant-tables &
mysql -u root
,,,
The password is actually kept in an encrypted form in the user table of the mysql database. You'll find yourself at a MySQL prompt - all you need to do is switch to the mysql database, update the password and flush the privilege tables:
---
USE mysql;
UPDATE user SET Password=PASSWORD("your new password here") WHERE User="root";
FLUSH PRIVILEGES;
exit
,,,
Once you're back at the Linux prompt you'll need to bring the backgrounded safe_mysqld to a stop, which is probably easiest by typing fg 1 and hitting Ctrl+C. The last thing left to do is bring the MySQL service back up with /etc/rc.d/init.d/mysql start and give it a test.
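Before handing the database over to your web application, it's worth proving that the new password is accepted. A quick check from the shell (this assumes the mysql client is on your path) is:
---
mysql -u root -p -e "SELECT USER();"
,,,
If it prompts for your new password and then reports the root user, the reset has worked.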
Back to the list ****** Restricting web access with a proxy server Q:: I manage a small network on a residential site, which is looking to restrict staff use of the internet (especially out of hours) to 30-minute sessions per user. The network is a Windows 2000 domain, but the internet area could be on its own subnet linked directly to the router. At the moment we are looking at cheap solutions like Internet Caffe from Antamedia, but I wondered if there was something that could be done through Linux. Perhaps some form of LDAP terminal server using a MySQL database? The Linux Terminal Server Project (LTSP) makes me think that someone else must have asked this question at least once, but the web discussions all seem to head back towards MS servers, which seems a pity. The machines are all low-spec P400/800s, with 128-256MB of RAM, which could possibly be increased. Access to a common shared drive (via CIFS or NFS) and a shared printer (networked Brother) would also be useful. Obviously, all the programmes that you might want are there - MPlayer, RealPlayer, Firefox, Thunderbird, Xpdf, OpenOffice.org, Gaim/Kopete etc. Any suggestions would be greatly appreciated, as the Windows options seem to require fairly careful running. A:: Proxy software such as Squid would be ideal for this, as you can configure it to require authentication and to time sessions out after a given duration. You will know exactly who is accessing sites and what they are doing. You can find Squid at www.squid-cache.org, and there are plenty of example configurations in the documentation. The hardware you're using sounds more than adequate, and nearly all current Linux distributions provide the tools and programs you list. Mandriva, Ubuntu, Fedora or even SUSE are great options for desktop systems as an alternative to Microsoft Windows. Back to the list ****** Dual-booting Linux and Windows Q:: I'm thinking of switching my operating system to Linux.
I am currently running (limping) with Windows XP and have purchased another hard drive to load Linux on to. I understand Fedora expects unassigned disk space, but I'm not sure about Ubuntu. Should I partition the drive first? If so, where should I put a boot manager? A:: You can happily install Ubuntu on to a fresh disk - either manually partition the disk within the installer or let it partition it itself based on the size of the disk and the memory in the system. Once installed, Ubuntu will install its boot loader (Grub) on to the first disk, overwriting the boot loader for Windows. However, during the installation process, Ubuntu will add a boot option for Windows within Grub; so by default you will boot into Linux, but if you want Windows XP you can manually select it at boot time. You could always install Grub into the second disk's MBR and use the BIOS to switch between the two disks at boot time, but this is really confusing and requires a lot of brainpower to sort things out when they break. Back to the list ****** Get Linux to work with a Mylex Acceleraid card Q:: My company recently bought an old but still powerful server for a bargain price on eBay. We loaded Linux on it and it is really providing outstanding value. It has several 18GB SCA 80-pin SCSI drives in a RAID5 array. All this has been up and running for a couple of months now without a single hitch. However, last week one of the drives in the RAID started making a ticking noise, and its light was no longer blinking when all the other drives were. I was curious to see if Linux would be able to interface with the Mylex Acceleraid card and if it would be aware of the issue - and I was pleasantly surprised that it was! There were tens of entries in /var/log/messages from the DAC960 module saying that drive 0:3 had failed. Fantastic. As this server is in no way mission-critical, a total wipe of the data would be fine.
Having this opportunity to practise doing a rebuild with no associated risk, I'd like to see if we can do this 'live' without rebooting the system. We have put a spare drive into the slot but the rebuild has not happened automatically. Do you perhaps know how or if I can initiate this without having to reboot into the BIOS? A:: Marcus, I did some investigating for a very similar question a year or two ago. I've gone back to the resources I found and it appears that they're still valid. The Mylex card, as you have discovered, has excellent Linux support and is a favourite of many sysadmins because of this. The kernel module provides great support directly from the command line without any third-party application being required; although excellent apps are available if you feel like splashing some cash. You can download the very good Mylex Global Array Manager (GAM) software from LSI's website at www.lsilogic.com. LSI has recently bought out Mylex but is still providing support for its products. GAM has a client and a server. The server component installs on to your Linux servers, while the GAM client needs to be run from a Windows-based system. This, in my opinion, is the only downside to using this software - who wants to pay a Windows tax to run RAID monitoring software? The second and preferred choice is to do this from the command line. The DAC960 module creates a directory tree called /proc/rd, where it puts plenty of relevant information about your array. Have a browse there and you'll see lots of info right down to the firmware version of each of the drives in the array. If this is your primary or only controller it will be called c0 (for controller 0). This proc structure also allows you to input data to it in order to issue commands to the controller.
You can enter data into /proc/rd/c0/user_command using echo to perform a myriad of functions, including rebuilding, for example: --- echo "rebuild 0:3" > /proc/rd/c0/user_command ,,, Keep checking your log messages or the proc filesystem and you should see the rebuild taking place. Try viewing the file you just pumped the command into and you should see it giving feedback there too. Mylex put together good documentation on this proc structure in the README.DAC960 file that should be packaged with your kernel's sources. Back to the list ****** No internet connection or printing in SUSE 9.2 with BT Voyager 105 Q:: I used SUSE 9.2 to create a dual-booting system with MS Windows XP. The installation was smooth - but it won't connect to the internet, and it won't print. I've looked at a few forums regarding my USB modem (a BT Voyager 105), and it seems that plenty of other people have had the same problem. There is software out there to drive this modem, but it seems that there's no RPM for SUSE. The next problem is that I don't actually know the difference between source, an RPM and a binary, and what steps I have to go through. I understand the basic concept of compiling etc, but I don't actually have a compiler. To make matters worse, I am still accessing the internet via MS Windows, so after I've downloaded files I need to put them somewhere that Linux can see them when I reboot. This isn't a problem as such, but is rather time-consuming - and frustrating when I don't know if I am doing things right. I'm certain that other people have got stuck on this point. What is annoying is that I have been here before with Storm Linux and got stuck in roughly the same predicament. I prefer the KDE environment to Windows XP - and I love the stability Linux offers - so any pointers in the right direction would be most appreciated. A:: You can get an RPM in one of two different formats: source and binary.
The source RPM contains the original code used to build the binary RPM, and isn't necessary if all you want to do is to install the software. The binary RPM contains the compiled code ready to run. Likewise, software is also distributed in a non-distribution-specific 'source' tarball, containing pure source code; or occasionally in a binary format, which has to be installed by hand. We located some great documentation describing exactly how to set up the Voyager modem under Linux, which can be found at www.lack-of.org.uk/viewarticle.php?article=114. You may want to print it out before rebooting into Linux so you have it as a reference. You can also download information to your Windows C: drive and mount it from Linux with: --- # mount -t vfat /dev/hda1 /mnt/win-c ,,, Back to the list ****** SoundBlaster Live not working in SUSE Linux Q:: I have a Dell 8300 with a Pentium 4, 1GB of RAM and a 120GB disk running XP Home and two logical partitions running Swap and SUSE 9.2, upgraded from 9.1 Pro, which I bought previously. I had no sound on any SUSE applications and, finding a lack of a driver on the web for the non-standard sound chip supplied by Dell, I bought a standard Sound Blaster Live! digital board. This works fine with XP but not with SUSE. I searched the web again but none of the tips worked. Dell said they didn't support Linux and SUSE installation support said they didn't support sound! I also tried MEPIS Linux kernel 2.6.10 and, run from CD, the sound works fine. I installed MEPIS from the CD, but when it's booted with Grub it won't set up sound as it can't find the motherboard. How can I make SUSE work with the Sound Blaster Live! card, and why won't the MEPIS system on the hard disk do the same as the MEPIS system on the CD? A:: Sound Blaster Live! support in the Linux kernel is provided by the Emu10k1 kernel module, so you may wish to manually load that module using modprobe and investigate what the system does.
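A minimal sketch of that investigation from a root shell (the module names here are the standard kernel drivers for this card; on a 2.6 kernel the ALSA module is usually called snd-emu10k1, while older OSS setups use plain emu10k1):
---
# load the ALSA driver, falling back to the older OSS module name
modprobe snd-emu10k1 || modprobe emu10k1
# see whether the kernel detected the card, and on which IRQ
dmesg | grep -i emu10k1
# list the soundcards ALSA has registered
cat /proc/asound/cards
,,,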
Run dmesg and look for any sign that the kernel picked up your soundcard. You can check with dmesg from MEPIS to find out if it uses the same kernel module and if the sound works as you expect. You can then verify within SUSE whether it tries to load the kernel module or whether it fails to initialise part of the sound system. Output from dmesg would be helpful in resolving this problem, as it makes clear whether the kernel module is loaded, which IRQ (interrupt request line) the soundcard lives on and whether there are any conflicts with other devices on the system. Back to the list ****** AbiWord plugin for Psion Word files Q:: I still use my old Psion 5mx as my PDA because nothing else I've seen comes close. My current distro is Fedora, which I am very happy with - except for the fact that it doesn't have the Psion Word plugin. Do you know where I can find an AbiWord build for Fedora that includes the Psion plugin? A:: There is an import/export plugin package, which is distributed at www.abisource.com. Specifically for Fedora, you need to download www.abisource.com/downloads/abiword/2.2.5/Linux/Fedora/3/abiword-plugins-impexp-2.2.5-1.fc3.i386.rpm. Back to the list ****** Linux error messages with DVD writers Q:: I have recently obtained a new DVD writer. I don't have the box or anything, as I got it from a friend who upgraded to dual-layer. I didn't think I'd have too much trouble using it with K3b, dvdtools and so on but there seems to be something wrong. It is quite happy reading and writing CD-Rs and CD-RWs with Linux. It will read DVD-Rs burnt on other equipment, but when I try to burn any DVDs myself I get spurious error messages, sometimes telling me that there is no disc in the drive or the 'media is not ready'. Any ideas or is it just broken? A:: From what you say, there are two possibilities that spring to mind. Either the DVD writer isn't supported by dvdtools (pretty unlikely) or the writer doesn't like the brand of disc you are using.
Many manufacturers only approve a small list of media - other discs will simply not show up when you put them in, which is exactly the problem you seem to have. Find out who made the drive, and look at their website for more info, or try a range of cheap DVD-Rs and see if any work. Back to the list ****** Apache and PHP filesize upload limits Q:: I have a customer on my server who is unable to upload files larger than 500k, yet he has set the /etc/php.ini directive upload_max_filesize = 10M. This should have allowed him to upload his 2-4MB JPEGs without any problem, but he can't. Any file smaller than 500k uploads without a problem. Am I missing something obvious here? I've tried changing the number to 20M and it does the same thing. I know it's working, because if I bring it down to less than 512k it will block smaller files too. A:: PHP probably isn't the problem here. Apache has a limit of its own (LimitRequestBody) to safeguard your server from abuse, and the chances are that Apache is the culprit that's limiting you. I've seen this becoming an issue in Red Hat Enterprise Server 3 as they've set a 512k default. The directive can be found in /etc/httpd/conf.d/php.conf as LimitRequestBody 524288. Just change this to a number that suits your application and restart Apache. You should be good to go. Back to the list ****** Share folders with NFS in Fedora Q:: I just installed Fedora on a used PC. Since I already had Linux installed on another system, I planned on mounting the /var/spool/up2date folder from the first one on the second so I won't have to download update files twice. But after looking at the Red Hat manual, man page and the HOWTO page, I'm still unsuccessful. Here's what I did.
On the server I put in the /etc/exports file the line --- /var/spool/up2date 192.168.1.12(ro,sync,no_root_squash) ,,, and in the /etc/hosts.allow file --- ALL: 192.168.1.12 ,,, On the client side I put the following in /etc/fstab: --- 192.168.1.11:/var/spool/up2date /server/var nfs soft 0 0 ,,, I tried creating the directory /server and then /server/var, without luck. The error message that I had was, 'Failed server is down'. I tried again by disabling the firewall on the server and that time I had an RPC timeout error. I did notice that on the server, rpc.mountd and portmap are running but not rpc.nfsd. Could you show me how to properly configure NFS on both machines so I could share the folder, and how to properly configure the firewall? A:: Your /etc/exports file is correct, so you should be able to do --- /etc/rc.d/init.d/nfs start ,,, If that does not work, verify that you have the packages pertaining to being a Network File System server installed. You can review /var/log/messages to establish exactly why the NFS server failed to start, though with Fedora it should just work out of the box. You can verify which RPC services are running with rpcinfo -p, which will need to list nfsd before you can mount it on the remote system. To answer your question about the firewall, if the system is on your internal network, it will be safe to leave the firewall down, assuming you trust everything within your network. For a system that's outwardly accessible, you will have a separate outside interface which you can limit connectivity through, and open the inside one. Often it can be difficult to permit NFS through a firewall, but by using rpcinfo you can get a good idea of what ports need to be opened for NFS to function. I would really recommend against opening NFS on the internet as it is an unencrypted protocol and transports your data in plain text.
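Putting that together, a typical working sequence looks something like the following (the server address 192.168.1.11 is an assumption here - substitute your server's real IP):
---
# on the server: start the NFS service and confirm nfsd has registered
/etc/rc.d/init.d/nfs start
rpcinfo -p | grep nfs
# on the client: create the mount point and test the mount by hand
mkdir -p /server/var
mount -t nfs 192.168.1.11:/var/spool/up2date /server/var
,,,
Once a manual mount succeeds, the fstab entry should work on the next boot too.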
Back to the list ****** Running 32-bit Linux distros on 64-bit CPUs Q:: I have the latest AMD64 processor, and every version of Linux I have tried with it so far gives the message 'out of sync' after the welcome screen. I presume that is because Linux is a 32-bit operating system and is incompatible with the 64-bit machine. Is that true, especially of Sun Java Desktop System 2? The Solaris system on the website mentions 64 but when I proceed with it, it only shows x86. FreeBSD has a download option for AMD64, but the only option of payment is a credit card (which I don't possess). Could you please help, as I'm desperate to have Linux running and am so fed up with Windows XP Pro that I feel like chucking the whole thing out the window. Do I have to go to the extent of purchasing a second x86 machine (which I presume 64-bit isn't) and installing Win98 on it, which will at least make boot disks? A:: AMD64 processors are backwards-compatible with x86 binaries, so you can run a standard x86 Linux distribution on them. You can always download AMD64 versions of distributions such as Debian, SUSE and others and run 64-bit binaries on the system. Linux will run happily on both 32-bit and 64-bit processors, although you should be aware that to run 32-bit binaries, a 64-bit distribution will need some compatibility libraries to allow them to run. 'Out of sync' sounds like a video problem, so you may want to try to force a text install, or specify a video resolution at boot time. Distributions will have help screens when they boot up to indicate how to do this. I've had success with SUSE for AMD64, as well as Debian, so I would be interested to hear what progress you make. Back to the list ****** Cannot boot into Windows XP after installing Linux Q:: Before I start, my system specs are: a Shuttle XPC, Athlon 2400XP, 1024MB of RAM, DVD/CD-RW, a 120GB hard drive and a Leadtek 6600GT graphics card with a 19-inch Sony monitor.
Oh, and a BT Voyager 105 USB ADSL modem. I recently ran Simply MEPIS Live Distro on CD, being forced to install MEPIS on to the same hard disk as Windows XP following the instructions given using QParted. I split it into 105GB for XP, 14GB for MEPIS, 1GB for swap, and 4GB-ish for home. Yes, I know there are risks, but to be honest it all seemed to go quite well... However, instead of offering me the dual boot option that it should have, my PC was automatically booting into MEPIS with a 2.6 and 2.4 option. This in itself isn't such a big deal, and I could still access all the document and graphic files on the Windows partition. But when I tried to change the Active partition using QParted in Linux and boot with XP, I received an 'NTLOADER' or 'NTFSLDR' (or something similar) error, and a message asking me to press a key to reboot -- this effectively put the machine into a perpetual loop. Suffice it to say that I can no longer boot into my XP OS using this hard disk. Even trying to reinstall Windows proved fruitless because it kept wanting to reformat the drive. Luckily I have a slightly older XP installation on another hard disk (30GB), and swapping the drives has allowed me to get internet access to find help. Using Norton SystemWorks and XP's own CHKDSK facility I've at least managed to get the original 120GB drive recognised as an E: drive now, and I do have full access to it. My problem now, however, is that I have my E: drive hooked up as a slave to my DVD/CD-RW, and my C: drive is acting as my main drive. My E: drive has just about everything I now need, while the C: drive is relatively old and out of date, so how do I swap them back again? I have tried to swap the two hard disks, but E: is no longer recognised as a System/Boot drive and thus XP goes through to the Windows XP logo and just hangs or restarts over and over. Ideally, I would like to have a dual boot OS option so I can learn and 'play' with Linux without losing my XP partition.
A:: Wow, what a journey: I commend your persistence! You should be able to select the operating system you want to boot, Linux 2.4, 2.6 or Windows XP, from a boot loader such as Grub or LILO. Grub and LILO boot Windows in completely different ways, so once you have figured out which loader you are using, you may want to review the dual-boot documentation at www.tldp.org. Booting a Windows XP install from Grub can be done with a simple
---
rootnoverify (hd0,0)
chainloader +1
boot
,,,
at the command line. You may also be able to boot into your Windows installation using a Windows boot disk and run fdisk /mbr to reinstall the Windows boot loader on to the disk. I would always advocate making sure you have good backups of data prior to installing Linux or repartitioning your drives. It's rare for things to go wrong, but you can be sure that when they do, they really go wrong. As you've got access to the disk and the NTFS filesystem is intact, you should be able to recover the system using Windows tools and rebuild the Linux boot loader configuration to dual boot the pair. Once you have the boot loader working happily, you can physically swap the disks so that the drive currently E: under Windows becomes C:. Back to the list ****** US Robotics 56k modem on Linux Q:: I've used Linux SUSE 9.2 for about five years, with a US Robotics 56k modem. I've just installed Homecall broadband from Homecall.co.uk. Installing on Windows went without any trouble but how do I install it on SUSE Linux? Homecall's help desk directed me to http://speedtouch.com/support.htm for a driver, which I downloaded and unzipped. That gave me KQD6_3.012 and ZZZL3.012. So are these files drivers, and how do I load them? My modem is a Thomson SpeedTouch 330. A:: Detailed documentation on using the SpeedTouch modem with SUSE 9.2 can be found at www.linux-usb.org/SpeedTouch/suse.
Yes, the two files you downloaded are the correct files for use with your system, but you need to move them to the appropriate location, as per the HOWTO. Reader John Gregory has also sent in this advice about configuring SpeedTouch modems: "The easiest route to take is with SpeedTouchConf. http://speedtouchconf.sourceforge.net will give chapter and verse on what to download, where to get it and how to install it. I have used it with a SpeedTouch 330, WinXP and three flavours of SUSE (2.4 and 2.6 kernels). At the start of the connection process a driver file is loaded into the modem by the OS. This only happens once unless you reboot or hot-unplug the modem. The driver file is the same as used for Windows but must be the correct version for the modem." Thanks John! Back to the list ****** Can't get access to USB drives as normal user Q:: I have a couple of pen drives for transferring files between my Windows laptop and my Mandrake 10.1 PC. I can get read access to the pen drive as user but whatever I do, I cannot get write access to the removable drive as a user, only as root. I cannot change the group to my sharing group or my user, and owner remains as root. Even logged in as root I am told that I do not have enough permissions to change the group/ownership of mnt/removeable. When I change permissions it is still inaccessible when I return to user and ownership is returned to root. I have also been unable to transfer it to a sharing group. I have tried running the partitioner and allowing write access to all users but this also fails. I understand that if I install SUSE 9.2 I will get full read/write access but is there a less dramatic answer? A:: You can allow users to mount and write to devices by modifying the /etc/fstab file. A typical fstab entry that permits user mounting of files is: --- /dev/sdb1 /mnt/usb-key ext3 defaults,user 0 0 ,,, The user option allows the device to be mounted by a non-root user, who can then write data to the USB device. 
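One caveat: a pen drive shared with a Windows laptop is almost certainly FAT-formatted, and on a vfat filesystem ownership cannot be changed with chown at all - it is fixed when the drive is mounted. An fstab entry along these lines (the device name /dev/sda1 and the uid value 500 are assumptions for your system; check yours with dmesg and the id command) gives your user full read/write access:
---
/dev/sda1 /mnt/removable vfat noauto,user,uid=500,umask=022 0 0
,,,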
Remember to umount the device before unplugging it from the machine, otherwise you risk potential data corruption due to files not being written completely. Back to the list ****** A faster alternative to KDE and Gnome - IceWM Q:: I have been wondering for a while if there is a faster alternative to KDE or Gnome for my Fedora machine (Athlon 1.2GHz, 256MB). So I decided to try out the IceWM window manager, because I wanted something that was vaguely similar to what I was already using, with a task bar, launcher button and menu. It wasn't all plain sailing, but I did eventually get there. The first stumbling block was getting IceWM to start at all. Eventually I found I had to put an .xsession file in my home directory (and make sure it's executable). Here is what I use, so other readers don't have to struggle like I did:
---
# run profile to set $PATH and other env vars correctly
. $HOME/.bash_profile
# setup touchpad and the external mouse
xset m 7 2
xinput set-ptr-feedback 0 7 1.9 1
# run initial programs
uxterm &
# start icewm, and run xterm if it crashes (just to be safe)
exec icewm-session || exec xterm -fg red
,,,
The next problem was to figure out how to get any extra items on the launcher menu. I found that you can do this by editing files in the .icewm directory (a sub-directory of home). The main file to edit is menu (obvious really, when you think about it). The nice thing about this is you don't have to reboot your machine to get the menus active - they are immediately there when the file is saved. My one and only gripe, which is stopping me from using IceWM all the time, is that because it's so quick my mouse is too fast to control easily. My guess is that there is some kind of parameter to change in the xorg.conf file or in the nvidia-config file. Could you tell me what needs changing to get my mouse under control? A:: Thanks for sharing your discovery.
IceWM is a great little window manager, although I prefer to use Sawfish as it seems a little more solid in general use. Most login tools such as gdm and kdm will allow you to select a window manager, so you won't need to edit your .xsession file manually to change to the one you want. For anyone looking for a slightly less minimalist window manager, Enlightenment DR17 is looking pretty crazy these days (check out www.enlightenment.org). If you've got the time to compile the dozen or so support libraries for it, it's a really slick system, although it is still in development. A quick machine is recommended for Enlightenment, but if you turn off many of the bells and whistles it will work happily on older boxes. Now to your question. You can set the mouse speed using xset, such as --- xset m 5 1 ,,, You can also set a 'Resolution' value in the Mouse section of your xorg.conf to adjust the speed of the mouse. In both cases, it's a trial and error situation where you have to tune your settings as you go. Of course, you will probably only want to change one at a time otherwise you'll drive yourself crazy! Back to the list ****** What a turn-off Q:: I have installed SUSE 9.2 and have a problem with switching off my Acer laptop. After I select Turnoff from the KDE menu everything goes fine until I receive the message, 'The system will be halted immediately'. After that the system reboots instead of switching off. I have the same situation when trying to use the command line by typing in poweroff as root user. What's surprising is that the problem doesn't exist when I'm using the battery. I didn't have this problem with the Knoppix 3.6 Live CD or Yoper 2.1.0-4 either. If I try the boot options --- apm=off acpi=off ,,, it's the same story. When the computer is shutting down at the end of the process I again get the message, 'The system will be halted immediately', but straight afterwards it reboots anyway.
Giving the line --- apm=off acpi=off ,,, makes the machine hang instead. I get: 'The system will be halted immediately Master Resource Control: runlevel 0 has been reached Skipped services in runlevel 0:/'. I installed SUSE in safe mode as well. When I tried to turn off the computer this message appeared: 'The system will be halted immediately Master Resource Control: runlevel 0 has been reached Skipped services in runlevel 0: stty: standard input: unable to perform all requested operations'. I think I've tried everything, including upgrading the BIOS (which supports ACPI) and installing different kernels. SUSE is an excellent distro but with this kind of problem I wouldn't be keen to stick with it. A:: Please don't give up on it yet! I think we can help. Disabling both APM and ACPI is probably not a good idea, since it will disable all power management features. Your laptop is probably using ACPI rather than APM for power management, though it depends on the age of the machine, and ACPI in Linux has its fair share of bugs and problems with certain BIOSs. I've known ACPI to allocate IRQs of 191 to NICs and other crazy stuff, which doesn't make the system very stable. Occasionally tweaking BIOS options will help, but as it works with Knoppix and not with SUSE, I'd err on the side of caution and avoid breaking anything that isn't actually broken. I would suggest instead that you review the boot logs from your system with dmesg and inspect what the kernel finds with respect to your ACPI system. It's quite possible that it worked under Linux 2.4 but broke in the 2.6 kernels. SUSE has kernel updates available now for its 9.2 release, so you may want to give one of those a go and see if you have the same problem. Another place to try is www.linux-laptops.net, where you can find out how other people have installed Linux on to the same laptop. 
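To review the boot logs as suggested, a couple of starting points (output varies from machine to machine, so treat the commands as a sketch):

```shell
# Count kernel boot messages that mention ACPI; 0 suggests ACPI is disabled
# or the message buffer has since been cleared. '|| true' stops an empty
# match (or an unreadable buffer) from aborting the pipeline.
acpi_lines=$(dmesg 2>/dev/null | grep -ci acpi || true)
echo "ACPI-related boot messages: $acpi_lines"
# Drop the -c to read the messages themselves:
dmesg 2>/dev/null | grep -i acpi || true
```

Odd IRQ assignments, like the 191 mentioned above, usually show up in this output too.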
It's worth remembering that many distributions patch their kernels, so if it works with Fedora or Mandrake, it doesn't mean it will work with SUSE. You can always post a bug report with SUSE and find out if they have a workaround or a fix for it. Back to the list ****** How to install software from a bz2 file Q:: I'm 12 years old and have a reasonable computer in my room, which dual boots Mandrake 10.1 and XP. But the only computer with access to the internet is the family XP (I'm not allowed to install Linux or boot from a Live CD). My computer is unlikely to have the net for a while, and the two machines are a long way away from each other so sadly there is no network or shared connection. Now, if I try to download the source of a file and save it to a USB drive it sort of works, but when I come to extract it on Mandrake it says, 'Error this is not a .bz file' (or whatever type it was - the same happens for RPMs), whichever way I try to extract it - in a terminal or with Ark. I think Windows mucks it up when I download it but I'm not sure. Please help as there is only a limited amount on your DVDs. A:: If you have the full disc set for Mandrake 10.1, there is loads of software available which you can install and play with. If you want to install software on Mandrake, I would suggest you start out by downloading Mandrake RPMs and installing them on the system, as anything that is bz2-compressed is probably source code and can take some effort to compile. If you want lots of Linux apps, www.linuxemporium.co.uk is a great place to get Linux software cheaply. Trying a few different Linux distributions is a fun way to get open source experience without giving yourself too many headaches. Back to the list ****** How to secure a RHEL server Q:: I have a standard Enterprise Linux 3 server with no control panels and no hardware firewall. I only run an HTTP web server that requires MySQL and Sendmail for outgoing mail via PHP. There is only one user on the server: me. 
I have not touched the default install apart from disabling VSFTP. All data is backed up daily. What I'd like to know is where the system is vulnerable, where any attack is likely to target and which areas of my system I should be monitoring closely. This is how I see things from a security point of view: Shell login is via SSH, and FTP is only available via SFTP. The authentication system PAM ensures that root cannot log in directly (I think). Portsentry stops port scans. The standard daily cron job shows me the logwatch activity and I take note of all the error logs such as failed attempts to log on (I'd really like to see a list of successful logons), plus disk space used. Red Hat up2date keeps my software fully patched. I don't have a firewall and see no reason for one, or iptables, although I could be very wrong on this one. A:: You've asked a host of really great questions - I'll try to answer them all briefly. To see what else on your system besides just HTTPd and Sendmail is 'exposed', we need to look at what makes up what I call your network profile. To see this for yourself, run netstat -antp to see which TCP services/ports are bound (and in your case exposed) as well as which binaries are associated with each: ---
# netstat -antp
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address     Foreign Address    State        PID/Program name
tcp        0      0 0.0.0.0:10000     0.0.0.0:*          LISTEN       1018/perl
tcp        0      0 0.0.0.0:110       0.0.0.0:*          LISTEN       16577/xinetd
tcp        0      0 0.0.0.0:143       0.0.0.0:*          LISTEN       16577/xinetd
tcp        0      0 0.0.0.0:111       0.0.0.0:*          LISTEN       1809/portmap
tcp        0      0 0.0.0.0:80        0.0.0.0:*          LISTEN       918/httpd
tcp        0      0 0.0.0.0:21        0.0.0.0:*          LISTEN       875/vsftpd
tcp        0      0 0.0.0.0:22        0.0.0.0:*          LISTEN       16351/sshd
tcp        0      0 0.0.0.0:25        0.0.0.0:*          LISTEN       18632/sendmail: acc
tcp        0      0 0.0.0.0:443       0.0.0.0:*          LISTEN       918/httpd
tcp        0     48 69.20.9.105:22    64.39.0.38:32910   ESTABLISHED  19647/0
,,, As you can see, there are around eight or nine different daemons binding to ports on a stock system. 
Now compare this with a remote portscan of your server using a tool like nmap (eg nmap -sS <IP>) to see what the world sees as your network profile. Remember to turn off Portsentry on your server or it will block you if you try a portscan! Your first layer of security is the network and/or iptables, so yes, I would look again at your decision not to have it. Iptables will block anything bad getting to the kernel. If a vulnerability is discovered in the Linux networking stack you could be vulnerable without iptables. It is excellent at preventing malformed or invalid packets from reaching your server. One big problem in this day of web forums, blogs and other cool web apps is back-door bugs in non-vendor-supplied application layer packages such as phpBB and VBulletin. These cool web apps offer themselves out via your daemons and expose you to all kinds of bugs that are not fixed through anything you have set up on your system. In fact, this is probably the most successful 'flank attack' that we see these days with web hosting customers. Attackers exploit some weak code in phpBB or VBulletin, which gets them local user access as the apache user, and then they're free to upload and launch local exploits or strong-arm attack tools to try for escalated user privileges (ie root access). All I can say is that if you choose a package like phpBB or VBulletin, you should track the bugs and patches very closely. The SFTP/SSH root restriction is actually set in the /etc/ssh/sshd_config file, but you're right, the root user access control can also be controlled at the PAM layer. Portsentry is a good outer layer warning and lockdown system that has saved many an insecure box. Try to combine manually going over your logwatch emails with tools such as chkrootkit, regular netstat and md5sum baseline comparisons. Red Hat's up2date is an essential tool in an enterprise server environment. 
If you're away for a few days the server can patch itself against most big vulnerabilities until you can get to manual patching. Back to the list ****** Managing email user accounts Q:: Do you know of a decent user email account manager to allow someone with less technical knowledge to administer email accounts via a web-based control panel? We've set up sendmail on our server and can happily add accounts ourselves, but they want control of this. A:: This really depends on what you want to do. If you want to make a single administrator responsible for every user on every domain or have an administrator for all the users on each domain, then you can use Webmin. Webmin comes pre-installed on all Rackspace's servers, but if you want to load this elsewhere you can download it from www.webmin.com. If you would like to give each user control of their own account, I can recommend two options. On the free software side there is Usermin (from the makers of Webmin). This allows users to change their passwords, set up mail forwarding, configure SpamAssassin and set up fetchmail, and that's just the mail aspect of Usermin. If you're looking for an officially supported commercial product, then take a look at Plesk. Plesk is a full virtual hosting control panel, and you can delegate control to users for many things, including mail. Bear in mind that running Plesk on Linux is less like running Linux and more like running a "Plesk Appliance". You cannot fully control the underlying Linux back-end like you used to. Back to the list ****** Setting up a mail server with Courier, Sendmail/Postfix and Fetchmail Q:: I want to set up an IMAP mail server so that the wife, children and me can log on to any of our three PCs and have the same email format, whether it be Linux or Windows. I have a SUSE Server set up for File > Printing that the three PCs connect to. I run a hosted website, which does my main mail, and I also get mail through my ISP. 
What I want is for all mail to be dumped on to my server so that it can be read via IMAP, and to remain there unless deleted. The problem I have is understanding how I link Sendmail/Postfix, Courier IMAP and Fetchmail together. I understand what each bit does, just not the type of mail service I need to run - SMTP, POP or IMAP? A:: I would recommend Postfix. Each user can collect their mail from their home directory with their mail client. IMAP will be a good choice of mail server if your family will be moving from PC to PC, as everything is always stored on the server, and because the server is local to the clients it will be just as quick as POP. You can also easily tie a web-mail tool into IMAP, using OpenWebMail or IMP, which will give you mail access through a browser if necessary. Fetchmail can be configured to inject mail through your local mail transfer agent, which I'd suggest should be Postfix, and deliver it to each user. If there are separate mail accounts, each can be sent to a different user, or specific users can have mail sent to their mailbox if there is a combined account, such as is available from ISPs. Back to the list ****** Cron jobs to back up servers Q:: As part of my day-to-day work, I need to back up MySQL databases from various servers (both local to the office and external). I have created cron jobs for each server to be backed up at night - some servers have only one or two databases, but others have hundreds. My cron jobs are simple bash scripts. They take the name of the database being backed up and append the date and time to create a unique filename, then use mysqldump to retrieve the data: ---
mydatabase="mydatabase `date`.sql"
filename=${mydatabase// /_}
mysqldump -h mydbaseserver1.co.uk -u username -ppassword mydatabase > /var/backups/sqlbackup/mydbaseserver1/$filename
,,, This all works perfectly. If a database fails to be backed up, the cron job sends the error report to me by email. 
However, the email does not tell me which database failed. As each cron job contains backups for more than one database, I could get a confusing email like this: ---
mysqldump: Got error: 2013: Lost connection to MySQL server during query when retrieving data from server.
mysqldump: Got error: 2013: Lost connection to MySQL server during query when retrieving data from server.
mysqldump: Got error: 2013: Lost connection to MySQL server during query when retrieving data from server.
,,, The subject of the message lets me know which cron job was running, but apart from going into the directory and looking through the files for backups that haven't been created properly (far too laborious and time-consuming!) there is no way to identify which databases failed. It does not really seem practical to create single cron jobs for every database as there are hundreds of databases. Is there a way that the error email I get from the cron job can include the database names of backup jobs that have failed? A:: Yes, it is possible to configure mysqldump to give more verbose output using the -v or --verbose switch. This should report the status of individual database successes and failures. The mysqldump output should be emailed to you by cron. As an extra level of intelligence you could set up a local mail filter on your workstation or procmail on the server to parse for a keyword synonymous with a failed database backup and only bring it to your attention if a failure has occurred. To have cron mail you the results, add a MAILTO=username to the top of your /etc/crontab or an individual user's crontab. An alternative to mysqldump is mysqlhotcopy. Many administrators prefer it because of its supposedly superior locking and better reliability. You can find information on mysqlhotcopy from MySQL directly at http://dev.mysql.com/doc/mysql/en/mysqlhotcopy.html. 
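Another approach is to have the loop itself collect the failures, so the cron mail names the culprits directly. This is only a sketch: the host, credentials, database list and backup directory below are all placeholders to be replaced with your own.

```shell
#!/bin/bash
# Dump each database in turn; remember the name of any dump that fails so
# one summary line reaches the cron mail. All values here are placeholders.
host=mydbaseserver1.co.uk
user=username
pass=password
backupdir=${BACKUP_DIR:-/tmp/sqlbackup}   # the question used /var/backups/sqlbackup
mkdir -p "$backupdir"

failed=""
for db in mydatabase anotherdb; do
    file="$backupdir/${db}_$(date +%Y%m%d_%H%M).sql"
    if ! mysqldump -h "$host" -u "$user" -p"$pass" "$db" > "$file" 2>/dev/null; then
        failed="$failed $db"
    fi
done

# cron mails anything written to stdout/stderr, so this reaches your inbox.
if [ -n "$failed" ]; then
    echo "Backup FAILED for:$failed"
fi
```

Combined with MAILTO in the crontab, the message then lists exactly which databases need re-running.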
Back to the list ****** Connecting to OpenVPN server from outside world Q:: I have set up an OpenVPN server on one of my internal machines (a Linux machine) and have a problem talking to it from the outside world. I've tried everything, but I cannot get a connection to the damn server! I have no problem connecting to the VPN with the same configuration from an internal IP address, but as soon as I try to connect from outside my LAN, via my WAN interface, I have difficulties. My LAN is connected to the net by a Zoom ADSL X3 modem, router and firewall. I have made sure to allow 1194 UDP port forwarding to the local IP of the server (using the Virtual Server options). The Linux server does not have a firewall. Even when I run the server in a DMZ (totally open on the web) configuration it fails! That leads me to believe it is the VPN configuration that's messing up somewhere. The other concern I have is that the router operates automatic DHCP for the LAN - I wonder if this could be the problem. The thing is, I don't know how to assign fixed IPs on this router. I have spent days trying to sort this out and have completely lost hope. A:: The first step in this process is to use a tool such as tcpdump on the Linux box to see if it even receives packets coming from outside the network. If you have it open on the internet, and it doesn't receive any packets, it must be an issue with the router that you have in place. As you can connect internally, I would suggest that OpenVPN is working and configured, although it would be worth checking that OpenVPN is listening on all necessary IP addresses for new VPN traffic. As the router is basically NATing the connection through, it shouldn't make any difference. You really need to get down to the most basic configuration, send some packets and see if they come through. 
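For instance, a basic capture might look like this (eth0 and the standard OpenVPN port are assumptions; tcpdump needs root, and the timeout and packet count are only there to keep the example bounded - for real diagnosis you would let it run and stop it with Ctrl-C):

```shell
# Watch the external interface for incoming OpenVPN packets while a client
# connects from outside. -n skips DNS lookups, -c 5 stops after five
# packets, and timeout stops it sitting forever if nothing arrives.
timeout 3 tcpdump -ni eth0 -c 5 udp port 1194 || true
```

If packets appear here but the VPN still fails, the problem is in the OpenVPN configuration; if nothing appears, look at the router or the ISP.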
It may be that your ISP is not permitting UDP traffic on that port; you will have to call its technical support to find out whether it is blocked. Many ISPs block IPSec for home users; however, OpenVPN is obscure enough that you'd think they'd not care about it. Back to the list ****** Apache redirecting Q:: I was first introduced to Linux a couple of years ago when I started using Plesk. Over time I wanted to do more and more things that Plesk wasn't geared to handle, not at all because it's a bad product but because I have some customers with really weird and diverse requirements. What I'm doing now is setting up another server without Plesk and trying to do all the things Plesk was doing manually. I've learned loads by doing this so far but there are some things I still need to address. At the moment I'm focusing on Apache and my question is simple but I can't find an easy answer. I'd like to be able to point a certain directory on a customer's site (call it http://domain.com/secure) to an entirely different website, which is hosted at their premises for their own internal policy reasons. The domain they are using is http://secure.domain.com. secure.domain.com is a new host with the appropriate DNS pointing to it. All the links in their site now point to the new location but they are concerned about people who have bookmarked the old page. Obviously I can't set up secure.domain.com in my Apache config as I don't run it. The client said they could do it on their web page but don't want to take the traffic and overhead. In all honesty I don't know enough about it to discuss it with them properly. A:: I'm convinced that there would be almost no extra load on Apache by having a web page doing the redirecting as you suggested, but if your customer really wants Apache to do the work at a lower level it's dead easy to do. 
Try adding the following into the virtual host configuration block on the server: --- Redirect permanent /secure http://secure.domain.com ,,, There are several other options for this type of redirect such as temp, seeother and gone. The Apache documentation has a good explanation of the differences between them but essentially it comes down to the HTTP code returned by Apache. With each of these, Apache on your server will give the browser the new URL and will not stay involved in the connection, which should suit your customer's policy. Your customer may also want to greet traffic arriving via this redirect with a note telling visitors that the link has changed and that they should update their bookmarks. Back to the list ****** Find an ADSL modem that works with Linux Q:: ADSL is hitting even rural areas of northern Scotland these days, hence my need for a suitable Linux-compliant wireless modem/router. I have identified a number of potential devices without knowing their Linux compatibility, which apparently depends upon certain chipsets: Netgear DG834GT complete with PCMCIA transceiver. Belkin F5D7632UK4. Linksys wireless 4-port ADSL Gateway, WAG54-UK. 3com Wireless ADSL modem/router complete with PCMCIA transceiver, 3CRWE754G72-AGBUN. D-Link DSL-904 Wireless ADSL modem/router with 802.11g PCMCIA card. Your advice on the devices that I've listed here, or any others that are suitable, would be much appreciated. A:: The sure-fire way to get a DSL modem and router that works with Linux is to find one that will terminate your PPPoE or PPPoA session and hand off plain old Ethernet to your network. Even if it does not handle PPPoE itself and bridges it on to Ethernet, your Linux system can handle PPPoE out of the box very easily. 
A device such as the Zoom X6 (www.zoom.com/products/adsl_overview.html) will do everything you need, and provide wired and wireless Ethernet access to the network. The D-Link DSL-904 on your list is also a good choice, but check that the card will work with Linux before you buy it. A quick search on Google will locate for you the appropriate Linux kernel configuration required to make it work. Back to the list ****** Remove old version of Apache before installing new one Q:: I need to install Apache and mod_ssl, but the tutorial says that first I have to get rid of the Apache version that is there already. It was put there by hand from source. Given that I can probably find the install directory (and then delete it), what else is it that makes Linux aware of an app, in the same way that Windows has a registry where you can clear stuff from? I have managed for the last two years by only uninstalling stuff I put on my Linux box with apt, so there is loads of junk that I need to clear! A:: When you install software from source, it generally installs into /usr/local, unless you specify an alternate location with the --prefix switch. Apache, for example, will install into /usr/local/apache, so simply deleting this directory will purge Apache from your system. Linux has no registry, although for services that start at boot time, /etc/init.d contains the scripts that are used when switching between various run levels. Software installed from source generally will not change anything in /etc/init.d, but will often distribute sample init.d scripts that you can install manually. You can also often do a make uninstall from the source code directory, which should remove binaries, configuration files and libraries installed. However, not all applications provide this, so it requires a little brain-work to hunt code down and delete it manually. 
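When hunting files down, the package databases can tell you whether anything claims a given file; a quick sketch (the path is an example - substitute the binary you are curious about):

```shell
# Ask dpkg first, then rpm; if neither knows the file, odds are it arrived
# via a manual 'make install' and is safe to remove by hand.
file=/usr/sbin/httpd
owner=$(dpkg -S "$file" 2>/dev/null || rpm -qf "$file" 2>/dev/null || echo "not in any package")
echo "$owner"
```

Only one of the two tools will normally be present, depending on the distribution.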
It's often a good idea to use dpkg and rpm, depending on which distribution is used, to establish if a file is connected with a package, so you can see if 'httpd' is actually provided by a package or is just floating around. Unfortunately, since there is no record of things you install manually, if you install it in /usr and it mixes with package-based code, the best option is often just to force an install of the package over the top and clean up after it. Starting off on the right foot with a source install, such as installing it in /usr/local/<package>, is a good idea, so that once you're done with it you can just rm the directory. Back to the list ****** Which Linux distro should I use? Q:: I got a Pentium II computer recently from the London Freecycle group (www.freecycle.org), which I want to set up for my dear mother to use for the internet, instant messaging and OpenOffice.org. I also want it to be part of my planned wireless network. My question is, which easy-to-use distribution should I go for? I'm thinking Fedora or Gentoo because I heard that Fedora has excellent support for wireless cards, and Gentoo because it's optimised for your hardware. Although Gentoo is more difficult to install, Pentium IIs are supported, according to www.gentoo.org, and I could not find official Fedora system requirements anywhere. The hard disk capacity is 6GB, but this is upgradeable, and I imagine I'll be able to get the RAM up to at least 128MB from 32MB. A:: Gentoo is going to be horrible on a slow box, especially with such a small amount of memory, as compiling anything is going to take an age. I would actually suggest Debian. It's a great distribution for low-end systems, and it will run happily on the hardware you mentioned. You can either download a Debian netinst disc, which will download the required packages from the internet, or obtain the full set of discs and install it. 
www.debian.org has links to the various ISO images, as well as sites where you can buy a CD set. Back to the list ****** Fix ACPI on Linux Q:: I've been out of the Linux game for a bit, and decided recently to give Ubuntu 5.04 a try. I downloaded the 64-bit version for AMD processors (I'm running an A8N-SLI system). The install went very well, and I liked the distribution instantly. The problems began when I started installing drivers. I downloaded the NVIDIA chipset Linux drivers, and discovered I needed the kernel source to install them. I figured out how to download the source, and the drivers installed OK. But on reboot, I discovered my keyboard would not work. I was able to determine it has something to do with the Linux kernel and the BIOS ACPI timings. Disabling ACPI restores the keyboard functionality, but now the processor dynamic scaling does not work, and I'd rather not disable ACPI if I can help it. I read that there are kernel patches that might solve this problem, but I'm unable to decipher the instructions to patch the kernel. If you can list the steps necessary to do this, and possibly what patch to try, I will figure out the rest. A:: ACPI is always a lot of fun, especially with new motherboards that are not 100% supported by Linux. You may want to check out http://acpi.sourceforge.net and see if you can find your chipset in the mix. Often there are patches for specific boards, especially if they are popular. Another option would be to upgrade your kernel to the latest release, which is 2.6.11.9, although 2.6.12 will most likely be available once this is in print. If you have specific errors that the kernel outputs when the keyboard fails to work, these will help you establish what the cause of your problem is. You may also want to disable ACPI for IRQ assignment, but leave it running for everything else, which can be done with the pci=noacpi option. 
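A menu.lst entry using that option might look like this; the kernel and initrd names are purely illustrative, so copy the ones from your existing entry and just append pci=noacpi:

```
title  Ubuntu 5.04, kernel 2.6.10-5-amd64-generic (no ACPI IRQ routing)
root   (hd0,0)
kernel /boot/vmlinuz-2.6.10-5-amd64-generic root=/dev/hda1 ro quiet pci=noacpi
initrd /boot/initrd.img-2.6.10-5-amd64-generic
```

This leaves ACPI running for power management and frequency scaling while taking it out of interrupt routing.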
There are a number of Linux AMD64 lists that you could try, some of which are distribution-specific. Ubuntu does not seem to have anything AMD64-specific, but its forums are full of questions from people using 64-bit systems. Back to the list ****** Running Java programs on SUSE Linux Q:: I'm trying to install DVDRipper for perfectly legal reasons and it's driving me nuts! I'm using SUSE 9.2 and KDE. Here's what I've done, several times: 1. Copy the folder from the disc to my home directory. 2. Open Konsole. 3. cd to my home directory and to the program's directory that I copied from the disc. 4. Do the tar xzvf thing to the file marked .tar.gz. 5. Konsole lists the contents of the file as DVDripper.jar and README. 6. cd back to the location of DVDRipper.jar. 7. Try ./configure and get the message, 'No such file or directory'. This is not surprising as the .jar file appears to be an archive in itself. So I tried to tar xzvf the .jar file and got the message: ---
gzip: stdin has more than one entry, rest ignored
tar: child returned status 2
tar: error exit delayed from previous errors
,,, I have also tried to extract the .jar file manually with Ark. Where do I copy it to, and shouldn't it be configured and make installed first? A:: As DVDRipper is a Java archive, you don't need to ./configure or install it. Simply cd into the directory containing DVDRipper.jar and run --- java -jar DVDRipper.jar ,,, Back to the list ****** Installing a new kernel on Gentoo Q:: I am using Gentoo Linux 2004.2 and am installing gentoo-dev-sources for the 2.6 kernel. The instructions said to use --- make && make modules_install && make install ,,, at the command line to build the kernel and install it in /boot and set up symlinks. No other action is required. I am having a heck of a time getting emerge grub to work. The error message is: --- '/usr/sbin/ebuild.sh: line 55: local: command not found. !!! ERROR: sys-boot/grub-0.94-r1 failed !!! 
Function src_compile, line 55, exit code 127 !!! (no error message)'. ,,, I have tried the Gentoo forum and the drift seems to be that the kernel is not installed properly. Are there any additional steps? Everything went OK up to the emerge grub point. I can chroot and do /bin/bash. A:: From the research I did, the error you see indicates that you need to upgrade Portage on your system. The specific release of Grub you are trying to install most likely expects functions to exist that are only in the most recent version of Portage. The process of compiling a kernel you mention is accurate, though you'll have to manually edit your lilo.conf or menu.lst files to ensure that your boot loader notices it. We like to modify the filenames of kernels, so rather than just bzImage we use vmlinuz-2.6.11.9 to indicate which version it is. Back to the list ****** Fixing slow Linux servers Q:: I just got a new dual Intel Xeon Linux machine running Red Hat Linux 3.0ES, MySQL 3.23.58 and PHP 4.3.2 about two months ago. I use it to process and manipulate data, but it seems very sluggish most of the time. Here is a recent top output. I have a feeling that the problem lies in the extremely high iowait, but I'm not a server pro so I could use some help diagnosing my problem. 
---
10:03:09 up 12 days, 20:07, 1 user, load average: 4.87, 3.76, 3.03
73 processes: 72 sleeping, 1 running, 0 zombie, 0 stopped
CPU states:  cpu    user   nice  system  irq  softirq  iowait   idle
             total  25.6%  0.0%   6.8%  0.0%    1.6%  340.4%  24.4%
             cpu00  20.7%  0.0%   1.9%  0.0%    1.9%   65.3%   9.9%
             cpu01   1.9%  0.0%   1.9%  0.0%    0.0%   93.0%   2.9%
             cpu02   1.9%  0.0%   2.9%  0.0%    0.0%   91.1%   3.9%
             cpu03   0.9%  0.0%   0.0%  0.0%    0.0%   91.0%   7.9%
Mem:  1027996k av, 1012136k used, 15860k free, 0k shrd, 35760k buff
      700900k actv, 132820k in_d, 14184k in_c
Swap: 1052248k av, 125840k used, 926408k free  739172k cached
  PID USER    PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM   TIME CPU COMMAND
21366 ishop2   16   0  6016 3544   576 S    22.7  0.3 130:01   2 php
31523 root     24   0  2336 1992  1196 D     1.9  0.1   0:10   3 sendmail
 2974 mysql    15   0 78792  61M  1084 S     0.9  6.1  4665m   0 mysqld
31488 ishop2   15   0  4276 3276   772 S     0.9  0.3   8:49   1 php
  753 ishop2   23   0  1244 1244   912 R     0.9  0.1   0:00   0 top
    1 root     15   0   512  472   452 S     0.0  0.0   0:12   3 init
    2 root     RT   0     0    0     0 SW    0.0  0.0   0:00   0 migration/0
    3 root     RT   0     0    0     0 SW    0.0  0.0   0:00   1 migration/1
    4 root     RT   0     0    0     0 SW    0.0  0.0   0:00   2 migration/2
    5 root     RT   0     0    0     0 SW    0.0  0.0   0:00   3 migration/3
    6 root     15   0     0    0     0 SW    0.0  0.0   0:00   0 keventd
    7 root     34  19     0    0     0 SWN   0.0  0.0   0:00   0 ksoftirqd/0
,,, I only have a single 73GB SCSI drive in the server now. My feeling is that the disk just can't handle the requests for info. Would a RAID solution improve my situation? During this top output the server is only running two of my processing scripts and the iowait is through the roof. I could get the exact same numbers if I were running seven or eight scripts instead of two. Any thoughts or suggestions would be appreciated. A:: Without getting on your box and looking closer using vmstat and iostat, I would recommend disabling hyperthreading by adding the "noht" option to the grub.conf kernel line: ---
default=0
timeout=10
splashimage=(hd0,0)/grub/splash.xpm.gz
title Red Hat Enterprise Linux (2.4.21-4.EL)
    root (hd0,0)
    kernel /vmlinuz-2.4.21-4.EL ro root=/dev/hda2 noht
    initrd /initrd-2.4.21-4.EL.img
,,, Save and reboot. Test it again and see how it's working. If this makes a difference you may want to also turn hyperthreading off in the BIOS. If the I/O problem is hard disk related, then moving to RAID may help. Again, this all depends on what type of I/O is causing the bottleneck. RAID 0 is very fast on both reading and writing, but offers no redundancy; in fact, you are more likely to lose data as you have multiple points of failure instead of just one. RAID 1 offers the best read speeds, but you're limited to using two physical disks - not great if you need lots of space, and with a whole disk wasted not the most efficient price-wise either. RAID 5 is very common and is easy as well as cost-effective to implement. It offers very good read speeds, but writes are a little slow. Back to the list ****** Merge two hard drives into a single third drive Q:: I have a Linux box running SUSE 9.3 Pro. It was installed with three hard disks in the following configuration, which was taken from fstab: ---
/dev/hda3  /        reiserfs  acl,user_xattr  1 1
/dev/hda1  /boot    ext2      acl,user_xattr  1 2
/dev/hdb3  /home    reiserfs  acl,user_xattr  1 2
/dev/hdb4  /tmp     reiserfs  acl,user_xattr  1 2
/dev/hdb1  /usr     ext2      acl,user_xattr  1 2
/dev/hdb2  /var     reiserfs  acl,user_xattr  1 2
/dev/hda2  swap     swap      pri=42          0 0
/dev/hdd1  /shares  reiserfs  acl,user_xattr  1 2
,,, hda is 40GB, hdb is 15GB and hdd is 4GB. I have now decided that, as I want to have a CD-writer and a DVD reader installed, it would be better to combine hdb and hdd on to a new 40GB hard drive that I have purchased especially. What I want to know is how I should go about installing, copying and configuring the new disk so that all the data on the existing partitions is copied correctly to new partitions on the new disk. A:: This should be a straightforward process. 
I would install the new hard drive, along with the three existing ones. Boot Linux in single user mode. This is done either by adding single to the kernel parameter line in your lilo.conf or grub.conf, or by typing linux single at the LILO boot prompt. Alternatively, press E on the Grub splash screen, add single after the line starting with kernel, press Enter to save and press B to boot. Then partition your new hard drive to your heart's desire, using fdisk or parted. Format the partitions with the filesystem of your choosing. Mount them manually and copy the files from the old partitions to the new ones - I would use the -a flag with cp: --- # cp -a /old-partition-mountpoint/* /new-partition-mountpoint ,,, or --- # tar cp /old-partition-mountpoint/* | tar x -C /new-partition-mountpoint/ ,,, Then it's down to modifying /etc/fstab to point to the new locations and you're set! Back to the list ****** Allow LAN access but not internet access Q:: I needed to set up a PC so that access to and from the internet was stopped (DROP) while access to and from the local LAN was allowed (ACCEPT). I looked at a couple of useful tutorials on the web. I was successful with the following commands: ---
# iptables -P INPUT DROP
# iptables -P FORWARD DROP
# iptables -P OUTPUT DROP
# iptables -A INPUT -s 192.168.0.0/24 -j ACCEPT
# iptables -A OUTPUT -d 192.168.0.0/24 -j ACCEPT
,,, Great. However, I have two questions. First, when I reboot the settings are lost. They revert to a default of all ACCEPT and my local LAN ACCEPT rules have gone. How can I make the changes stay after a reboot? The second is a curiosity question. 192.168.0.0/24 refers to all devices on the subnet 192.168.0. I thought it would only refer to devices 0 to 24. I have checked that it does what the article says - 192.168.0.102 is covered by 192.168.0.0/24, and I am able to ping it on my LAN. I just do not understand why. 
A:: Many distributions have an /etc/init.d/iptables script which can be used to save your iptables rules for reload at boot time. As you didn't indicate your distribution of choice, you may want to check its iptables package and see what exactly it provides for you in terms of init scripts. As a last resort you can use iptables-save to save the rules, then use iptables-restore at boot time to load them again. The /24 means that the first 24 bits of the IP are for the network, and the last 8 are for the host. When a /24 range is defined, addresses 192.168.0.0 through 192.168.0.255 are included. You can find information on the use of CIDR or 'slash' notation for network addressing at http://en.wikipedia.org/wiki/CIDR. Back to the list ****** Change DocumentRoot in Apache to avoid 404 errors Q:: I am setting up an old machine to use as a web server internally on my office network. This is for design purposes before uploading sites to a web host. I am using Mandrake 9.0 and Apache/PHP and have it running OK. I can open the web server using 192.168.0.3:80, which opens the file at /var/www/html/index.shtml. I have replaced this file with my own index.html, which is fine. I would like to keep the files for the websites on /home and would like to know how to reference them. I have tried, temporarily, accessing a content management system at /home/mike/tmp/cinj152/index.php as a link but get a 404 error. What am I doing wrong? A:: The root directory for your web services is set by the DocumentRoot option in /etc/apache/httpd.conf. Changing this to /home/mike/tmp/cinj152/, rather than /var/www/html, will result in the functionality you need. You may also want to move your website into /home/mike/public_html; then you can simply visit http://192.168.0.3/~mike/ and get your site. Back to the list ****** Find out if a service has gone down on a remote machine Q:: I have a remote server thousands of miles away. Unfortunately, all I have is the bandwidth and the hardware.
Whenever things go wrong, I either have to pay extra or fix things myself. I am also running a software firewall, using iptables. When a service is unreachable, what's the best way to find out where the breakage is occurring? Secondly, can you recommend a good way of applying firewall rules while making sure my SSH session doesn't get dropped? A:: I'll start with your first question. Let's check for the most obvious cause: whether there's a process listening on the port we're trying to connect to: port 25, say. --- # netstat -vatnpu | grep 25 tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 3971/master ,,, That shows Postfix is running but is bound to the loopback interface. Loopback is unlike regular network interfaces in that anything bound to it is not accessible to the outside world, but is limited to the same machine. So that might be a point of failure. But what if the output suggested everything was OK on that front? --- # netstat -vatnpu | grep 25 tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 3971/master ,,, That indicates the daemon is listening correctly on all addresses. So let's check if the daemon is actually running healthily. We do this by initiating a Telnet connection from the same machine to the public IP of the external interface. Let's pretend it's 1.2.3.4. --- $ telnet 1.2.3.4 25 Trying 1.2.3.4... Connected to 1.2.3.4 (1.2.3.4). Escape character is '^]'. ,,, Exact output depends on the daemon's config. So now we know that the process is alive and kicking and that the daemon is listening on the correct address or addresses. One last thing we can do on the local machine is to sniff the interface that the daemon is supposed to be listening on; eth0, say.
You should look for packets in both directions: --- # tcpdump -vni eth0 tcp port 25 tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes 21:53:16.627942 IP (tos 0x10, ttl 64, id 4623, offset 0, flags [DF], proto 6, length: 60) 1.2.3.5.52056 > 1.2.3.4.25: S [tcp sum ok] 2918495501:2918495501(0) win 32767 <mss 16396,sackOK,timestamp 34318082 0,nop,wscale 2> 21:53:16.628093 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto 6, length: 60) 1.2.3.4.25 > 1.2.3.5.52056: S [tcp sum ok] 2929251633:2929251633(0) ack 2918495502 win 32767 <mss 16396,sackOK,timestamp 34318082 34318082,nop,wscale 2> ,,, These two packets are a SYN and a SYN/ACK message to and from the daemon on port 25 respectively. We could consider two more outputs. The first is where the daemon seems to reply with an address that is different from the destination address in the first packet, like this: --- # tcpdump -vni eth0 tcp port 25 tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 96 bytes 21:53:16.627942 IP (tos 0x10, ttl 64, id 4623, offset 0, flags [DF], proto 6, length: 60) 1.2.3.5.52056 > 1.2.3.4.25: S [tcp sum ok] 2918495501:2918495501(0) win 32767 <mss 16396,sackOK,timestamp 34318082 0,nop,wscale 2> 21:53:16.628093 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto 6, length: 60) 5.6.7.8.25 > 1.2.3.5.52056: S [tcp sum ok] 2929251633:2929251633(0) ack 2918495502 win 32767 <mss 16396,sackOK,timestamp 34318082 34318082,nop,wscale 2> ,,, You may ask why the daemon would send a packet back with a different address than the one it was contacted with. This can happen when you use source NAT incorrectly. If you looked through the output from iptables -t nat -L -vn | grep -E 'MASQ|NAT' you would probably find the culprit. The last possible output you might come across is where you can't see anything in tcpdump. That happens when your host is blocking access to that port.
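If you suspect that, the filter table itself will show it. As a sketch (it assumes the stock INPUT chain and uses port 25 as the example, matching the discussion above), list the chain with packet counters and rule numbers, then look for a DROP or REJECT rule whose counter climbs each time you retry the connection:

```shell
# Show INPUT rules verbosely (-v), numerically (-n), with rule numbers.
# A DROP or REJECT line mentioning dpt:25 whose pkts counter increases
# on every connection attempt is the rule blocking you.
iptables -L INPUT -vn --line-numbers
```

Run it once, attempt the connection, run it again and compare the counters.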
I assumed the client and server machine have adequate connectivity and that the server is reachable from the client machine, otherwise the answer could fill a book! To answer your second question, I've found that it's not uncommon to get locked out of a machine due to a hasty firewall command or a wrong sequence of commands. There are some precautions you can take. Where I'm implementing a firewall for the first time and need to set INPUT's policy to DROP, I schedule a timed service iptables restart as a safety net, in case I'm locked out just after adding all the ACCEPT rules. You need to be running inside screen if you would like to disconnect using the same terminal and then reconnect. You need to reconnect because the existing connection might still be healthy, as its packets are matching a rule with ESTABLISHED. The command is as follows: --- # iptables -P INPUT DROP && sleep 10m && service iptables restart ,,, Press Ctrl+A Ctrl+D to detach from the screen session, log out and reconnect. If you can still get in, reattach the screen session by typing screen -r and press Ctrl+C in that shell; this causes sleep to fail, so service iptables restart is never issued. We get the same outcome on a generic Linux install by using the iptables save/restore commands supplied in the iptables package. iptables-save is a utility that dumps the in-kernel iptables setup, in a format that iptables-restore understands, to STDOUT by default. As you might have guessed, prior to running the script or the DROP rule, we save the current in-kernel config to a file by regular redirection.
Do this by typing --- # iptables-save > ~/iptables-dump ,,, The same config would be instantiated in the kernel by typing --- # iptables-restore < ~/iptables-dump ,,, The previous process of issuing the iptables commands in a safe way could be repeated by typing --- # iptables-save > ~/iptables-dump && iptables -P INPUT DROP && sleep 10m && iptables-restore < ~/iptables-dump ,,, Or where we're running a script: --- # iptables-save > ~/iptables-dump && /path/to/firewall/script && sleep 10m && iptables-restore < ~/iptables-dump ,,, Back to the list ****** Synaptic access forbidden errors Q:: I'm setting up a computer for an absolute newbie. He lives very remotely and I will probably never actually meet him. Without him having anyone in his area who knows Linux, I want his computer to be set up to be as newbie-friendly as possible. Two questions then... Firstly, the Network Device Control interface shows eth0 and also the external modem, ppp0. This is what we will be using to initiate dial-up. At the moment, eth0 is at the top and therefore selected by default. How do I change this so that the modem is at the top of the list? I think there's a config file. Secondly, I want my friend to update through Synaptic as I have found up2date to be extremely temperamental. But Synaptic is having problems downloading files it recommends for updates. A lot of repositories like http://ayo.freshrpms.net are giving 'access forbidden' and similar errors. What's going on? It lists files that should be updated and then seems to time out or say they are not there or just give 'access denied' on download. I'm using Fedora - don't ask why! A:: The order of the devices in Network Device Control should not matter, although you will want to remove the default route from the Ethernet device before moving to the modem. /etc/sysconfig/network-scripts/ contains the actual boot time scripts for the various devices, so you can completely disable eth0 by changing the ONBOOT flag within ifcfg-eth0.
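As a sketch of what that file looks like (the device settings here are illustrative values, not taken from the reader's machine), the interface stays down at boot once the flag is changed:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 - example values
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=no    # 'no' stops eth0 being brought up at boot
```

After editing, ifdown eth0 takes the interface down immediately without waiting for a reboot.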
You can use the yum package manager to manage RPM-based distributions, although up2date should just work on Fedora. It sounds like Synaptic is having some problems locating the correct updates and downloading them; verifying each specific URL that it's trying, and figuring out what it's trying to do, is the best place to start. I would highly recommend using yum, as it is really easy to use and can be used to update a system very quickly. Back to the list ****** Best laid plans... Q:: Having bought my wife an MP3 player for her birthday, I'd hoped to be able to rip all the stuff under Linux so she could pick and choose what she wants to upload. Unfortunately, my setup seems to have decided that the extended partition holding my /root, /home and /mp3 partitions is formatted as a 'Linux extended' partition, so the XP install can't or won't see it. The hard drive is set up like this: --- hda1 XP (primary) hda2 /boot (primary) hda3 /swap (primary) hda4 extended (apparently 'Linux extended') into: hda5 /root (logical) hda6 /user (logical) hda7 /mp3 (logical) ,,, I thought that what I'd do is to change the format of the extended hda4 from Linux extended to some sort of extended Windows filesystem (FAT32, maybe). But that, it seems, would just screw up all the logical Linux partitions. So my current thinking is to dump everything after the XP/hda1 partition and start from scratch. Which would be something like: --- hda1 XP (primary) hda2 mp3 (primary, but formatted as FAT32) hda3 extended into: hda5 /boot (logical) hda6 /swap (logical) hda7 /root (logical) hda8 /user (logical) ,,, That would leave a primary hda4 for anything else. I'm presuming that it would be easier if the third primary extended drive were formatted as some sort of Windows format, but I could still use ext3 for the /boot and /swap directories, and reiserfs for the /root and /user. Do you have a better suggestion?
A:: There is no such thing as a 'Linux extended' partition type, but you will need to change your MP3 filesystem to FAT32 and modify the partition type so that Windows will pick it up. There are several utilities for Windows that allow you to read ext3 filesystems, but it's simpler to keep the MP3 partition as FAT32 and have Windows pick it up automatically. The extended partition isn't actually formatted - it's just a place for the system to plonk additional logical partitions, thanks to the old limitation of four primary partitions per drive. You can organise partitions within an extended partition however you want, although you will need to use /usr rather than /user. Your swap partition isn't actually mounted under /swap; it's just a partition with its type set to 'Linux Swap'. Back to the list ****** Segmentation fault when installing Mandriva Q:: I managed to get my laptop to install Mandriva 2005, but I'm having a problem installing to my desktop from the DVD version. All seems to go well until reboot, when I get a segmentation error and the boot halts. I've tried doing a fresh install with only the keyboard and mouse attached but I get the same error. I've also given Knoppix 3.3 a go on the same machine as specified below and that fires up no problem. The basic machine spec is: Athlon XP2400+, XFX KT400ALH mobo, 3/4 gig DDR333 RAM, SBLive, HP PSC2110 printer/scanner, Line 6 GuitarPort, sync cradle for iPaq, BT Voyager wireless router through the LAN port. The installation is on to a fresh 20GB drive in a caddy. A:: Trying another distribution is often a good way to go, as it makes it easier to verify whether this is a hardware incompatibility issue or something else. Segmentation faults are generally caused by mismatched library versions, or by bad hardware, but it's worth investigating the problem with Fedora or Ubuntu before going to the hassle of swapping hardware out.
Knoppix 3.3 is fairly old and based on a 2.4 kernel, so if you have a copy of Knoppix 3.7 or 3.8 it may help to try that instead, so that you're comparing apples with apples. Back to the list ****** Fedora GRUB problem after installation Q:: I am trying to install Fedora as a Samba server for the Win98/ME boxes in my workshop. During installation I receive no messages of anything unusual, but after rebooting it comes to a halt at Grub. It is completely stuck and it's not possible to type anything. I have tried twice with the same result. The setup is a PII 300 system with 256MB of RAM and two drives: 3GB for the system and 80GB for the files. I am only installing X, KDE and the Samba server plus some minor packages. No mail, internet, firewall or office packages. I am trying to keep it as simple as possible. I started with SUSE Server 9.0 but it started installing the whole Linux world, and uninstalling packages under Linux is not that simple. I'm close to concluding that Linux is a non-productive system. A:: This suggests the machine is unable to read the stage1.5 loader. This is often due to a filesystem issue, or because Grub was not installed correctly. You can boot off a Knoppix disk and manually reinstall Grub with the following commands: --- # grub grub> root (hd0,1) grub> setup (hd0) ,,, This will tell Grub that the filesystem holding its files is on /dev/hda2 - Grub counts both drives and partitions from zero - and to install the Grub boot code into the master boot record of /dev/hda. As you have multiple drives, you will have to review your partition assignments before doing this. Once you issue the 'root' command, it will tell you if it can read the stage1.5 and stage2 loaders correctly or not. You may also want to look at installing Fedora Core 3, or a distribution such as Mandrake, which is particularly user-friendly. FC3 contains lots of updates over FC2, which may solve your problem without the headache of reinstalling Grub.
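Incidentally, if you are unsure which (hdX,Y) partition to pass to Grub's root command, Grub can work it out for you. This sketch assumes a standard /boot/grub layout; the reply will differ on your machine:

```text
# At the grub> prompt, ask Grub which partition holds its stage1 file.
# It answers with something like (hd0,1), which is then the argument
# to give to the 'root' command.
grub> find /boot/grub/stage1
```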
Back to the list ****** Safely transfer data between partitions Q:: I recently installed Fedora on to my Dell 510M laptop, in a dual-boot setup with Windows XP, and it's giving me a problem. One of the Linux partitions (/) is nearly full, while the other one (/home1) is almost empty. I used Partition Magic 8.0 on the XP partition to resize it before installing FC 3, but I've used PM on another PC to resize a Linux partition and it messed up the whole Linux partition - it didn't boot anymore and I had to reinstall everything from scratch - so I'm reluctant to use PM again. On the Linux partition of my laptop I have installed QtParted, but this doesn't allow me to manipulate a mounted partition. I can see the settings, but can't change them. How do I safely transfer unused space from the /home1 partition to the / partition without destroying my Linux installation or losing data? I've spent so much time installing and configuring everything on this machine and I really don't want to have to re-do it from scratch. A:: As you are using partitions rather than volumes, it's very difficult to resize them without wiping everything out. A smart option is to back everything up prior to altering the partitions, or even to copy everything to a second disk (or the Windows partition if it is not NTFS-based) and then go back to the beginning and install Linux again. I would recommend that when installing Linux for the first time you create a separate /home partition in addition to the mandatory / (root) and swap partitions. In this way data is separated from the system so you can do an update or a complete reinstall while keeping your data safe. If QtParted isn't working, you might consider using resize2fs to shrink ext2 or ext3 filesystems then change the partition structure around, but it's hit or miss whether you actually adjust the partition correctly this way. 
To resize the filesystem on /dev/hda1 to 512MB, you would issue the following command as root: --- resize2fs /dev/hda1 512M ,,, The filesystem must be unmounted first; in general, unmounting is a good idea prior to making any changes that modify the structure of a filesystem. Back to the list ****** Linspire not recognising video card Q:: I'm having a problem with the Linspire 5.0 Live distro. I can run it as a Live CD with no problem, but when I try to install it by typing startx at the prompt the cursor appears and the screen turns grey - then it just stops. I'm not sure what to do to get it to work. Any help would be appreciated. My hardware specs are: Pentium 4 3.00 GHz, 512MB DDR2 RAM, 200GB ATA hard drive, ABIT AA8 DuraMAX motherboard, ATI PCIE Radeon vGuru series RX600 graphics card. Linspire has a voiced tutorial on how to run Linux - but not how to install it! A:: It sounds as if Linspire doesn't like your video card. It might not have recognised it correctly, or might not support it at all. As you have a high-end and fairly new Radeon chipset, the chances are that Linspire doesn't even know what it is. Video card drivers for X are updated frequently but many distributions are a few revisions behind and some are a long while overdue for an update. You may want to try another Linux distribution, such as SUSE (which lists your card as fully supported in versions 9.2 and 9.3), Fedora or Mandriva, just to see what they do with the card. These distros tend to be updated a little more frequently than Linspire, plus they have more community support to get updates in the mix early. Linspire may also have updates available, but you're kind of stuck if it won't even start a graphical desktop. Back to the list ****** File ownership with chmod and chown Q:: I'm having a problem with chmod in two areas. I generally use WS_FTP PRO to upload files, and sometimes have to change the mode after uploading.
The first problem I'm running into (and this has never happened before, with 35 sites running) is that a certain file will not change... I get permission denied. I also get that if I try to re-download it back to my computer. The dir it is in is set to 777 and I can change the mode of another file in that dir. So basically Linux will not let me change the mode on this one file. Any ideas? Also, if I try to chmod through PHP it tells me: --- Warning: chmod failed: Operation not permitted in /home/httpd/vhosts/etc...etc on line 65 ,,, I think this is due to a php.ini setting... Please help! A:: I believe that both problems are related here. It sounds like the file that you are trying to change isn't owned by you, and your user is not in the file's group. Try examining the ownership of a file you are able to change: that's the owner/group combination that is allowed to change those files. When files are created, you need them to all be owned by the same group, and then make sure that the group has write permissions. For example, if the directory's owner/group permissions (and all subfiles) need to be "root:apache 775", then you need to do a: --- chown -R root:apache dirname ,,, and then, to set the SGID and sticky bits to enforce this, do a: --- chmod 3775 dirname ; chmod -R 775 dirname/* ,,, (if there are more subdirs, the command is a bit more complex). This will set up the "dirname" directory with the proper user/group perms for a shared upload environment. Back to the list ****** Knoppix not recognising BT Voyager modem Q:: I tried the Knoppix Live CD, and it recognised almost all of my hardware except for my ADSL modem - for me, the most important bit of hardware by a mile. I have a BT Voyager 100 USB ADSL modem, which I received when I first went to broadband with AOL, and now use for my current ISP, Central Point. I have no idea where to go for drivers for this modem or how to go about installing them.
The current drivers from BT also launch into the sign-on interface - would this come with any Linux-compliant drivers? I would like to install either Fedora or SUSE. A:: The Voyager 100 is a popular modem, but it isn't supported by many Linux distributions. However, you can obtain drivers that will work with most systems. I'd suggest you head over to http://eciadsl.flashtux.org/download.php?lang=en and scroll down till you find the EciAdsl Nortek section. Now download the .bz2 file, and reboot to Linux. Copy the .bz2 file to somewhere like /usr/src and start a shell prompt: --- tar -jxvf eciadsl-usermode-0.10-nortek-alpha.tar.bz2 cd eciadsl-usermode-0.10-nortek-alpha ./configure make install make cfg cp GS7470_SynchFiles/gs7470_synch01*.bin /etc/eciadsl ,,, Now you have to configure the driver with the settings BT gave you. The VPI is 0 and the VCI is 38, and the password can be anything you want. When it asks you to choose a modem, type 10 and enter the chipset as GS7470. When choosing synch files, you want gs7470_synch01.bin. There is some other information in the tarball in case you run into problems, but this should be enough to get you up and running. Back to the list ****** Edit Windows Apache http.conf from Linux Q:: I am a newcomer to Linux, and am trying out Mandriva Linux before I make the switch from Windows. On my Windows setup I have installed Apache, PHP and MySQL - all of which I use to test my websites offline, before I expose them to the public. Everything works fine. In Linux, Apache auto-loads (I noticed this from a previous Mandrake install), and I have also installed PHP and MySQL. So far, so good. The problem is that all my data is on my FAT32 Windows drives. I don't want to copy over the files to a Linux partition because then I wouldn't have access to them from Windows. So I would like to know if it's possible for Linux to access the Windows partitions for Apache, and what and where to write in the Linux Apache config file httpd.conf.
I noticed the httpd.conf setup file for Apache is considerably shorter in Linux than it is for the Windows machine, so that's doubly confusing me about how to edit the file. Editing Linux line-ending files when in Windows is no problem, as I have an editor that will read and write both Windows and Linux formatted text files and line endings. A:: You can edit the config file httpd.conf and adjust the DocumentRoot option to point to your web directory mounted from your FAT32 filesystems. You can check with mount which filesystem is mounted where in the Linux file structure, and point to the appropriate directory. Bear in mind that Apache doesn't much like files with spaces in them, and is case-sensitive under Linux, so your existing websites may not work exactly as they do in Windows. You can find httpd.conf in /etc/apache; it is generally well commented under Linux, so you can adjust configuration options without guessing at what they will do. Back to the list ****** WebBox not accessible Q:: I have three machines connected on my network, and problems working between them. They are: LinBox running Mandrake 10.0, WinBox running Windows 2000, WebBox running Mandrake 9.1 as an internal web server. The network seems to be working only partially. The connection seems fine between the Linux and Windows boxes - I can ping between them and WebBox can ping to both - but I can't ping to WebBox from either of them. However, I can open the default web page on WebBox (running Apache) at http://192.168.0.3:80, and using LinNeighborhood I can see the Linux and Windows boxes on all three machines, but not the web box (they are all in the same workgroup, INTEGRANE). Can you suggest anything? A:: It sounds to me like your web server box is not participating in the Windows network properly, even though its IP configuration is sound. It may be that there is a firewall running on the system, and it's blocking the SMB traffic from the network.
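For reference, SMB browsing needs UDP ports 137 and 138 plus TCP ports 139 and 445 open. If a firewall does turn out to be the cause, rules along these lines would let that traffic through from the local subnet. This is only a sketch - the subnet is taken from your message, but the chain layout is an assumption, so adapt it to your own ruleset:

```shell
# Accept NetBIOS name service and datagrams (UDP 137-138) plus
# NetBIOS session and CIFS traffic (TCP 139 and 445), LAN only.
iptables -A INPUT -s 192.168.0.0/24 -p udp --dport 137:138 -j ACCEPT
iptables -A INPUT -s 192.168.0.0/24 -p tcp -m multiport --dports 139,445 -j ACCEPT
```

Restricting the rules to the local subnet keeps SMB hidden from anything beyond the LAN.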
To verify whether a firewall is running, run iptables -nvL. If you can ping the system by IP address, in this case 192.168.0.3, I think it is unlikely to be a routing issue. The fact that you can access the web server directly pretty much discounts any IP problems. What I'd suggest is that you compare the smb.conf files from LinBox and WebBox to establish if there is any configuration variation within Samba on the two systems, and to ensure that Samba is running correctly on your server. Back to the list ****** Which CMS should I use? Q:: I work for a company with about 75 people in different departments, some of them technical, some of them not. I would like to have a content management system to hold all the data that could be shared within the organisation, ideally with the minimum technical input possible. Maybe text files or Word documents. Can you recommend one? A:: As a matter of fact, I have looked into this issue recently and I'd like to share some of my findings. Mambo (get it from www.mamboserver.com) takes the cake: it's the most popular, most versatile and most configurable CMS. The only snag is that it has by far the steepest learning curve around, and from what I know, customisation involves delving into the code. I won't comment on the usual suspects - PHP-Nuke, Postnuke and the like - because they've been amply discussed elsewhere. I would like to mention TWiki here (http://twiki.org) because it is a nice little system, though before customisation it looks overwhelming to the non-techie. But the CMS I liked the most was Exponent (www.exponentcms.org). It's the most configurable, the most straightforward and the most docile to the untrained eye. Just set up an underprivileged user in it, log in as that user and you'll see that in Edit mode it is nowhere as scary as the others - it's quite inviting and intuitive. 
One last thing - I'd invite you to look at www.opensourcecms.com, which has live demonstration sites of web systems that are reviewed and rated to help IT managers make up their own minds on the subject. Back to the list ****** Upload files with FTP or SSH? Q:: I have a networked web server set up with Apache, which I am going to use for testing websites. I can access the web pages via port 80 over the network, but at the moment only the default index.html file is available in /var/www/html. What is the best way to upload files? Can I set up FTP, and if so, how do I do it? A:: Actually there are a number of ways to get remote file access with Linux, the most popular of which is the SSH protocol. SSH offers secure transport for shell access or file transfers, although there is some overhead due to the encryption and compression that take place (more of which in Snail-paced Backups). Of course, on a public network security is a major concern, so the safety of SSH offsets many of its throughput problems. SSH works with Windows using a client such as WinSCP for file transfers and PuTTY for shell access. If you are going from a Linux workstation to a Linux server on a local network you can use NFS to share a filesystem on the server and access it on the workstation as if it were local. This is usually the best configuration, as NFS runs at near to line rate. However, it is insecure on anything close to a public network. For a Windows client, you can always install Samba on the Linux server and access the web directories using a share. Copying data back and forth is simple, although Samba might take a little coaxing to work exactly as you want it to. Back to the list ****** Fix slow backups with Unison Q:: I'm trying to set up Unison so that it will synchronise backups on a small LAN. It all works fine, except that the transfer rate is very slow compared with the LAN's normal speed. The PCs are both using Unison on SUSE 9.2, connected by SSH.
I'm getting about 80KB/sec on a 100Mb/sec LAN that normally manages about 10MB/sec over NFS. Is the lack of speed due to SSH or to Unison? Is there any way to tweak whichever is causing the slowdown to improve matters? A:: Depending upon the speed of the systems transferring data, the encryption overhead can cause the throughput rate to drop significantly, but for modern workstations and servers it probably won't be noticeable. However, when you're accessing the drive and possibly zipping up the contents, things can run slowly and start to drag. 80KB/sec is pretty dismal, though, and we doubt SSH has anything to do with it. You could try running Unison over rsh or rsync and see if you're able to squeeze any more life out of it. Copying some data using SSH directly will help to indicate whether SSH is really any burden at all. Back to the list ****** Cannot browse Debian Samba share Q:: I've got a Samba web server on Fedora, which is fine for browsing and generally works OK, so I decided to try a Debian server as well. I set up a very simple Debian Stable system and Samba, and the storage share is working fine, but I can't browse to it. I can open the share by IP address, so it seems to be a NetBIOS problem. I don't understand this, because I've been going over and over the smb.conf and comparing the two. The only difference is that Debian is using domain authentication and I used SWAT on it instead of hand editing as I did with Fedora. Also on Fedora, I've noticed that I can't ping any Windows systems by hostname, although they can ping me by hostname. Both systems have nmbd and smbd running and both these machines have their hostnames set as well as other general networking settings (IP, gw, netmask, broadcast, etc). A:: You can always add a Linux system to the hosts file on Windows, which can be found using the Search option. Here you can set specific hostnames for IP addresses on your network, which will work in the event of a machine becoming unreachable via NetBIOS.
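On Windows 2000 the file is typically C:\WINNT\system32\drivers\etc\hosts, and each entry is simply an IP address followed by a name. The address and hostname below are examples, not taken from your setup:

```text
# Example entry in the Windows hosts file
192.168.0.10    debianserver
```

Once the entry is in place, \\debianserver will resolve on that machine even when NetBIOS name resolution fails.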
You can also add the following option to the global section of your smb.conf. Of course, this needs to be unique for each of the Samba servers. --- netbios name = SambaServer ,,, Back to the list ****** Large file uploads with Apache Q:: I would like to use HTTP to upload large files (in excess of 200MB) to a server, because some of our customers aren't able to use FTP due to firewall restrictions (using Perl or PHP as the upload handler). What issues are there with Apache handling this amount of traffic in an occasional burst, or even frequently? Should there be any tuning done to the server that would help? Or are there any amendments I could make to our Apache configuration file that would help? A:: Indeed, there are a number of configurable limits on upload file size. The ones that might affect you are Apache's LimitRequestBody, though it defaults to unlimited in version 2.0, and PHP's upload_max_filesize and post_max_size. However, what generally affects Apache the most in file uploads is memory consumption and how long each httpd process stays busy. A long upload ties up an httpd process, which limits the number of requests httpd can handle, either because of the number of processes that have to be in memory or because of the memory footprint of the processes themselves. Also make sure you're not allowing uploads via SSL, to avoid the overhead that encryption would incur. If you're using Perl, you should fork so that you release the httpd process and have your child process handle the rest of the transaction. I couldn't tell you anything about that in PHP as I haven't used it for quite a while. Back to the list ****** Mount partitions as read-write in Knoppix Q:: I have discovered the wonders of Knoppix while waiting for my new PC to arrive. The HD from my Linux PC is on /dev/hdb and all my data is there. I wanted to copy a few files from /mnt/hdb7 to /mnt/hda5 but I get a read-only message. I have done su and checked /etc/fstab and listed the permissions - it all looks fine.
I have never worked with Knoppix before; I just booted it up as a novelty. Now that it is a truly useful tool I need to know how to make the most of it. I am sure that I am missing something obvious. A:: The general philosophy of Knoppix is to allow users as little write access as possible. For this reason, existing partitions are either not mounted or mounted read-only. If you click on a partition's icon with the right mouse button, the 'read only' attribute under the Device item can be unchecked; after this, the partition can be mounted read-write (for already mounted partitions, first click on Unmount). CAUTION: writing to NTFS partitions can lead to data loss, since Linux does not really support this filesystem! However, DOS and FAT32 filesystems are safe for write access. In the shell, the command mount -o remount,rw /mnt/<partition name> will make an already-mounted filesystem writeable. Back to the list ****** Cannot write to USB Lacie hard drive in Linux Q:: I am using a USB2 160GB Lacie external hard drive, formatted as a primary FAT32 partition. Whenever the disk is connected, an unmounted icon is placed on the desktop, which mounts just fine when I click on it. Unfortunately, I cannot write to it. I checked the relevant permissions and as far as I can see there's no reason why I shouldn't be able to write to it - but it still denies me write access. Ideally I would like to just put the entry for this disk into my /etc/fstab file. I tried to do that with: --- /dev/sdb1 /mnt/LACIE vfat defaults 0 0 ,,, I have even tried it with fmask=775 and mask=775 but it still won't let me write. Also, when I have mounted sdb1 to /mnt/LACIE the automated USB icon gets confused, because it wants to mount the device sdb1 to /media/sdb1. I am using Kubuntu with KDE 3.4. A:: You can try adding the user option to the fstab entry, which will allow the disk to be mounted by a non-root user so that it can be written to and read by the user who mounted it. 
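Putting that together, a working fstab entry might look something like this - note that umask=000 is our assumption to make the disk writeable by everyone, so tighten it if the machine is shared:

---
/dev/sdb1  /mnt/LACIE  vfat  user,rw,umask=000  0  0
,,,

With user in place, mounting from the desktop should also work without root privileges.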
Alternatively, on filesystems such as VFAT that don't store Unix ownership information, you can pass uid= and gid= options to set the disk's default UID/GID, although on filesystems that do store their own ownership, such as ext3, these options won't make any difference. Back to the list ****** USB storage driver issues Q:: For some time I have been looking to upgrade from Win98 to Linux, but was reluctant to modify my computer running Win98. A few months ago I was given an old Dell Optiplex with PII processor, 4GB hard disk, 128MB RAM and CDROM drive, running Win95. By a happy coincidence I have a live CDROM of MandrakeMove, and running this showed me that Linux would work on the Dell, and pick up all its features. I used MandrakeMove on the Dell to learn more about Linux, and when you produced a magazine about Fedora, I decided to make the Dell a Linux machine. Loading Fedora went more smoothly than I expected, and more smoothly than some reloads I have done. Later I was given a USB memory stick and found it very useful on my main Win98 machine, so I decided to try it on the Dell. By this time I had learned enough about Linux to work out what I had to put into the fstab file and the mount point, and it worked on the first boot into Fedora. Now I was really getting bold and decided to see if the USB Zip 250 drive from the other machine worked on the Dell, and loaded MandrakeMove to see if it was detected. That was a success, and I deduced that the Zip drive for some reason was /dev/sda4. I put this into the fstab file with a mount point and changed the USB memory stick to sdb1, feeling very clever. It was then that I got into trouble. Fedora saw the Zip drive, which worked, but did not see the USB stick in the other USB slot. I went back to MandrakeMove, but was very disappointed to see that it too saw the Zip drive, and it worked, but it too no longer saw the USB stick. Without the help of a crib from MandrakeMove I am now stuck. Please can you help? 
A:: The USB storage driver, responsible for mounting devices such as USB keyring drives, hard disks, some cameras, optical drives and anything else storage related, has a hard job to do. Part of the problem is the hotplug nature of these devices - they can be attached and removed at any time - and as a result your drive may not always end up with the same designation, particularly if you plug it in after the system has started. The likelihood is that the drive is being recognised, but is mapped to a different address. The only real way of knowing where it has been put is by looking at the kernel messages. Using: --- dmesg ,,, will show you the system messages, but there is a lot of junk in there. However, being cunning, we know what it should say, so we can filter the output: --- dmesg | grep 'SCSI device' ,,, This filters the messages to show only the lines containing the text we are interested in. You should see something like: --- SCSI device sdb: 128000 512-byte hdwr sectors (66MB) ,,, Here we can see that the device has been attached as 'sdb', but it could be any other available slot. The reason the Zip uses sda4, or the fourth partition of whichever drive it becomes, is a throwback to the multi-platform support of Iomega devices, which keep special Mac stuff on the first partition. Back to the list ****** Blocking SSH access to a particular PC with iptables Q:: I'm configuring a firewall that's got an eth0 link to the internet and an eth1 link to an internal subnet (172.16.2.0). I've put in the following rule to stop all SSH access to a PC (172.16.2.120) on the subnet via the firewall, thus: --- iptables -A FORWARD -p tcp -s 0/0 -d 172.16.2.120 --dport 22 -j DROP ,,, However, this rule is still allowing other PCs on the subnet to connect to the PC. I've also tried the following rules, and even gone to the point of specifying an individual source PC on the subnet, dropping all SSH traffic to the destination PC and changing the FORWARD policy to DROP. 
--- iptables -A FORWARD -p tcp -s 172.16.2.0/24 -d 172.16.2.120 --dport 22 -j DROP iptables -A FORWARD -p tcp -s 172.16.2.220 -d 172.16.2.120 --dport 22 -j DROP iptables -A FORWARD -p tcp -d 172.16.2.120 --dport 22 -j DROP iptables -P FORWARD DROP ,,, Yet I can still contact the destination PC from another PC on the subnet. I've read and read and read till I'm blue in the face, and can't for the life of me figure out why this isn't working. A:: As you are SSH-ing between two systems on a local network, the traffic doesn't route across your firewall, so the packets will never be inspected by it. If you want to block SSH access, you will have to set up a firewall on the SSH server itself to block the traffic. Another option, if you have a spare NIC, is to split the network into two segments and bridge them using the bridge-utils package in Linux. You will then be able to perform packet filtering on the firewall for traffic that passes between the two LAN segments, even though the packets are not actually routed. Lots of information on this configuration can be found at http://bridge.sf.net. Back to the list ****** Automatically set up internet sharing Q:: I have a two-computer network at home, one machine running Windows XP and the other Mandrake 10.1. The Mandrake box acts as the server for the Windows machine and shares the internet connection. My trouble is that every time the internet connection is restarted I have to enter the following three commands as su: --- iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE echo 1 > /proc/sys/net/ipv4/ip_forward route add default ppp0 ,,, What files do I need to edit to put those commands into so I don't have to do this manually each time? A:: Once you add the iptables rule, you can run /etc/init.d/iptables save, which will save your iptables configuration for the next reboot. IP forwarding is enabled through /etc/sysctl.conf, where you can add the line 'net.ipv4.ip_forward = 1' to set up IP forwarding next time you reboot. 
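In other words, the sysctl.conf addition looks like this, and running sysctl -p as root applies it immediately without waiting for a reboot:

---
# /etc/sysctl.conf
net.ipv4.ip_forward = 1
,,,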
For the default route through ppp0, you should be able to reconfigure your dialler to add a default route automatically when the modem connects. Most dialup setups add one by default, so checking the logs to establish why one is not being added may be a good place to begin. Back to the list ****** Automate backups in crontab Q:: Back up, back up, back up is the first rule of the trade. Besides my business, I run a small home web server and tend to back up my files on it as often as possible. I use rsync on my backup machine: --- rsync -avz myserver:var/www /backup ,,, What I would like to do is schedule the process so that it runs every four hours or so. Any advice is welcome - my data needs you! A:: Well, we know that scheduling tasks in Linux is possible, because, as most of us have found to our cost, the /tmp directory automagically deletes files with tmpwatch at defined intervals - often before their usefulness has expired! So let's explore how that is implemented. First we'll find the tmpwatch configuration file: --- [root@carve ~]# rpm -ql tmpwatch /etc/cron.daily/tmpwatch /usr/sbin/tmpwatch /usr/share/man/man8/tmpwatch.8.gz ,,, It looks like the only file in /etc is there under cron.daily. Let's see what's in there: --- [root@carve ~]# cat /etc/cron.daily/tmpwatch /usr/sbin/tmpwatch -x /tmp/.X11-unix -x /tmp/.XIM-unix -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix 240 /tmp /usr/sbin/tmpwatch 720 /var/tmp for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do if [ -d "$d" ]; then /usr/sbin/tmpwatch -f 720 $d fi done ,,, If you checked the man pages for tmpwatch, you'd find no switches for setting a time or an interval for running it, so obviously something else is driving this program every so often. Let's query the rpm database for the owner of the parent directory: --- [root@carve ~]# rpm -qf /etc/cron.daily/ crontabs-1.10-7 ,,, A quick search on the web tells us that crontabs are actually a bunch of files that allow us to run commands at an hourly, daily, weekly or monthly interval. Ah, we're getting somewhere. On closer inspection of the files provided by the same package, we get: --- [root@carve ~]# rpm -ql crontabs /etc/cron.daily /etc/cron.hourly /etc/cron.monthly /etc/cron.weekly /etc/crontab /usr/bin/run-parts ,,, Looking at the code, /usr/bin/run-parts is just a script that runs everything in a given directory, so those intervals work with crond simply by dropping a script into one of the first four directories in the listing above. run-parts does not in any way make scheduling possible, which leaves us with /etc/crontab. --- [root@carve ~]# cat /etc/crontab SHELL=/bin/bash PATH=/sbin:/bin:/usr/sbin:/usr/bin MAILTO=root HOME=/ # run-parts 01 * * * * root run-parts /etc/cron.hourly 02 4 * * * root run-parts /etc/cron.daily 22 4 * * 0 root run-parts /etc/cron.weekly 42 4 1 * * root run-parts /etc/cron.monthly ,,, The listing now makes it obvious that our answer is in those encrypted-looking lines just after the variable assignments. Let's inspect the first line: 01 * * * * root run-parts /etc/cron.hourly. A quick rummage in the manual pages provides us with an explanation. The first entry on the line is the minute at which a command should be run, the second entry is the hour, the third is the day of the month, the fourth is the month and the fifth is the day of the week. Any interval can be defined providing we use the * character to tell cron (the running daemon that's configured via /etc/crontab) that it should run the command at every iteration of that field - in other words, every minute, every hour or every day. The sixth entry (root) is the user under which the command should be run. The seventh and last field is the command, which in the example above is run-parts /etc/cron.hourly. Let's go through the rest of the crontab entries to further understand the format of /etc/crontab. 
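To keep the field order straight before we work through the remaining lines, here it is at a glance (as the crontab(5) manual page defines it), using the hourly entry as the example:

---
# min  hour  dom  month  dow  user  command
  01   *     *    *      *    root  run-parts /etc/cron.hourly
,,,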
The line 02 4 * * * root run-parts /etc/cron.daily tells crond that it should run run-parts /etc/cron.daily at minute 02 when the hour is 4, no matter what day of the month, month or day of the week it is, as the user root. So run-parts /etc/cron.daily runs daily at 4:02. Moving on to the next line, 22 4 * * 0 root run-parts /etc/cron.weekly, we can tell already that run-parts /etc/cron.weekly will run as the user root at 4:22 no matter what day of the month or month it is, at day 0 of the week. Or, translated, run-parts /etc/cron.weekly will run at 4:22 on Sunday. A description of the values you can use in each field is available in section five of the crontab manual pages. You should read it, as some fields start with 0, others with 1. Now, let's set up a cron job for you. You said you want the job to be run every four hours no matter what day or month it is. Fortunately, crontab's hour field accepts not only single values, but also ranges and lists of values. Ranges are defined as <start>-<end>; lists are defined as a group of comma-separated values, such as 'value1,value2,value3...'. So the command you want to run can be implemented with the following line: --- 0 0,4,8,12,16,20 * * * root rsync -avz myserver:var/www /backup ,,, Alternatively, you may use a step-type hour definition: --- 0 0-20/4 * * * root rsync -avz myserver:var/www /backup ,,, This is an instruction to go through the integers from 0 to 20, incrementing the counter by four each time, which equates to exactly the hour definition above. Note the 0 in the minute field: a * there would run the job every minute during those hours. Alternatively, you could use an unprivileged, non-root crontab, which can be set up with the userspace tool crontab. Back to the list ****** Turn off ports using iptables via a web-based front-end Q:: I'm using a Red Hat 9 server as my router and iptables to shut down all unnecessary ports, but sometimes I want to turn off two additional ports using a web page interface, while keeping the existing rules in place. 
I figured PHP was the best tool to use, but since I've never really used PHP I was hoping this would be a simple question for an experienced programmer. How do I do it? A:: Modifying iptables rules can be done easily through PHP using the system function, which allows execution of a system binary. However, this would require the web server to run as root, which is pretty insecure and may compromise the system through the web service. You may want to look at a firewall system that gives you a graphical interface to your iptables rules, such as Astaro, ClarkConnect or SmoothWall. Depending upon what exactly you want to do with the ports you open, a technology that permits VPN access to the network, such as OpenVPN or IPsec, may be a better alternative than opening the Linux system up to possible security breaches. Back to the list ****** ATI video card problems with SUSE Q:: I have an ATI All-in-Wonder 9800 Pro AGP video card. I need help getting the video card installed and working with SUSE 9.2 Pro. I used an old 16MB video card to boot the system in order to flash upgrade the BIOS, as the onboard video wasn't working. My first attempt at switching the power on with the new card only gave me a black screen, and the monitor had a 'Check cable connection' message on the screen during the boot cycle. I have yet to get beyond this information on the screen. Can you help? A:: If your machine doesn't show the BIOS boot screen with the ATI video card installed, it is likely to be a hardware issue. Most motherboards will beep with an error code during POST indicating why they don't like the hardware. We'd recommend returning this video card and obtaining a replacement, as it seems to be defective. Back to the list ****** Downgrade Fedora distro to earlier release Q:: I recently installed Fedora Core 4 on my Dell Inspiron 6000 laptop after about six months of using Fedora Core 3, and I'm having mixed feelings about it. 
I love how it automatically finds and configures my widescreen display, but I'm rather disappointed that since installation my soundcard no longer makes any sound. It worked fine in FC3 without any intervention from me, but now I run Soundcard Detection and it plays no sound but gives me a very long model name: 'Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) AC'97 Audio Controller' and the module 'snd-intel8x0'. I seem to remember the AC'97 in Fedora, but the rest is Greek to me. My laptop is dual booted with Windows XP Professional, where the soundcard works fine, giving me the name 'SigmaTel C-Major Audio' under the device manager. A similar problem has occurred with my Logitech Quickcam for Notebooks Pro webcam. It worked without a hitch in GnomeMeeting on FC3, but in the same program on FC4 only the microphone seems to work. It comes up in Soundcard Detection as 'unknown'. I'm disappointed that a newer version of Linux seems to be less compatible with my devices than older versions. Has anyone else had this problem? I live in the US Midwest, where Linux isn't very well known at all, but I'm working to change that! So I'd like to say thanks and keep up the good work... A:: There are lots of changes in new versions of Fedora, so you may simply want to jump back to one you trust, ensure you have the correct updates, and submit some bug reports to Red Hat to find out if anyone else has the same issues. Unless you follow through with bug reports and make sure that people working on USB and sound support know that it's a problem, FC5 is going to be just as broken. There are also a number of mailing lists and IRC channels associated with Fedora that may help solve your problems, or at least make sure that the information is routed to the correct individual. Almost every US city has a LUG of some variety, even in the Midwest - check out your local universities or colleges, as these are often great places to get information from fellow Linux enthusiasts. 
Back to the list ****** Monitor Apache system load Q:: My customers have been reporting my website being down, usually around peak times. Could you suggest a simple way to monitor my system load? A:: An Apache slowdown is almost always due to memory. If you're not looking for something like Cacti (http://cacti.net), which uses RRDtool to record almost anything imaginable on a server, the simplest way I can think of is to have a shell scriptlet that runs an infinite loop while capturing the output from a couple of utilities. You could use sar to monitor CPU usage, free for memory usage, and vmstat for processes, paging and I/O. Use this last tool with a delay of three to five seconds to capture a couple of successive snapshots of your system state - this is just to make sure that the information you're getting from the redirected outputs is not just a spike. Use your judgement as to how frequently the dumps are made, unless you're prepared to code a monster application to sift through all the information. Back to the list ****** Firefox times out when using IPv6 addresses Q:: I have SUSE 9.2 installed on my box at home, and am having problems with browsing the web. I have no problems with Konqueror, but I installed Firefox and Mozilla and neither will surf the web - they always time out. The network settings are the same as Konqueror's - but they don't load any pages. I don't know if this is a red herring or not, but when I change the network settings from Direct Connection in Konqueror to Auto-Detect From The Proxy I can browse web pages that I have loaded first in Konqueror. However, I have problems submitting forms or accessing links from the page and it times out again... I'm utterly bemused! Any ideas how to get Firefox up and running? I need it so that I can ditch my Windows partition and do all my dev work in Linux! A:: This is a common error, and is caused by Firefox or Mozilla trying to access the network using an IPv6-modified IPv4 address. 
The simplest way to fix this is to add the following line to /etc/modules.conf: --- alias net-pf-10 off ,,, This will disable 'protocol family 10' on the system, which is essentially the IPv6 system. Both Firefox and Mozilla support both IPv4 and IPv6, which can often cause problems on installations where the networking isn't set up quite as it should be. Back to the list ****** Graphical X Window System applications failing after disk upgrade Q:: I recently moved my system from an old 120GB disk to a new 200GB disk. To do this, I booted from a Live CD, mounted both disks and used cpio to copy all the files across. --- cd /mnt/old_disk find . -print | /mnt/old_disk/bin/cpio -pamd /mnt/new_disk ,,, All went fine up until the point Fluxbox wanted to start. To cut a long story short, it didn't. I can run TWM, xterms and aterms, but that seems to be about it on the graphical application front. All my favourite C apps seem OK. When I try to launch Gaim or Opera or pretty much anything else that uses X apart from MPlayer, they all segfault. Sometimes xine will appear briefly before dying, but Gaim never does anything. I've checked permissions to make sure they were preserved OK (as far as I can tell, they were). I've also recompiled my kernel and rebuilt X, all to no avail. Everything else, including Apache, PostgreSQL, MySQL and SSH, seems to be doing just fine; it's just the X apps. Any advice? A:: If you still have the original 120GB drive, you may want to copy the data over again to ensure that all of the shared libraries that X applications need are complete. We normally just use cp -fra /mnt/old /mnt/new to copy the contents of a whole hard disk, which generally does a good job of making sure everything is as it was. A tool such as strace can be used to look at exactly why a process fails: perhaps it can't open a /dev file, or there's a permissions issue that it doesn't know how to handle. 
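A minimal strace session for the Gaim example might look like this (the log filename is our own choice):

---
strace -f -o /tmp/gaim.trace gaim
tail -20 /tmp/gaim.trace
,,,

The -f option follows child processes and -o writes the trace to a file; the last few lines before the crash will usually show a failed open() on a library or device file.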
The strace command is a little cryptic, but it is usually reasonably easy to figure out where exactly the application is bombing out. Back to the list ****** SUSE cannot see other distro files Q:: I have a hard drive (hda) with Windows 98 and SUSE 9.2 installed on it. I also have another hard drive (hdb) on to which I installed MEPIS PRO. The problem is that when I boot into SUSE, although it sees my Windows files, it cannot see the other MEPIS drive. Running dmesg shows that SUSE is seeing all the partitions, including hdb1 - but for some reason I am unable to access it. Originally each Linux distro tried to overwrite the MBR and delete any reference to the other Linux distro. Currently the SUSE GRUB loader gives me Windows and the SUSE option, and it would be nice if it could give me the MEPIS option as well. Because of this tendency to overwrite the MBR I installed MEPIS, including its boot loader, on hdb1. I can access it using the floppy install. The annoying part is that when I am in MEPIS it can see the SUSE partitions on hda. Any attempt to write a line in the fstab file is greeted with an error message when I try to load the hdb1 partition. Any help would be appreciated. A:: You can start by checking the partition structure on hdb using fdisk -l /dev/hdb. If /dev/hdb1 is the filesystem you want, running the following will manually mount it under /mnt/tmp, where /mnt/tmp is a directory that must already exist: --- mount /dev/hdb1 /mnt/tmp ,,, If this fails, check with dmesg to find out if there is an error from the kernel trying to mount the filesystem, or review the error output from mount to establish why it will not work. Back to the list ****** Fixing missing dependencies Q:: I have SUSE 9.1 installed on my computers at home. The boxes all have KDE 3.2.1 installed. The SUSE web page has KDE 3.3 to download. I have downloaded all the required files. Now, how the heck do I install it? YaST doesn't let me see the files, or maybe I am doing it wrong. 
If I use rpm, I have problems with conflicts with installed files, missing dependencies, or some files needing updated packages in order to continue with unpacking. Please can you suggest another course of action? Also, I have noticed that some flavours of Linux (under KDE) make use of the Windows key on the keyboard to activate the menu. SUSE Linux doesn't permit it or activate it. Oddly, it works on my laptop but not my desktop, and I have compared the KDE settings and can't see anything different. Mandrake does let you use the Windows key to activate the KDE menu... How do I activate it? Is there any way to activate the internet keys? A:: If you have already downloaded the packages, then you need to set up YaST to look for them. There is an option to add a new source of packages - point this at the directory where the downloaded files are. X actually controls how the keys are mapped, but assuming the key you want to use is recognised under X (the Windows key is usually mapped to 'F13'), you can change the use of the key in the KDE Control Centre. Launch it and open System > Khotkeys. SUSE should already be set up to pop up the K Menu with Alt+F1. Back to the list ****** How to audit root account activities Q:: I'm looking for a way to audit root's activities on the server. The root password is held by three people who look after the server for me. Is there a better way to do it? A:: The sudo command is the answer to your problems. What sudo does is run a command as a substitute user. You can set this up in one of two ways: either give those people the root password and have them authenticate twice - once as their own user, then again to run the command through sudo - or have them authenticate just once with their own password, which keeps the root password hidden from them. I suggest the first, and rotate the root password as frequently as you're comfortable with. 
You also get a thorough log of all commands executed using sudo, along with information on who ran each one and an expanded command line - so if you use wildcards, you get the full picture. Editing the sudo configuration file, /etc/sudoers, is preferably done with the command visudo. You'll need the following lines: --- # /etc/sudoers exampleUser ALL=(ALL) ALL, !/bin/bash, !/bin/tcsh, !/bin/sh, !/bin/csh, !/usr/bin/strace ,,, Basically, we've allowed user exampleUser to use sudo to run all commands from all hosts except for /bin/bash and the other commands on that line, because otherwise a user could run sudo bash or sudo strace to hide what they're doing. There is an element of trust here: no restriction this simple can stop people with elevated privileges from sidestepping the limitations. If you really want to lock your server down, you should consider using SELinux. It's gaining users every day, so the online help is expanding all the time. Back to the list ****** Map Windows Storage Server on Linux Q:: I am running Red Hat Linux 9, and using the following command to map Windows Storage Server 2003. --- mount -t smbfs -o username=<username>,password=<passwd> //<ipaddress>/share /mntpoint ,,, On the Windows side, I just make the shared folder read-only. It successfully maps between Linux and Windows, but recently Windows has begun to refuse the connection. When I check the Linux host, the mount is still there, and when I restart Windows I can read and write files to the directory without remounting on Linux. Before I restart Windows, I can't read the files in the mounted directory on Linux. I get the error message: 'ls: Stale file handle'. Is there anything I can do on the Windows side? Is there any service that I can restart without rebooting Windows? A:: You can always restart the File and Print Sharing service in Windows 2003, which I hope might solve your problem. 
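File and Print Sharing is provided by the Windows 'Server' service, so from a command prompt on the Windows machine you can bounce it without a full reboot (you may be asked to stop dependent services as well):

---
net stop server
net start server
,,,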
It sounds as if a scheduled service, such as Windows Update, is causing the Windows system to fall over. Windows has a comprehensive event log, which may help you locate the specific issue. Our forums are full of people who are crazy enough to use Windows as a file server, so that may be a good way to find out if there are any changes in Windows 2003 that need to be made in order to reliably mount Windows shares in Linux. Back to the list ****** AL511 monitor not supported in Linux Q:: I'm setting up a computer for a complete newbie and couriering it to them across the country. The box has an NVIDIA Pro card and their screen will be an Acer AL511 flat screen. If I set the monitor up to suit my hardware, a 'detected hardware' change at the user's end will effectively uninstall the NVIDIA drivers. I therefore want to set the box up with the correct settings for their monitor just before shipping it off, so it will work straight away on their system. Problem: there are lots of Acer monitors under display config, but no AL511. I know that they are probably cross-compatible, but some will not be. I don't want to pick the wrong one in case it causes grief to this complete newbie at the other end. I certainly don't want it to run diagonal lines, have bad flicker or have too low a resolution for their hardware. A:: LCD displays are fairly easy to set up. You just need to choose the appropriate vertical refresh rate in X to ensure that the modes used are not beyond the maximum refresh rate of the display. A 'General LCD' setting should be sufficient to ensure that the display works correctly. Reviewing the display's specifications on the internet will help you decide which resolution should be set as default and which refresh rates can be used. Back to the list ****** Getting SpeedTouch broadband modem to work in Linux Q:: I have just installed SUSE 9.3 - my introduction to Linux. All seemed to go well, except that it didn't recognise my SpeedTouch broadband modem. 
My ISP is Kingston Communications. As I'm a real newcomer to Linux (and an octogenarian to boot), would you please give me precise instructions as to how I can install the drivers (if indeed that's what I need)? All I can see when opening the file is what appears to be program code. A:: Lots of information on the Alcatel SpeedTouch USB modem, otherwise known as 'the frog', can be found at http://linux-usb.sourceforge.net/SpeedTouch. This includes open source versions of the drivers, as well as setup documentation to get you on to the internet using the modem. As you are running SUSE 9.3, you can follow the instructions at http://linux-usb.sourceforge.net/SpeedTouch/suse/index.html to get it up and running. Many ISPs give you the option of using either PPP over Ethernet or PPP over ATM, although the SpeedTouch USB documentation recommends using PPPoA. In either case, you will need to follow the specific instructions for the PPP method used to connect to your ISP. Back to the list ****** Can't use PATA and SATA drives at the same time Q:: I tried installing Fedora on my Athlon 64 box last night. The problem is this: I have two PATA drives and four SATA drives, and if I try to use both types I get a lot of garbage during boot and a lock-up. The info is excessive and contains a lot of 'fffffffffffff' and 'CPU locked' messages. When I disconnect the PATA drives all is fine, or if I disconnect all the SATA drives and leave only the PATA drives, things are again fine - but I want to use both types. It is not a fault of the motherboard, because Windows can handle the six drives at once, not to mention two DVD drives. A:: There are some known conflict issues with controllers used for both PATA and SATA devices (ie the same controller handling both types of drive). You haven't told us exactly which controller you are using, so we can't be certain, but that seems to be the most likely cause of this behaviour. Some devices do have boot-time workarounds though. 
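Such workarounds are passed as kernel parameters: press E at the GRUB menu to edit the kernel line for a single boot, or append the parameter in /boot/grub/grub.conf to make it permanent. The parameter shown here is only an illustration (acpi=off is a common first thing to try with boot-time lock-ups) - the right one depends on which controller you have:

---
kernel /vmlinuz ro root=/dev/hda2 acpi=off
,,,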
Log in as root and use dmesg to check the hardware found at boot. The lsmod command will tell you which modules are running on your system (check for libata, which is often used to load the SATA drivers). A Google search for 'Fedora' or 'Linux' plus the device or driver name may yield some results - or tell us specifically what hardware you have and we can investigate further. Back to the list ****** DVD playback is choppy in Xine Q:: I have recently installed Ubuntu on my Toshiba 1800 laptop. It seems to work fine, but DVD playback is choppy in Xine and MPlayer crashes as soon as I try to play a movie. Any guidance you can give me would be appreciated! A:: Top of the list of things to try is to make sure DMA is enabled for your DVD drive. Something like: --- hdparm -d1 /dev/hdc ,,, should do the job (change hdc for your DVD drive's device if it's different). MPlayer as supplied with Ubuntu has a number of problems - if you really want it, it's best to compile it from source. Back to the list ****** Stop Postfix from adding new system users for each email address Q:: I want to set up Postfix so that it won't add new system users for each email address I want to add. I usually learn quite well by example, but the tutorials I have found on this are very confusing. Can you suggest an easy tutorial or HOWTO? A:: Better, I can write you one! As always on machines with a firewall policy of ACCEPT, you should start by restricting the relevant port to the local machine until you're satisfied with the configuration. This should do it quite nicely: --- iptables -I INPUT -i ! lo -p tcp --dport 25 -j DROP ,,, The main configuration file for Postfix as a whole (as opposed to the daemon config file, which is master.cf) is main.cf, usually found in /etc/postfix. By default, Postfix should come configured to listen only on localhost, binding to the loopback interface in such a way that it doesn't accept connections from the wild. 
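You can verify this before changing anything - as root, netstat will show which address the SMTP port is bound to, and at this stage it should report 127.0.0.1:25 rather than 0.0.0.0:25:

---
netstat -tlnp | grep :25
,,,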
What we need to do is add the public IP to the inet_interfaces entry. Usually the entry is: --- inet_interfaces = localhost ,,, We change it to: --- inet_interfaces = localhost, 123.213.312.132 ,,, This enables Postfix to listen on the supplied IP. To make life easier, we'll also be making Postfix look for any variable info - such as added email addresses or domains - in files other than the main configuration file (main.cf). Let's see a dump of the additions required. You may want to append them to the file main.cf: --- # /etc/postfix/main.cf virtual_mailbox_domains = virtualdomain1.tld, virtualdomain2.tld virtual_mailbox_base = /path/to/mail/root virtual_mailbox_maps = hash:/path/to/postfix/virtual-mailbox-maps virtual_minimum_uid = 5000 virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 virtual_alias_maps = hash:/path/to/postfix/virtual-alias-maps ,,, There are seven configuration lines here. In the first, you tell Postfix which domains are virtual. We want everything except system mails to be virtual, so list any domains on that line that you would like to host. You're really telling Postfix that these domains should be handled by the Postfix Virtual Delivery Agent (man 8 virtual). Line two is where you specify the parent directory where all the emails will be stored. I suggest you specify something other than /var/spool/mail. The argument to hash: is a file with key/value pairs. The virtual_mailbox_maps directive is where you list the one-to-one mappings of email address to filesystem location. We'll get to that in a short while. Line five, virtual_uid_maps (yes, we skipped one; we'll get back to it right after this one) can be a variety of things. In this example, we went for a common UID for all email users, so we use the keyword static:, which in turn accepts one argument, the UID. Back to virtual_minimum_uid. You've probably guessed by now that it's a security constraint that restricts the UIDs used for mail files to a level above a certain threshold.
In our setup, we used a static UID for all users, but if we were using, say, hash, the virtual_minimum_uid would give us the security of knowing that any human errors in defining UIDs would be rendered harmless. Line six, virtual_gid_maps, is just like virtual_uid_maps, only for GIDs (group IDs). Now that we've set those two, let's create the directory in virtual_mailbox_base and change the ownership of that directory to reflect the settings we chose: in our example, user and group 5000. Note that we don't have to create the user or group on the system; it's optional. Line seven, virtual_alias_maps, points Postfix to the file where the virtual alias mappings are listed. Virtual aliases 'redirect' email messages meant for a virtual domain (see above) to any other destination. Back to the mailbox maps from line three: that file should contain pairs of email address/filesystem destination, such as: --- # /path/to/postfix/virtual-mailbox-maps account1@example.com example.com-dir/account1/ account2@example.com example.com-dir/account2 ,,, The first line tells Postfix to dump all emails addressed to account1@example.com in the directory /path/to/mail/root/example.com-dir/account1. The trailing slash makes Postfix use the Maildir format, which is recommended for most IMAP setups - check your POP3/IMAP service documentation. The real destination directory is the value of virtual_mailbox_base with the value from the file appended to it. You'll probably want the settings in /path/to/postfix/virtual-mailbox-maps to be checked when an email message comes in. For this to happen, you have to make sure that the domain in the recipient address is listed as a virtual domain (the virtual_mailbox_domains line). The last piece is the file specified in virtual_alias_maps, which, as far as our settings in main.cf go, is /path/to/postfix/virtual-alias-maps.
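The directory creation and ownership change just described, plus compiling the hash: map files into the .db form Postfix actually reads, can be sketched like this (the paths and the 5000 UID/GID are the example values from above; substitute your own):

```shell
# Create the virtual mailbox root and hand it to the static UID/GID
# used in main.cf (no matching system user is required)
mkdir -p /path/to/mail/root
chown -R 5000:5000 /path/to/mail/root

# hash: files must be recompiled with postmap after every edit:
#   postmap /path/to/postfix/virtual-mailbox-maps
#   postmap /path/to/postfix/virtual-alias-maps
```

Forgetting the postmap step is a classic cause of Postfix ignoring changes to these files.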
Let's alias postmaster@example.com and abuse@example.com to account1@example.com: --- # /path/to/postfix/virtual-alias-maps postmaster@example.com account1@example.com abuse@example.com account1@example.com ,,, This setup is at least compatible with Dovecot IMAP and POP3 servers; note that the mailboxes should be in Maildir format, not Mbox format. Other things to consider are: using dbm instead of hash; moving the setup to MySQL; using Postfix Admin; and setting up a POP3/IMAP server. Before we can revel in a shiny new daemon ready to pipe thousands of email messages to the world, we need two things. First, let's allow access to the port: --- iptables -D INPUT -i ! lo -p tcp --dport 25 -j DROP ,,, Then start it up with service postfix restart on a Red Hat-like machine, and I suggest you pay a visit to http://abuse.net to have your server tested for open relaying. If you need more guidance, you really should see www.postfix.org/VIRTUAL_README.html#virtual_mailbox Back to the list ****** Mandriva rebooting into text-mode Q:: Being a very, very new user of Linux and an octogenarian to boot, I decided to try Mandriva. Everything seemed to run very well until it was time to reboot the system. I was then faced with a black screen with a request to log in, which I did; but this didn't get me very far. I saw a question which seemed to reflect the same problem, so I typed in root and my password, and at the root prompt, typed Xfdrake as suggested in your reply. I got back 'Command not found'. I also tried less /var/log/Xorg.0.log to find out some more information and received the message 'No such file or directory.' A:: This is a fairly common problem with Mandriva (and previously with Mandrake) installations. Sometimes the installer does not automatically configure the X system for your graphical desktop, usually because it does not recognise either your graphics card or your monitor.
When this happens, it gives you very little warning, only a red 'not configured' next to the graphical interface section of the summary screen. The solution, as the Mandriva Linux special tries to tell you, is to log in as root and run the command XFdrake. Unfortunately, this appeared as Xfdrake in print, and commands are case-sensitive in Linux. We do apologise for the confusion this has caused. The person responsible for the letters F and f will be severely reprimanded. Back to the list ****** Expanding space on the root partition Q:: I had SUSE 9.2 installed on a new computer, and since the full install needed about 5GB, I reserved 6GB for the root partition and the rest of my hard disk for /home. Having OpenGroupware.org installed and a few other things that reside in /var, /opt or /usr, I'm running out of space in the root partition. I thought (being a photographer and storing all my pictures in my home partition), it would be a good idea to have a second hard disk installed as backup for my pictures and as an extension for my root partition. I want to divide the new hard disk into two partitions. One partition should be an exact and frequently renewed copy of the /home/mydir/mypictures directory. The other partition should be the new /usr directory (containing the old /usr directory and files from the original root partition). Any suggestions on how to do this? A:: Use YaST to partition your new drive into two partitions and format them with your filesystem of choice. If in doubt, accept the default of ReiserFS. It is vital that you set your new partition to be mounted at /newusr, not /usr, or you could stop your system working. Then do the following as root: --- rsync -a /usr/ /newusr/ umount /newusr rmdir /newusr ,,, Load /etc/fstab into your favourite editor, change the entry for /newusr to /usr and type mount /usr.
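After the edit, the relevant fstab line might look something like this (the device name is illustrative - assuming, for the example, that the new partition is /dev/hdb1 and formatted as ReiserFS):

```
/dev/hdb1   /usr   reiserfs   defaults   1 2
```

The only change from the installer-written line is the mount point: /newusr has become /usr.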
You are now using your new /usr partition, and need to remove the contents of the old /usr directory with --- mkdir /tmp/root mount --bind / /tmp/root rm -fr /tmp/root/usr/* umount /tmp/root df -h / /usr ,,, The last command should confirm that / now has plenty of free space. If you feel that you might need to alter your partitioning again, you should consider using LVM to handle it. You'll find full details at www.tldp.org/HOWTO/LVM-HOWTO/index.html. To keep an up-to-date mirror of your mypictures directory, I recommend rdiff-backup from www.nongnu.org/rdiff-backup. In addition to keeping a mirror, it also holds older versions of files you have deleted or altered - ideal if you change your mind. You can run it from a cron task to make the backups as frequently as you like. For example, you could save the following as a script in /etc/cron.daily: --- #!/bin/sh rdiff-backup /home/mydir/mypictures /mnt/backup/mypictures ,,, Back to the list ****** Locked out of /home directory Q:: I'm having a problem accessing my /home/stuart directory under Red Hat 8.0 and Debian Sarge, and was hoping you could help. I'm using three hard disk drives. Two hard disks (40GB and 20GB respectively) have been fitted into a removable caddy and take it in turn to be hda. (I'm swapping drives in order to familiarise myself with several distros.) Each holds /, /boot, swap, /tmp, /var and /usr partitions and has been formatted as ext3, while the fixed hard disk (hdb, 6.4GB) has only the /home/stuart partition and has been left with its old ext2 formatting as it contains files and programs which I want to retain. One removable hard disk holds Mandrake 10.1 and can access /home/stuart just fine either as root or as user 'stuart'. The other removable hard disk holds RH 8.0 and this is where the problem lies. When using RH and logging in as root, I can access files and programs in my home directory, /home/stuart, without any difficulty.
However, when I try to log in as 'stuart' the system displays the following error dialog: --- 'Please contact your system administrator to resolve the following problem: Could not open or create the file "/home/stuart/.gconf-test-locking-file"; this indicates that there may be a problem with your configuration, as many programs will need to create files in your home directory. The error was "Permission denied" (errno=13)'. ,,, I've also replaced Red Hat 8.0 with Debian Sarge (which installed without a hitch). But I now get error messages such as: --- 'Could not set mode 0700 on private per-user gnome configuration directory '/home/stuart/.gnome2_private/': Operation not permitted.' ,,, During installs, I'm always careful to ensure that the fixed hard disk is recognised (by the installer) and mounted but not formatted. The permissions are 711 on /home/stuart and 700 on /home/stuart/.gnome2_private/. A:: This is a permissions problem. Although you are using the same user name with both distros, they will have different numeric IDs - and the filesystem uses the numeric IDs to track user privileges. This is quite simple to fix, but you will then run into further problems. You should have a separate home directory for each distro, although all on the same /home partition. To fix the ID issue, you need to tell Debian to use the same numeric ID for 'stuart' as you do in Mandrake. The process is made a little more difficult, because you cannot access the Debian and Mandrake files at the same time. So boot Debian and type --- grep stuart /etc/passwd ,,, You should see something like this: --- stuart:x:1000:1000:Stuart Elliott,,,:/home/stuart:/bin/bash ,,, Make a note of the first number: this is your user ID (UID). Now boot into Mandrake, log in as root and load /etc/passwd into your favourite text editor. Find the line for 'stuart' and change the UID to the same as in Debian.
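As an aside, the UID is simply the third colon-separated field of /etc/passwd, so you can pull it out directly rather than reading the whole line. A small sketch:

```shell
# Print just the numeric UID for user stuart (prints nothing if absent)
awk -F: '$1 == "stuart" { print $3 }' /etc/passwd
```

Run it on each distro in turn; if the two numbers differ, that is your problem.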
The reason for doing it this way round is that Mandrake is unusual in starting user UIDs at 501, whereas most start at 1000. Save the modified passwd file and type --- chown -R stuart: ~stuart ,,, to change all files to the new UID. Now reboot into either distro and you should be able to access your files. You may still face problems, however, because you are using the same home directory to store config files for two different distros. If, for example, you have different versions of Gnome on each distro, the last one you use will overwrite the config files with potentially incompatible data. The safest approach is to have a separate home directory for each distro, say /home/stuart-mdk, /home/stuart-deb and so on. To do this, you need to mount the partition on hdb as /home instead of /home/stuart, and move the existing contents into per-distro directories inside it. Edit /etc/passwd on each computer to set the appropriate home directory. I would then create a /home/stuart-common directory for all the data you wish to share and link the directories in here to your distro-dependent directories. For example: --- ln -s /home/stuart-common/Mail /home/stuart-deb/Mail ln -s /home/stuart-common/Mail /home/stuart-mdk/Mail ,,, will allow you to access the same email in both systems. Use your existing /home/stuart directory as the basis for this directory. I have successfully used this method to run a multi-boot system without any of the corruption of desktop or personal settings that happened with a single home directory. Back to the list ****** Recording voicemails in a database Q:: Hi, I have an application running on a Microsoft-based system that allows me to record telephone voicemails directly into an SQL database. People can send messages to friends, who can either phone in to pick them up or receive them from our website (streamed back to them) or request them over email.
I've moved all the code from ASP to PHP, but I'm not certain how to handle the conversion from phone call to WAV file to database entry. This was the proprietary part of our old setup. Can you help? A:: This is typically done with a modem that allows digitised audio playback and incoming voice digitisation. Most modern internal and external modems do it now. It's just a matter of the software. On the server side, you probably want to look at mgetty-voice (for Linux): http://home.leo.org/~doering/mgetty/. There will be many ways of doing this, but mgetty-voice seems to be the standard underpinning for this type of service on Linux. Next, you need the logic/programmatic control of the voice modem. You could probably script something easily to work with mgetty-voice. But here's something that already does something like that, called vgetty_web. Just search for "vgetty" or "mgetty-voice" on sourceforge.net and check out the other stuff out there. There are some good 'how to' tips for setting up a Linux-based answering machine at http://ilug-hyd.org.in/lamhowto.html and http://frank.harvard.edu/~coldwell/answering_machine/. The sound files you generate as WAVs could be stored quite easily in a MySQL database in a binary field. It may be worth doing some sort of compression to MP3 or Ogg Vorbis to save some space. Back to the list ****** Set up a gateway with NAT Q:: I have been trying to help a friend set up a very basic gateway - I just need to NAT everything for him, but I've had no luck so far. I realise a complete script might be a lot to fit in, but could you get me started? A:: OK Vikram, here's a quick and dirty guide. I am assuming only that the mangle table is cleared and does not affect things.
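One thing the iptables rules alone won't do is make the kernel route packets between interfaces: IP forwarding has to be switched on as well, or nothing gets forwarded at all. The quick way is echo 1 > /proc/sys/net/ipv4/ip_forward as root; to make it survive a reboot, most distros read /etc/sysctl.conf at boot:

```
# /etc/sysctl.conf - let the gateway forward packets between interfaces
net.ipv4.ip_forward = 1
```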
Let's do it: --- # iptables -F INPUT # iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # iptables -A INPUT -i lo -j ACCEPT # iptables -P INPUT DROP # iptables -F FORWARD # iptables -P FORWARD ACCEPT # iptables -t nat -F # iptables -t nat -A POSTROUTING -j MASQUERADE ,,, I don't like the last rule that much: it is the quickest way to do it but it's too dirty. If you have a static IP on the gateway, replace it with: --- # iptables -t nat -A POSTROUTING -o <externalInterface> -j SNAT --to-source <externalIP> ,,, If you have a dynamic one, just run: --- # iptables -t nat -A POSTROUTING -o <externalInterface> -j MASQUERADE ,,, You should replace <externalInterface> with the interface name facing the internet, ie eth0, eth1 or whatever. Back to the list ****** Mandriva freezing on boot Q:: I recently decided to give Linux another try. I partitioned my drive into three logical drives: FAT32, Linux native, and a 2GB Linux swap partition. I installed Windows first, and then tried to install Mandriva. All went well until after the boot. During startup it would always freeze. I put it on verbose mode and could see that the freeze came after initiating the swap partitions. I tried repeatedly to repartition, reformat and so on - but to no avail. I concluded that there must be something wrong with my hard drive. And so I waited till pay day and went and bought a new hard drive. I still have the same problem, though. Installation runs smoothly, the main native drive is mounted cleanly but as soon as it comes to the swap partition it hangs. I have tried to format the partition and check it for errors with PartitionMagic running from Windows, but now when I boot up it still comes up with an error. Do the partitions need to be in any specific order? Must I have a mount point for the swap partition? What could the problem be? A:: It is impossible to say exactly what is wrong without knowing the error message you get when booting.
The swap partition must be correctly formatted and have an entry in /etc/fstab. The most likely culprit in this situation is PartitionMagic. It is known to cause problems when used to create Linux partitions. The solution is not to create your partitions in PartitionMagic, but let the distro's installer do it for you. This is especially true with Mandriva, as it has an excellent partitioning tool, but most of them are good nowadays. Instead of letting PartitionMagic create the Linux partitions, just leave the blank space on the disk for the distro installer to use. You may need to create a single FAT partition on the disk, covering the space you want to use for Linux, as some versions of PartitionMagic don't like leaving unallocated space. Once you have done this, delete the partition and let your distro's installer do its stuff. This has two advantages: the partitions are created by the same system that will be using them, so you can be confident that it will create them correctly; and the installer will set up the correct mounting for them. It may be as simple as the swap partition not having been formatted correctly, but as you have no data to lose, I would recommend deleting the partitions and allowing the Mandriva installer to set them up for you. Back to the list ****** SUSE not switching on NETGEAR card Q:: I'm running two Toshiba laptops. The first is a Toshiba Satellite 522 with a built-in Atheros AR5001 wireless adaptor that conforms to the 802.11a/b specifications, the second a Toshiba 8000 with a NETGEAR WG511 wireless card. SUSE 9.3 recognises the NETGEAR card but does not switch it on! The ADSL wireless router is a NETGEAR DG834G. Assuming a clean, out-of-the-box installation of SUSE 9.3 with no configuration, what are the actual individual steps I need to take to configure the hardware? A:: SUSE 9.3 has support for the prism54 chip used in the NETGEAR WG511 and it is enabled by default.
Unfortunately, the specification of the prism54 chip was changed recently and it is not possible to make the driver work with the newer version. This is made worse by the fact that you cannot tell which chip a card has until you try to use it. Manufacturers have an annoying habit of changing the chip on which a product is based without changing the model number... The new chip requires some firmware code in the driver - code that the open source community cannot use. So in their ever-resourceful fashion, they found another solution, a program called NdisWrapper, which uses the code from the Windows driver supplied with the card. The first step is to install NdisWrapper from YaST. You can now use the driver from the CD that came with the card or download the most recent, tested driver. To use the supplied driver, find the directory on the CD containing the drivers (there should be a .inf and a .sys file for your card in there). As root, type --- ndiswrapper -i /path/to/driver.inf ndiswrapper -l modprobe ndiswrapper ,,, The first command installs the driver and the second shows the installed drivers, so you can see if the installation worked. The third command loads the driver, and your card should start working. You can now configure the card in YaST's Network Devices > Network Cards section. It should be enough to leave this on Automatic Address Setup and let the card get its address and connection information from your router. If you have problems with the drivers on the CD, follow the instructions at http://ndiswrapper.sourceforge.net/mediawiki/index.php/Installation#Install_Windows_driver to download a new version. This procedure should also work with the Atheros adaptor in your other laptop. Back to the list ****** Updating Nvidia kernel modules Q:: I work for a web design company specialising in websites for graphic design companies.
We have programs that manipulate large multimedia files and use the graphics processor unit functions on the NVIDIA card we currently use, but we find ourselves having to compile the drivers every time we recompile the kernel - which we do quite often. Then the drivers for the old kernel stop working if we roll back to it, and we have to install them again. Can you help? A:: The NVIDIA drivers are frequently updated (this is a good thing), but your problem is really related to the split nature of the code - there's a software driver and a kernel module. Fortunately, you can add one without the other. Installing just the kernel module should mean that you can keep the old, compatible software driver and have it work from whatever kernel you are running. Download the appropriate driver packages from the NVIDIA site and try the following: --- sh NVIDIA-Linux-x86-1.0-7667-pkg1.run -a -K ,,, This should install the module, but leave the driver code the same. Back to the list ****** Installing Kanotix to the hard drive Q:: I have burned a Kanotix ISO, and it booted quite happily on my second system. I really like the look and feel of it as well as all the abundance of software installed. But my question is, how do I install it to a hard disk? Booting from the CD is kind of slow, and I really want to learn more about this awesome OS. I reviewed the docs and they said something about a ToHd=/dev/hdXX command line parameter, but I couldn't get it to work. The desktop I am using has a single ATA/100 HDD (/dev/hda) and a DVD burner (/dev/hdc). Oh, and can you also recommend a good distro for an older laptop (P3 500 Dell 256MB RAM) that will support PCMCIA wireless? I have tried Fedora Core 2 and 3, SUSE 9.1 and three different PCMCIA wireless cards (Buffalo, D-Link and Linksys) with no luck. A:: First you need to prepare your hard disk.
To start the partitioning tool, press Alt+F2 and type: --- sudo qtparted ,,, Here you can resize existing partitions and create new ones in the space you create. You will need at least two partitions, for / and swap. Your / partition needs to be at least 3GB. Now save your changes, exit QtParted, press Alt+F2 again and type: --- sudo kanotix-installer ,,, Select the Configure Installation option and follow the instructions to have everything installed in a few minutes. If you need to do any further configuration after installation, go into the Kanotix section of the KDE menu. The installer did not set up networking on our test system, but it only took a couple of clicks in here to get back online. Now to answer your second question. Slackware and Debian are generally considered good choices for older hardware. The wireless configuration is largely independent of the distribution, although some distros have better graphical configuration tools than others. Are these 11Mbps or 54Mbps cards that you've tried? Direct support for the latter can be patchy, and you may need to use NdisWrapper and the Windows drivers. See http://ndiswrapper.sourceforge.net for more information. Back to the list ****** Looking for a lightweight Linux distro Q:: I have got a laptop computer and want to run Linux on it. At the moment it has Red Hat 8 but it is so slow. These are the laptop's specs: --- Processor Intel Pentium 133 Memory 48MB Video 2MB controller chip C&T HIQV32 (CT65550) Hard drive size 2.02 GB CDROM ,,, Which Linux would go best? A:: I am not surprised to hear that Red Hat ran slowly on this machine. Red Hat uses the Gnome desktop, which is quite resource-intensive. I imagine it would run as smoothly as frozen treacle in 48MB of RAM. You need a lightweight distro specifically targeted at older hardware. Vector Linux would be a reasonable choice. It is based on Slackware, which is popular for lower-end hardware, and comes with the IceWM desktop. 
While this is a lot less demanding than Gnome or its equally memory-hungry alternative, KDE, it still boasts a good range of useful features. For better performance, but slightly fewer features, FluxBox may be a more suitable choice. You get to pick which window manager you want during installation. Although suitable for older hardware, Vector Linux is not old software. The latest release, 5.1, is only a couple of weeks old at the time of writing. You can get Vector Linux from www.vectorlinux.com. Make sure you download the 5.1 release and not the 5.0 SOHO version: that uses KDE, and would run about as slowly as Red Hat on your laptop. If you can add more memory to this system, I would strongly advise you to do so. Linux works best when it has plenty of elbow room, so doubling the memory would make a lot more difference than a faster processor. Your hard drive size is also fairly limited, especially as you will need some swap space; so do not install anything more than you need. Just the core software and desktop should be enough to get you going. Back to the list ****** HyperTerminal-like program for Linux Q:: Please can you tell me if there is a program such as HyperTerminal available for Linux? A:: There is, although it is more common to use a remote login over an internet connection these days. For a dial-up terminal, there are a couple of options worth mentioning. Minicom (http://alioth.debian.org/projects/minicom) is the most commonly used. It runs in a console, so may not be quite what you are looking for. Komport is a GUI-based dial-up terminal for KDE. It is available from http://komport.sourceforge.net as source code and packages for RPM and Debian. Back to the list ****** Replacing Sendmail with Postfix Q:: I am running Sendmail with SendmailAnalyzer and a custom web interface, and I would like to replace the whole lot with Postfix and add-ons. Can you provide me with at least a direction to follow?
A:: I can do one better: we will be setting up Postfix to use a database to store user credentials, in such a way that it works with Postfix Admin (http://high5.net/postfixadmin) out of the box. As you may know already, the start of the magic happens in main.cf. Let's make the app even more efficient in its lookups, and also open the door to having POP3/IMAP daemons look up password information in the database. A lot of config files use the hash: argument to specify files where we have name/value pairs. We should change all the relevant entries to include a new type - mysql:. mysql: is a highly configurable driver that allows us to set up a database and its tables however we want. You can then configure it to fetch the info you need from the structure you created. It will all become clearer after we go through the first example. For now, just think of it as a pointer to a location that says, "Look up info from a MySQL database". The first line we should change in the main.cf file is the virtual_mailbox_domains line. It should now read: --- virtual_mailbox_domains = mysql:/etc/postfix/mysql-virtual-mailbox-domains ,,, Now, the path to mysql-virtual-mailbox-domains is only a suggestion; the file can be wherever you want it to be. In the file mysql-virtual-mailbox-domains, we get to tell the mysql: argument where and how to get the info it needs, rather than the file itself being the repository of the info. Ah! Things are getting more interesting! The file mysql-virtual-mailbox-domains should contain at least: --- hosts = unix:/var/lib/mysql/mysql.sock, 127.0.0.1 user = postfixadmin password = p0stf1xadm1n dbname = postfix table = domain select_field = description where_field = domain ,,, These parameters should be in any and all files that configure the mysql: argument. The hosts directive allows us to specify which database server we want to use. Here, we chose to list the Unix socket and the IP address. mysql: will try the locations we set from left to right until it is successful.
The user directive is the database user that mysql: should connect to the database as; password is that user's password; dbname is the database name; table is the database table. The other two directives require a little bit more explanation. In an attempt to do just that, let me show you how mysql: actually uses these directives to query the MySQL database. The exact query constructed by mysql: is: --- SELECT select_field FROM table WHERE where_field = 'magicValue' ,,, which translates to: --- SELECT description FROM domain WHERE domain = 'magicValue' ,,, And this query is submitted to database postfix on a database server listening on Unix socket /var/lib/mysql/mysql.sock. If that fails it will be submitted on 127.0.0.1, with user postfixadmin and password p0stf1xadm1n. description, domain and domain are values from the file above. However, in this case magicValue is the domain part of the recipient address of the email message being processed. Let's check another directive from main.cf. --- virtual_mailbox_maps = mysql:/etc/postfix/mysql-virtual-mailbox-maps ,,, and the file mysql-virtual-mailbox-maps: --- hosts = unix:/var/lib/mysql/mysql.sock, 127.0.0.1 user = postfixadmin password = p0stf1xadm1n dbname = postfix table = mailbox select_field = maildir where_field = username ,,, The query in this instance would be to the same database server with the same username/password, but constructed differently, as per the file configuration: --- SELECT maildir FROM mailbox WHERE username = 'magicValue' ,,, magicValue in this instance would be the full recipient address in the email being parsed. From last month's configuration, if magicValue is account1@example.com then the returned value would be example.com-dir/account1/. Don't worry if you don't understand this fully: I am replicating a setup that will work with Postfix Admin.
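Incidentally, you can test these lookups from the command line before sending any mail: postmap's -q option queries a map exactly as Postfix would (the domain and file path here are the example values from this setup):

```
postmap -q example.com mysql:/etc/postfix/mysql-virtual-mailbox-domains
```

If the domain is in the database you'll get the description column back; if you get an error instead, fix the map file before going any further.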
Once that is set up, you will be able to use that interface to easily add example domains, then real domains, then check the database for how the data is laid out to make sense of it all. This is the missing part from the Postfix Admin documentation, ie INSTALL.TXT (http://high5.net/postfixadmin/index.php?file=INSTALL.TXT). The remaining main.cf entry that should be changed is --- virtual_alias_maps = mysql:/etc/postfix/mysql-virtual-alias-maps ,,, And the respective file contents of mysql-virtual-alias-maps: --- hosts = unix:/var/lib/mysql/mysql.sock, 127.0.0.1 user = postfixadmin password = p0stf1xadm1n dbname = postfix table = alias select_field = goto where_field = address ,,, And that's it! The unchanged entries in main.cf are: --- virtual_mailbox_base = /path/to/mail/root virtual_minimum_uid = 5000 virtual_uid_maps = static:5000 virtual_gid_maps = static:5000 ,,, One caveat though. If you get the following error once you have set everything up: 'fatal: unsupported dictionary type: mysql' then your package lacks MySQL support. On an RPM-based distribution you can edit the spec file and change --- %define MYSQL 0 ,,, to: --- %define MYSQL 1 ,,, then rebuild the source RPM and install. Your Postfix setup should now be compatible with Postfix Admin. I suggest that you install it, play around with it and check the database to see where all the info is going and everything will be crystal clear! Back to the list ****** Triple booting Windows, Fedora and Knoppix Q:: I have a 40GB hard drive split into two 20GB partitions for Windows XP and Knoppix. I made a new 10GB partition and installed Fedora, wanting a triple boot so I could choose Windows, Knoppix or Fedora. When I rebooted after installation, only Fedora and Windows were on the boot menu, not Knoppix. I went into Fedora, and Knoppix is still there on hda6. I installed Knoppix again hoping that it would add Fedora to the boot menu, but it did not. Now it only shows Windows XP or Knoppix.
Is there a way to add Fedora to the boot menu so I can pick XP, Knoppix, or Fedora? My setup is as follows: --- hda1 Windows XP hda3 Fedora hda6 Knoppix ,,, A:: This is a fairly common situation when installing a second Linux distro. The installers are good at detecting an existing Windows system and adding the relevant option to the boot menu, but very few will pick up on other Linux installations. The result, as you have already discovered, is that your previous distro is still installed, but there appears to be no way to boot into it. As a short-term solution, you can usually use the installation disc in rescue mode to boot your installation. Pressing F1 after booting from the disc usually shows the options. The long-term solution is to add an entry to your current boot menu for the hidden distro. This is made easier here because both Knoppix and Fedora use the Grub bootloader. You need to load the Grub configuration files from the two distros into a text editor. First you will need to mount the other distro's partition with --- mkdir -p /mnt/fedora mount /dev/hda3 /mnt/fedora ,,, Then select System > More Applications > File Manager - Superuser Mode from the KDE menu, navigate to /boot/grub and load grub.conf. Now go to the same directory in your Fedora setup (/mnt/fedora/boot/grub) and do the same. Highlight the three lines in the Fedora file starting with 'title Fedora' and copy them to the Knoppix file. Save and reboot and you should get your three-way choice. If the Fedora installer used LVM (Logical Volume Manager) when partitioning the disk, you might not be able to access your Fedora files from within Knoppix, as Knoppix doesn't support LVM. In this case, you need to perform the process from Fedora, so you'll need to first reinstall the Fedora boot loader from the installation disc. Start the installer as before and select the Upgrade An Existing Installation option. Select the Update Bootloader Configuration option to reinstall the Fedora bootloader.
Now let the update finish, boot into Fedora and copy the relevant section of the Knoppix boot menu to that of Fedora Core. If you are interested in the various options for triple (and more) booting with Linux, there is an excellent reference at http://home.planet.nl/~elst0093/motub/multboot.html. Back to the list ****** Migrating a server's DNS, websites, home directory and email Q:: I have had a dedicated Linux server with a hosting company for about six years or so. As part of their customer-retention strategy they have arranged for me to upgrade to a newer server with better bandwidth allowance, SLA [service level agreement] and so on. The server is used by my very small business for email, web hosting, a testing server and managing DNS for domains that we own. A few friends and family also use the server for their email and web hosting. Is there a best-practice way of transferring DNS, websites, home directory files and email (Sendmail) to the new server with minimal disruption for myself and other users? I cannot afford to pay the hosting company's staff to do the transfer for me. A:: Server migration, especially when changing distribution and versions (MySQL, Apache and the like), is never a straightforward procedure. However, it is a good opportunity to revise your current setup, and just as when moving house, you will come across a lot of stuff that can be thrown out. Also, this is a perfect opportunity to audit all your configurations and create and test backup and disaster recovery strategies. On the new server, set up and test all the zone files, email configurations, user accounts and websites. You can trick your workstation into believing that the sites are on the new IP by modifying your hosts file (/etc/hosts on Linux, c:\windows\system32\drivers\etc\hosts on Windows XP). DNS is the first target for migration. Point NS records at the registrar to the new name servers at least three days ahead of the planned switchover.
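The hosts-file trick mentioned above takes just one line per site. This is only an illustration - 203.0.113.10 stands in for your new server's IP:

---
203.0.113.10 www.example.com example.com
,,,

Remove the entry once the real DNS records have switched over, or you may not notice later problems with the live DNS.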
Reduce the TTL ('time to live') on your new records to a reasonable 35 minutes, so that on IP switchover visitors' DNS caches will drop their stored records of your IPs and pick up the new ones in no time at all. Set the TTLs back to a more internet-friendly value, say 24 hours, a few days after the migration. Depending on your budget and how critically reliant your sites are on a database back-end, you may have to take extra precautions over the way you move your databases. Finally, tend to your log files and web statistics. They're often forgotten during migrations. Back to the list ****** Configuring the kernel for IPTables support Q:: Thank you very much for including Gentoo 2004 with your last issue - what an amazing distribution, and what amazing documentation! I have an x86 machine connected to the net through a Draytek Vigor 2600G ADSL Modem/Router. This is also a four-port 10/100 switch with wireless capability and a firewall - quite expensive but well worth it! The ADSL service is PPP over ATM (PPPoA) and my interface is an Intel Pro/1000 MT Desktop Adaptor referenced as eth0. By the way, I statically compiled e1000 support into my kernel. In light of the fact that Gentoo also provides a separate e1000 module/package, was this a good decision? I have yet to find any issues with my setup. Safe in the knowledge that I was protected by this comprehensive firewall, I've only just begun to look at IPTables, and here's my problem. First of all, I'm a little lost as to how to configure my kernel (linux-2.4.26-gentoo-r6) for IPTables support. There seem to be several incompatible options here that I can't fathom. Secondly, if you compare some of your previous FAQs on your help pages, such as IP security, firewalls and Linux and the Internet, with some documentation I found at http://gentoo-wiki.com/HOWTO_Iptables_for_newbies, you'll notice a little difference in the number of rules and amount of detail given. I hope you're not as lost as me upon viewing the latter!
Presently, I'm fearful of tinkering before understanding things more, so please help! A:: You should be able to get going with IPTables simply by running iptables -nvL from the command line. This will list the three basic chains of the 'filter' table that you can configure to block traffic. Gentoo's kernel comes with IPTables support as default, although if you've compiled your own kernel with support for the Intel Pro/1000 NIC, you may want to compile IPTables into the kernel rather than using modules. Generally, it's a good idea simply to compile all of the options into the kernel because it can be very frustrating to have to reboot a firewall simply to add support for a particular IPTables feature. The documentation from Gentoo contains a very complete firewall configuration, which is beyond the needs of the vast majority of users. The script is useful because it allows for easy modifications to permit access to and from specific ports, making it a great starting point for anyone building a complex firewall. Back to the list ****** Booting from external USB and CompactFlash devices Q:: Regarding booting from an external USB or CompactFlash device - I have four questions: 1. Is it possible to boot from a USB flash drive if your BIOS doesn't support USB booting? 2. Can a boot CD with Smart Boot Manager pass off the process to the USB? 3. Is it possible to fit an uncompressed Knoppix or Kanotix CD to a 2GB CompactFlash or USB? 4. What parts of the system would be better off on the hard drive, eg swap or logs? A:: I'll answer each of your questions in turn. 1. Yes, but you will need a boot floppy to do this. For example, Puppy Linux has a floppy disc image on the website that can be used to boot it from a USB flash disc. See www.puppylinux.org. 2. No. Smart Boot Manager does not support booting from USB devices. 3. It should be, although you shouldn't uncompress the image.
USB reading is quite slow, so any reasonably powerful machine would probably uncompress faster than it reads, making the compressed image faster. There are instructions for installing Knoppix on to a 1GB (or larger) flash disc at www.knoppix.net/forum/viewtopic.php?p=64999#64999. Alternatively, Damn Small Linux can be installed on a USB device, and DSL is a cut-down Knoppix. So you should be able to put a full Knoppix on to a 1GB flash disc by booting DSL, running sudo dsl-usbinstall to install it to the USB device and then replacing the knoppix/knoppix file with the full-sized one from a Knoppix CD. 4. Anything that involves writing, especially frequent writing like swap and log files, should not be placed on the flash device. Flash memory has a limited write lifetime, typically between 100,000 and 1,000,000 writes (and as little as 10,000 writes for older devices). If the system is continually writing to the same area of memory, it could fail in a few months. Your choices are: to put these on the hard drive, which limits portability; to write logs to a RAMdisk and copy it back to the flash drive on shutdown, which would limit the writes to one per session; or to use the JFFS2 filesystem from http://sources.redhat.com/jffs2. This is a special filesystem designed to address this problem with flash devices. If you want a ready-made distribution for a flash drive, try Flash Linux from http://flashlinux.org.uk. Back to the list ****** Installing Mandriva software from the CD/DVD drive Q:: I have very recently installed Mandriva 2005 LE. The install went very well for a first time Linux user. But I have a problem with installing some of the additional programs. Sometimes I am informed that Mandriva needs to have my DVD in drive hda (my CD drive) and I must then hit Enter. When I put it in, the disc is immediately ejected - not surprising as it is a DVD! If I try putting the DVD in the DVD drive instead and hit Enter, the disc is again ejected. 
If I then simply try to abandon that task I am forced to end the session to do so. My /etc/fstab contains --- /dev/hda /mnt/cdrom auto umask=0,user,iocharset=iso8859-15,codepage=850,noauto,ro,exec,users 0 0 /dev/hdc /mnt/cdrom2 iso9660 user,iocharset=iso8859-15,noauto,ro,exec 0 0 ,,, A:: This occasionally happens with Mandriva installations. Despite the installer running from the DVD drive, it will sometimes add the path to your CD-ROM (or CD-RW) drive to its list of software sources. In your case, it is looking at /mnt/cdrom when your DVD is mounted at /mnt/cdrom2. Fortunately, the solution is dead simple. Put the DVD in the correct drive and start the Mandriva Control Center. Go into the Software Management section and select Media Manager. This will show you a list of software sources - possibly only the DVD. Click on the DVD entry, press Edit and change the URL entry from removable://mnt/cdrom/media/main to removable://mnt/cdrom2/media/main, that is, the path to your DVD drive. Click on Save Changes and all should be well. Back to the list ****** Maintain a persistent SSH connection Q:: I have to SSH into three Red Hat servers at one desert outpost (I work for an oil company). The TCP/IP connection to the field servers is an unreliable internet-over-satellite link. Very often I find myself losing connectivity halfway through an operation, and if I leave a session open for more than 15 minutes, the satellite router (to which I have no access) rudely drops my connection. I know that I can run most applications in the background but I am looking for a solution to maintain a persistent connection. Do you know if there is a budget solution that I can implement? A:: Yes, there is. The Nohup command runs a command immune to hangups, with output to a non-TTY, while Screen is a full-screen window manager that multiplexes a physical terminal between several processes (typically interactive shells). I am a big Screen fanatic.
Once inside Screen, you create a new window with Ctrl+A C, and list and select active windows with Ctrl+A " (note that's a double quote). Ctrl+A D detaches from Screen, and you can reattach by invoking screen with the -r parameter. If your session drops and you want to reattach to a screen that hasn't been properly detached, invoke as screen -x. Amazing! Back to the list ****** Access Windows FAT filesystems in SUSE and Fedora Q:: I have an old but dependable Compaq Deskpro EN (733MHz, 488MB of RAM) which I've set up with a 10GB hard drive with Windows XP. I also have the My Documents directory including all my music files, and my SUSE Linux 9.2 installation, on a 40GB hard drive. This allows me to completely max-blast Windows when I want to start again without affecting my documents or Linux. My problem is that SUSE cannot see either of my Windows directories - when I look under storage devices it shows only the Linux filesystem. I've tried Mandriva and Fedora, but the same thing happens. I really want to be able to listen to my music in Linux (and Windows) without changing my Windows/documents partition to FAT32. Any ideas? A:: If you installed SUSE after Windows, the installer should have picked up your Windows directories and added them to /etc/fstab, so they would be mounted on boot. If this is not the case, you need to add them to /etc/fstab manually. This has to be done as root, so open a terminal, type sux - to log in as root and give your password. Use whatever editor you prefer to change the file, for example: --- sux - ,,, <enter root password> --- kwrite /etc/fstab ,,, You need to add two lines to the end of this file. This example assumes your Windows NTFS partitions are the first primary partition on each drive. Otherwise, change the device names accordingly.
--- /dev/hda1 /windows/C ntfs ro,users,gid=users,umask=0002,nls=utf8 0 0 /dev/hdb1 /windows/D ntfs ro,users,gid=users,umask=0002,nls=utf8 0 0 ,,, Save the edited file, then create the mount points. Mount the partitions and check that they are mounted. --- mkdir -p /windows/{C,D} mount -a df -h ,,, Provided the mount command gave no errors and df showed the partitions, you should have full read access to them. Write support for NTFS is very limited in Linux, so Windows partitions are usually mounted read-only. If the mount command gave an error along the lines of 'wrong fs type, bad option, bad superblock...', double-check the options you typed into /etc/fstab. Back to the list ****** fsck command asking to repair manually Q:: I wonder if you can help. I am running Ubuntu and during boot it says the following: --- 'fsck failed. Please repair manually. * CONTROL-D will exit from this shell and continue system startup. root@(none)::~#'. ,,, Ctrl+D does indeed continue the boot process and brings me to the login screen. Logging in gives the error message: --- 'Your home dir is listed as: /home/john, but it does not appear to exist. Do you want to log in with the / (root) dir as your home dir. It is unlikely that anything will work unless you use a failsafe session'. ,,, A:: Despite what it sounds like, Fsck is not a chain of clothes shops, nor a strange Linux curse. It is the FileSystem Check program, which performs a similar function to the likes of Scandisk. The first message means that the boot process has detected an error on one of your partitions that needs your attention. It wants you to do this before pressing Ctrl+D to continue. By pressing Ctrl+D straight away you have left it in its faulty condition, so the partition could not be mounted. Presumably, this partition is mounted at /home, which explains why /home/john cannot be found when it fails to mount. The error message should have told you which partition was affected, for example /dev/hda6.
If not, typing --- grep /home /etc/fstab ,,, will tell you which it is. Now type the following code, replacing 'N' with the partition number: --- fsck -f /dev/hdaN ,,, After some disk activity and various screen messages, Fsck should exit without an error. To be safe, I prefer to run the command a second time, to make sure things really are fixed. Now you can press Ctrl+D to continue with the boot process. If the problem persists, it is likely that your disk has a fault. Your first action should be to back your data up, now - not tomorrow. Then you should install the Smartmontools package - from http://smartmontools.sourceforge.net or an Ubuntu or Debian repository, and run --- smartctl -a /dev/hda ,,, for a report on your disk's health. Back to the list ****** Virtual terminals disappeared from Fedora Q:: I have just installed Fedora and have one quick question - where have all of the virtual terminals gone in this release? Pressing Ctrl+Alt+F1 to F6 just gives blank screens. A:: This is a known bug. The fault lies in the file /usr/X11R6/lib/modules/libvgahw.a, so move this somewhere safe (don't delete it, just in case) and replace it with the same file from Fedora Core 3. If you do not have FC3 installed, you can get the file from http://rapidshare.de/files/2399145/libvgahw.a.html. This is advertising-financed web space, so you will need to scroll down to the bottom of the page, click the Free button then scroll to the bottom of the next page for the actual download link. Copy the replacement file to /usr/X11R6/lib/modules/ and reboot. Your virtual terminals should now be back. Back to the list ****** Notify if a Linux machine has been broken into Q:: I have recently been humiliated by my ISP for spamming. It turned out that a forum I had set up on my home box had been hacked and I was mass mailing the whole world. That issue has been fixed but now I'm losing sleep fearing that my server is a zombie.
Do you recommend tools or frameworks for quickly and reliably telling if a machine has been broken into? A:: While there is no short answer to that, there are some simple steps that can be followed to reveal most common scripted break-ins. 1. Use ls -lai to determine whether there are any files in /tmp and /var/tmp that shouldn't be there. In particular, watch out for executables, scripts and text files that are full of email addresses. 2. On RPM-based systems it is possible to verify whether system utilities such as Ls, Ps, Netstat and so forth have been replaced with ones that hide the hacker's activity. On a Red Hat-based system, the following packages should be verified using --- rpm -V <packagename> ,,, where <packagename> is each of util-linux, coreutils, net-tools, procps and lsof. 3. Check running processes with ps auxf. 4. Use netstat -tanp to find out whether there are processes listening on strange ports, or inexplicable amounts of outgoing traffic. The -p option shows which program is being used. Very often this is named in a way to make it look like a legitimate program (such as httpd). The lsof command can also list listening ports. 5. Review /etc/passwd to see if any users have been added to the system or have had their UID changed. It's a good idea to compare it to a known clean copy of the password file. 6. Check Apache log files for tell-tale signs of exploits where utilities such as Wget were used to download some form of malware. Check other system log files for anything suspicious - in particular, for log files that have been redirected to /dev/null. 7. Finally, Chkrootkit (www.chkrootkit.org) checks your server for signs of rootkit presence. Back to the list ****** Cable modem connection not working after booting up Q:: I just installed SUSE 9.3, and am having a couple of networking problems. I've got the system all running smoothly so far but I'm trying to share my internet connection with two other computers running XP.
I'm also trying to have my Linux machine identify my XP network for mapping drives and viewing shared folders and so on. I have a problem with my cable modem: now and again my internet connection does not work after booting up. Can you recommend a good book that covers issues like this without too much technical stuff? A:: You need two network interfaces on the computer - one to the local network and one to the internet. The former would be your local Ethernet, the latter your cable modem, which could be Ethernet or USB. You can set up connection sharing from SUSE's Yast. First you need to make sure your internet connection is working properly on the SUSE machine, then turn on routing by going to Network Services > Routing in Yast and ticking the Enable IP Forwarding button at the bottom. Press Finish and it's all done. Now you need to tell the other computers where to look for their internet connection. On each computer, set the gateway address to the IP address of the local interface on the SUSE computer. Browsing shared folders on your Windows machines is easy if you use the default KDE desktop. Open a file manager window and type smb:/ (that's a single slash) in the location bar. You'll see a list of your workgroups (usually only one) and you can browse through here to access the various shared folders. If you want to share folders to be accessed from the Windows computers you will need to set up Samba. Go to Internet & Network > Samba in the KDE Control Centre, click the Administrator button for root access and set up any directories or printers you wish to share. Make sure your workgroup name, in the Base Settings tab, is the same as on the Windows boxes. Finally, your intermittent cable modem problem may be a timing issue. Is this a USB modem? Does it have an Ethernet option? If the answer to both is yes, it would be best to add another Ethernet card to your computer and connect the modem that way.
If you are stuck with USB, it is likely that the connection is not coming up fast enough to be ready when the computer boots. In this case, the quick solution is to unplug and reconnect the modem from your USB port. This should force it to reconnect to the ISP. As for reading material: do you have a boxed version of SUSE? The SUSE manuals are some of the best around, and have the advantage of being specific to your distro. Back to the list ****** Limit user to only restarting Apache Q:: My web developer has been granted access to FTP and SSH into a dedicated server that we are renting. He can upload pages and manage MySQL together with an Apache include file for the server's site-specific configurations. Since our company's security policies dictate that we cannot disclose the root password to a contractor, we are being called by the developer to restart Apache a number of times a day, which is not ideal. What do you recommend? A:: If you are running Webmin you will be able to create a user that is restricted to doing nothing but stopping and starting Apache. First, create a new user through Webmin > Webmin Users and select Apache Webserver. Click on the Apache Webserver link to restrict access specifically to whichever aspects of Apache administration the contractor needs. Alternatively, if command-line access is preferred, Sudo becomes the way to go. It is likely that a copy of Sudo (www.sudo.ws) came preinstalled with your distribution. The sudo command allows certain users or groups to execute a number of commands as root or otherwise specified. The configuration file /etc/sudoers, editable through visudo as root, defines who can do what as whom. The configuration itself can be a bit daunting, and time spent reading the man pages is time well spent. Here is a simplified configuration that can be used to allow user 'webman' to execute the Apache and MySQL startup files.
The user will also be able to kill, as user 'apache', any renegade process belonging to user 'apache': --- Cmnd_Alias HTTPD = /etc/rc.d/init.d/httpd Cmnd_Alias MYSQLD = /etc/rc.d/init.d/mysqld Cmnd_Alias KILL = /bin/kill webman ALL = (root) NOPASSWD: HTTPD, MYSQLD webman ALL = (apache) NOPASSWD: KILL Usage: $ sudo /etc/rc.d/init.d/httpd stop $ sudo -u apache kill 9982 $ sudo /etc/rc.d/init.d/mysqld restart ,,, This should set you straight. Back to the list ****** Apply GTK themes to KDE apps Q:: I've been using Linux for a while, but there's something I've been puzzling over that I've never worked out. I use KDE for my desktop, but I still use some GTK apps such as Gimp. Is it possible to apply GTK themes to GTK apps running under KDE? If so, how? A:: Yes, you can use Gtk-chtheme to preview and select GTK themes. The program is available from http://plasmasturm.org/programs/gtk-chtheme as source code or RPM packages. There's also a Debian package available from the various Debian repositories. An alternative solution is a module for the KDE Control Centre that adds a panel for GTK Styles And Fonts to the Appearance & Themes section. You can get this from www.freedesktop.org/Software/gtk-qt. This module allows you to select a theme in the same way that Gtk-chtheme does, or apply your KDE theme to GTK applications. I use both programs, because the KDE Control Centre module has no preview facility. I'd use Gtk-chtheme to browse newly installed themes, or the KDE Control Centre when I know which one I want. You may as well install both - some distros come with the KDE module pre-installed - and make up your own mind. Back to the list ****** Mouse problems in Fedora Q:: I just received your December issue with the Mandrake 10.1 disks. I've installed the system, but for some reason it just won't boot. My computer is home-assembled with an AMD CPU, an ATI Rage 128 video board and a two-button mouse on COM2.
I have two hard drives: one has SUSE 9.0 and Windows ME in a dual-boot and another that's used for experiments, such as the Mandrake installation. The experimental OSes are run from /dev/hdb. I tried Mandrake 9.2, but the video was distorted and it ran like treacle. I have a feeling that the ATI board may be the problem visually. I tried Fedora, but apparently it couldn't stomach a mouse on COM2 (at least, it never found it) so I couldn't use that either, and I never found out how to tell it where the mouse was. Now Mandrake 10.1, when selected from GRUB, simply returns to reboot. I managed to get something to 'take' by using the SUSE vmlinuz and initrd files instead, but there were too many errors for it to finish. Obviously they're incompatible, but the Mandrake ones seem to contain errors. Any ideas please? All offerings gratefully received. A:: A reboot immediately after boot generally indicates a kernel issue, and since you have an AMD CPU, it may have problems with the i686-compiled kernel from Mandrake 10.1. Using the SUSE kernel will allow the box to boot. However, as each vendor has such different kernels, as you've found, it's not always successful. Fedora should be able to function with a mouse on COM2, or ttyS1 in Linux language. In the worst case, you can simply modify your /etc/X11/XF86Config-4 and point it to /dev/ttyS1 rather than /dev/psaux and ensure that the mouse type is set correctly. You may want to purchase a PS/2 or USB mouse with extra buttons because Linux really likes that middle mouse button, and having to click both at the same time gets tiresome very quickly. Back to the list ****** Debian not booting into graphical mode, only text mode Q:: I installed Sarge from a DVD distro. Everything seemed to go OK. At the end of the install, it asked me to log in and then dumped me in a full-screen Bash shell. I expected to see a desktop environment. I repeated the procedure using a net load ISO from the Debian site. Same result.
What is going on? What do I have to do to get a desktop? Why didn't the install create it for me? A:: Debian installs very little by default: just the basics to get a core system working, which does not include X. During the second stage of the installation, after the reboot, you are asked to choose from software collections. The first in this list is Desktop Environment. It looks like this is pre-selected, because the cursor is in the selection box to the left of the name, but it is not. Package groups are only installed when there is a star in the box (see screengrab above). You need to explicitly select the groups you want by moving the highlight bar over them and pressing space. If you simply press Enter at this stage without selecting anything, you will get exactly the system you describe. All is not lost. There is no need to reinstall. Log in as root and type aptitude to load the package manager. Highlight Tasks and press Enter, move down to End-User and press Enter, then highlight Desktop Environment and press '+' to select it. Press G to see what will be installed and G again to begin installation. This will install both the KDE and Gnome desktops - you will be able to choose which you use when you log in. There are a few basic configuration questions to answer, but the defaults are fine if you are unsure. You will also be asked some questions to help configure the graphical display. These are the same as you would have been asked during installation, had you selected the desktop option. Once installation has finished, which will take several minutes, your desktop should load the next time you boot up. Back to the list ****** Secure file transfers by switching from FTP to SCP Q:: I have a simple shell script that is scheduled to download files from a remote server by FTP. In the shell script I have hard-coded USERNAME and PASSWORD to string variables to access the remote server. 
How do I prevent the USERNAME and PASSWORD being seen by others when they just open up the shell script file? A:: The safest way to do this requires SSH access to the server. If this is available, you can use the scp command to send the files. The syntax for this is similar to cp, but it works over an encrypted SSH link. For example, you would download a file into the current directory with --- scp -p user@server:/path/to/my/file . ,,, As it stands, this will still ask for a password, but SSH has a means of authenticating users by means of keyfiles. If you do not already have a keyfile pair, use ssh-keygen to generate them. Full details are in the man page, but ssh-keygen -t dsa will create a pair with the default settings. This generates two files, a private key named id_dsa, to go in ~/.ssh, and a public key named id_dsa.pub. The names will be different if you choose to create RSA instead of DSA keys. Copy the public key to a file named authorized_keys and put this in ~/.ssh on the server. Now SSH will use the keys to authenticate and not require a separate password. If SSH is not an option, you will have to use an FTP client to transfer the files. Some of these have the option to store passwords in a configuration file, which you should chmod to 600 so that only you and the root user can read it. This is safer than passing the password on the command line, where it can be read with ps while the program is running. For example, Ncftpget and Ncftpput are part of the Ncftp package and accept a login definition file instead of a URL. The file format is simple: --- host ftp.host.com user myuser pass mypass ,,, Then you can download the files with a single line in your script --- ncftpget -f login.def dest/dir path/to/file1 path/to/file2 ... ,,, where login.def is the file containing the login information.
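To illustrate the chmod advice, here is the complete sequence; the file name and credentials are placeholders:

```shell
# Create the login definition file and make it readable only by its owner.
printf 'host ftp.host.com\nuser myuser\npass mypass\n' > login.def
chmod 600 login.def
ls -l login.def   # permissions column now reads -rw-------
```

With those permissions in place, other non-root users on the machine get 'Permission denied' if they try to read the file.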
Ncftp, the interactive FTP client in this package, is able to store encrypted passwords in its bookmarks file, but this file is not used by the non-interactive Get and Put programs. FTP is inherently insecure. Even if your password is not stored anywhere, it is still sent in plain text when logging in. If security is important, you should really look for an alternative means of transferring the files. Back to the list ****** View log files in Webmin Q:: We had a Fedora-based solution set up as an email, DNS and firewall server for an office of eight people. We manage the server mostly through Webmin, and we have our phone system wired to the server's serial port where call activity gets written to the log file /home/phone/cdr.current. A Cron job emails the previous day's activity daily while we review current telephone activity over SSH a number of times a day. cdr.current gets deleted automatically at the end of each month. After a week it is already substantially long, and too long to view entirely over SSH. Is it possible to view an arbitrary log file through Webmin to avoid additional software being written or installed? We are trying to avoid both unnecessary SSH access and additional software being installed. A:: From the shell, tail -n would return the last n lines of the log file. This in turn could be conveniently piped into tac to reverse the output, effectively listing the latest entries at the very top. --- tail -n 200 /home/phone/cdr.current | tac ,,, Webmin, under the Options tab, makes such commands easily executable. Here is a possible configuration option: --- Definition = View CDR Command = /usr/bin/tail -n 200 /home/phone/cdr.current | /usr/bin/tac Run in directory = /tmp Run as user = Webmin User Command outputs HTML = No Maximum time to wait for command = 5 seconds ,,, Saving the configuration creates a View CDR button, which should provide the functionality.
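You can try the pipeline on a made-up sample before wiring it into Webmin; the path and entries here are invented:

```shell
# Build a three-line stand-in for cdr.current, then list the newest
# two entries with the latest at the top.
printf 'call1\ncall2\ncall3\n' > /tmp/cdr.sample
tail -n 2 /tmp/cdr.sample | tac
# prints:
# call3
# call2
```

Adjust the -n value until the output covers as much recent activity as you need.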
Back to the list ****** /var/log/messages contains no data Q:: I tried to review /var/log/messages and found the file to be empty. Listing the contents of /var/log I found /var/log/messages to be dated two days ago while /var/log/messages.1 was dated today. I created a few entries using logger and saw that my messages were in fact being written to /var/log/messages.1. I rebooted the system and from then on new entries were being written to /var/log/messages. I have checked the Syslog configuration at /etc/syslog.conf and confirmed that messages were meant to be written to /var/log/messages. Am I overlooking something? I run Red Hat Enterprise Linux 4. A:: On Unix filesystems, 'inodes' describe the type, permissions, ownership, timestamps and the data that make up a file. In fact, the filename is just a link in a directory to the inode as identified by its inode number. The Syslog daemon, Syslogd, is responsible for the entries written to /var/log/messages. Using lsof, it is possible to determine the inode that corresponded to /var/log/messages when you started Syslogd, like this:
---
syslogd 3579 root 1w REG 3,5 926461 7898395 /var/log/messages
,,,
Calling ls with -lai includes the inode number in the listing, giving
---
7898395 -rw------- 1 root root 933307 Sep 14 20:13 /var/log/messages
,,,
It follows that Syslogd will keep on writing to the same inode regardless of any link name change, be it a rename or a deletion. The link /var/log/messages should have been rotated to /var/log/messages.1 by Logrotate. Under Red Hat ES 4, logrotate is run daily by the Cron script /etc/cron.daily/logrotate, while /etc/logrotate.conf and the included /etc/logrotate.d/* files dictate which logs are to be rotated and how often. Inspecting the logrotate configuration file illustrates how, after log files are renamed, new log files are to be recreated.
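The point about inodes surviving a rename is easy to demonstrate with a scratch file; this sketch mimics what Logrotate does to /var/log/messages:

```shell
# A rename changes only the directory link, not the inode, so a daemon that
# holds the file open keeps writing to it after 'rotation'
echo test > scratch.log
before=$(stat -c '%i' scratch.log)
mv scratch.log scratch.log.1            # what logrotate does to the file
after=$(stat -c '%i' scratch.log.1)
[ "$before" = "$after" ] && echo "same inode"   # prints: same inode
```

Syslogd is in exactly this position: it holds the inode open, so only a restart (or a reboot) makes it reopen the freshly created /var/log/messages.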
Also, /etc/logrotate.d/syslog dictates that Syslogd is to be restarted to reload the configuration while closing and reopening all log files for append. This last step must have failed, and Syslogd kept on appending to the inode that /var/log/messages linked to the last time Syslogd was properly restarted. Rebooting restarts Syslog, which is why the issue was, on this occasion, resolved. Back to the list ****** SuperTux missing SDL_image library Q:: I have just installed Debian 3.1 and everything seems OK. But then I tried to install the game SuperTux using the Autopackage installation. Everything seemed to work, but now when I try to start it up I get the following message:
---
bruno@tux:~$ supertux
supertux: error while loading shared libraries: libSDL_image-1.2.so.0: cannot open shared object file: No such file or directory
,,,
Obviously I am missing something. But what? A:: You are missing the SDL_image library. This is used often in games; so often that we have it in the Essentials directory of the DVD, although not as a Debian package. The easiest way to install this (or almost any other package) is to run Synaptic by selecting System > Package Manager (Synaptic Package Manager) from the K menu, click on Search and type sdl-image. Tick the box by the package and click Apply to install it. SuperTux should now run. If you ever have another program complain about a missing file, go to http://packages.debian.org and follow the link to Search The Contents Of Packages. This page is so useful it deserves a bookmark even if you don't use Debian, as it provides a good clue as to which package would contain a particular file in any Linux distribution. Back to the list ****** Xampp log files growing too big Q:: I'm running a box with SUSE 8.2 (yes, I know - it's very old) and Xampp [the Apache distro]. How can I get Xampp's log files into the log rotation possibilities of SUSE? They are growing and growing.
A:: I take it from your question that you already have Logrotate running and successfully rotating other log files, which are normally configured in /etc/logrotate.d. You need to add a file to this directory for each set of log files you want to rotate. Yes, SUSE 8.2 is rather old; and you do not say which version of Xampp you are using, but this configuration file will rotate the logs for Apache 2 if they are stored in /var/log/apache2:
---
/var/log/apache2/*log {
    missingok
    notifempty
    sharedscripts
    postrotate
        /etc/init.d/apache2 reload > /dev/null 2>&1 || true
    endscript
}
,,,
Save this as /etc/logrotate.d/apache2, and the logs should be rotated the next time your machine runs Logrotate (usually on a daily Cron job). The options are documented fully in the Logrotate man page. The first line specifies the files to be rotated; the next two lines cause logrotate to move on if the log file is missing or empty; sharedscripts means that the prerotate (not used here) and postrotate functions are run once for all files matching the pattern, instead of once for each file. The postrotate section specifies the action to be taken after rotation. In this case, it reloads Apache's configuration, which causes Apache to release its locks on the old files and start logging to new files. You may need to change the path to the log files, and you might also have to replace /etc/init.d/apache2 reload with apache2ctl restart or apachectl restart. You can also add options to this file to set how often the logs are rotated and how many rotations are kept. Otherwise the defaults will be used (four rotations, rotated once a week). For example, the lines
---
daily
rotate 7
,,,
will rotate the logs daily and keep the last seven logs. Back to the list ****** File downloads not matching their MD5 checksums Q:: We are getting very strange corruptions whenever we download files bigger than 100MB on to a new server running Red Hat Enterprise Linux ES 3.
The files download to the server without reporting any errors, but the MD5 checksum of the downloaded file does not match that of the source. My colleagues argue that this is a hardware failure, but the vendor insists that this is not the case. We have looked through /var/log/messages and dmesg for signs of errors but could not find any. As an act of goodwill, the single IDE disk and associated cables have been replaced and the operating system re-installed without any errors. However, on downloading massive attachments we started experiencing the same errors. We are very confused. A:: I have come across a very similar situation just once. The issue was inconclusively diagnosed as a faulty motherboard, potentially the onboard IDE controller. To rule out any transfer issues we transferred the file over SSH (SCP or SFTP). With the issue still present, a 512MB file of random data was generated locally:
---
$ openssl rand -out testdata.0 536870912
,,,
The generated file was then copied another four times to create a test sample:
---
$ for FOO in 1 2 3 4; do cp -v testdata.0 testdata.${FOO}; done
,,,
The MD5 checksum for the five 'theoretically identical' files was computed and compared:
---
$ md5sum testdata.?
,,,
In this case the MD5 checksums did not match. As in your case, the disk was replaced but the problem was reproduced. However, when the motherboard was swapped the problem went away. The Kernel-utils package on ES 3 provides the Smart Monitoring Daemon, which can monitor the 'Self-Monitoring, Analysis and Reporting Technology' system built into most modern-day ATA drives. Using Smart it may be possible to single out a failing disk before it actually commits suicide. You could also try disabling DMA and repeating the process:
---
# /sbin/hdparm -d0 /dev/hda
,,,
Back to the list ****** Running two distros at the same time Q:: I would like to find a way to use two distros at the same time without rebooting.
Is there a program that allows a user to boot up another distro when already in another? For example, if I had a dual boot PC running distro A with distro B installed on a separate partition, would there be a way to boot up a distro B session while running distro A? If so, would there be any issues such as latency and responsiveness? A:: There are several options for this, most of which involve some sort of emulation. Which one you choose depends on how much effort you want to put into it, and whether you want to spend any money! The easiest, and most expensive, solution is the commercial VMware Workstation 5. Although often perceived as a means of running Windows on Linux, or vice versa, it is very good for Linux on Linux and is how I test software on various distros. The VMware website is at www.vmware.com. This is a virtual machine that does not try to emulate a processor, so it achieves near native speeds. The next alternative is Qemu. This started life as a processor emulator, but there is an accelerator module available now, which turns it into a VMware-like virtual machine when emulating a PC on a PC. It's slower than VMware, even with the accelerator, but it is free, and could be ideal for your needs. Qemu is available from http://fabrice.bellard.free.fr/qemu. Qemu is open source, but the accelerator module is only free as in beer (it's proprietary). Back to the list ****** Installing updates on Mandriva via CD or DVD Q:: I've been trying to install Mandriva. It loaded and it runs but I cannot add the updates disc. I've tried to load it in via the Mandriva Control Center and RPM. When it scans the disc it reports errors and gives the following message: 'Unable to add medium, errors reported: ...copying failed'. A:: What's happening is that the updates disc is failing to load the GPG key for the updates packages. All packages are signed with a GPG key so you can verify that they're genuine.
The error occurs because the file containing the key is not where the Mandriva Software Manager expects it to be. Despite the message, the updates CD has been added. If you go into the Mandriva Software Installer (click on the box icon with the green plus symbol) and select All Packages, By Update Availability, you will see that the updates are indeed available for installation. However, when you try to install any of them, you'll get a warning that their signatures cannot be verified, because of the missing GPG key. Just this once, it is safe to install them: the packages on the CD are indeed the official ones. The GPG key is on the updates CD, so to get rid of this error you can mount the updates CD, open a terminal, become root with su (you will be asked for the root password) and type
---
su
rpm --import /mnt/cdrom/main_updates/media_info/pubkey
,,,
You should be returned to a command prompt after typing the rpm command. If you get an error message, it will probably be because the updates CD is not mounted on /mnt/cdrom. Make sure you can view the contents of the CD before running this command. Once the update CD and signature are set up, you can install the updates themselves from the Software Installer, as described above. You need the section marked Look At Installable Software... to get the updates from the CD - the section marked Look At Available Updates... is for online updates. Back to the list ****** Mandriva: insufficient room to complete the installation Q:: I'm running Mandrake 10.1 on one of the partitions of my PC. At the third disc a dialogue box appeared stating that there is insufficient room to complete the installation. I am totally stuck, as I can't get back to Mandrake - it's just a black screen. A:: You have not given us a great deal of information to go on, but there are a few possible causes for the symptoms you describe. Are you trying to perform an update or a new installation?
If it's an update, the installer should just replace your existing software with newer versions, so the space requirements would be about the same. If it's a new installation, the installer should reformat your partition (assuming you told it to install on the same partition that contained Mandrake 10.1) before starting to install the software. I suspect that this partition is fairly small - probably only just large enough to hold your previous system - and Mandriva is trying to install more or larger packages. Your first priority is to get your system working again. I suggest you run the installer and select only the minimum number of packages. The bottom of the package selection window shows how much space your selected packages will occupy. Make sure that this is less than the size of your root partition - keep well under to be on the safe side. Once Mandriva is running you can easily install any extra packages you need from the Mandriva Control Center. An alternative is to use the custom partitioning option during installation, and resize your other partitions to leave more room for Mandriva. Back to the list ****** Partitioning: FAT vs FAT32 Q:: People often recommend FAT partitions, which I think can be up to 4GB, for easy read/write access from both Windows and Linux. This would be of great use to me, but I haven't been able to set this up. I run an Evesham (May 2003) with Windows XP Pro and SUSE 8.2 Pro mounted on separate 80GB Hard Discs. Is it possible to repartition either hard disc to provide such a 4GB FAT partition without having to reload either of the operating systems and thus losing my settings? If so, how? When I loaded SUSE 8.2, I used the recommended single partition. Now, a little wiser, I'd like to repartition that hard disk anyway for Linux use, with /home separate so I can try new Linux distros and so on, without losing my tried and trusted system. 
I seem to remember reading a few months ago that an easy way to partition a disc is to start loading Mandrake and stop once partitioning has been done. Does this overwrite everything already on that disc? Please advise on the best and safest way to repartition, with a FAT partition at the end of one of the hard discs that's recognised by both Windows and Linux. A related problem I have is that when I'm using a 32MB USB pen drive to transfer between the two systems, or indeed to other PCs, writing to the pen drive in Linux results in case changes to file names. The only way to correct this is to then read the files into Windows and then write them back from Windows. The correctly cased file names are then read by any Linux or Windows PC. Is this inherent or is it a driver problem, and do I need to load a specific driver rather than rely on a default? A:: You can either use Partition Magic or an open source tool such as GNU Parted to repartition a disk without wiping it. Both of these will adjust the filesystems prior to modifying the partition structure, allowing for the modifications to be made without destroying data. You can then carve out a partition on the disk and build a FAT filesystem on it. Using FAT32, you'll be able to create a partition far greater than 4GB, or alternatively you could simply mount your Windows XP filesystem and access a specific directory on the disk. Mounting a filesystem, either disk-based or USB, using the 'msdos' filesystem type will result in naming issues and problems with long filenames. Using 'vfat' ensures that case information in filenames is preserved and allows for the easy exchange of data between Linux and Windows.
Needless to say my Winmodem took by far the longest. Now I'm at the state of actually doing stuff, instead of configuring stuff. The OS is absolutely fab, and recently I changed my default boot option from WinXP Pro to Mandrake (yay!). Now I notice that 10.1 will be coming soon. Will upgrading simply be a case of putting the disc in, clicking a few icons and supping a cup of tea, or will the arduous task of setting up all those peripherals, software, etc, have to be repeated? Great mag by the way. A:: The short answer to your question is yes - if you upgrade, you will spend more time reconfiguring stuff. But perhaps not everything. Mandrake handles upgrades by simply upgrading the packages you have on your system. If you have KDE 3.1 installed, and the new version contains packages for KDE 3.2, it will simply update the RPMs. Theoretically, any configuration files you have will remain the same. Unfortunately, other changes in the OS may cause the upgrade not to be so smooth. As an example, the 10.1 release uses X.org as opposed to XFree86 for the X server - so it's not just a version upgrade. Some of the base Mandrake packages also change the locations of some files, and may overwrite configurations. On the plus side, the newer OS will probably recognise a lot more of your hardware and configure it straight off - but only if you do a new install. Ultimately, though, you will need to upgrade if you want to keep getting packages for your system that you can actually use. The best idea is to create some space or use an extra drive and install the new Mandrake alongside the old one, then work out what stuff you need to re-implement. Back to the list ****** Dependency problems when installing RPMs Q:: I successfully installed Mandriva Linux alongside Windows. I have found myself getting frustrated with all the new things to learn. But I am sticking with it! The one thing I am really struggling with is installing software.
I can't get to grips with RPMs/binary packages or using the console to install. Am I doing something wrong? I keep getting error messages when I try to install, like libstdc++.so.5 issues. I have been told that the problem has to do with dependencies but I really don't get it. A:: The basic RPM system is quite... well, basic. It will identify dependencies, but not actually do anything about them (a dependency is where one piece of software requires another in order to run). The software you are installing needs version 5 of the libstdc++ (standard C++) library. You could go searching for an RPM of this, install that and then chase the next dependency, but Mandriva's installer will take care of all of this for you. If the package you are installing is on the Mandriva discs, you're better off using the Mandriva Control Center to install it from there. This will install the RPM package you want plus any dependencies. Even if you have downloaded a separate RPM from somewhere else, you can still use Mandriva's dependency handling by passing it to the urpmi command instead of the basic rpm:
---
urpmi downloaded-package.rpm
,,,
Some software is not distributed as an RPM at all, but as a self-extracting shell installer, such as the America's Army game. These are run with
---
sh armyops230-linux.run
,,,
The sh is needed because these installers are in fact shell scripts. Because Linux does not allow you to arbitrarily execute files, you can't just run it by typing its name as you would with a Windows .exe file. Back to the list ****** Get HP Scanjet 2300c to work in Linux Q:: I recently converted my small graphic art studio to a GNU/Linux-only environment and wiped my main system clean of Windows, replacing it with Fedora. I have an abundance of open source graphic software tools - Blender, Gimp, Sodipodi, Inkscape, KIconEdit, Qcad and more - which have served me well and should continue to do so. However, I have an HP Scanjet 2300c (USB), which until recently was unsupported by stable Sane back-ends. A few days ago, updated versions of those back-ends (Genesys) came out which do support this scanner.
I've tried to download the correct RPM to replace my existing back-ends, but no matter what I do, I can't get it installed or my scanner working. A:: You don't say exactly what your problem is - whether it is with installation or configuring the scanner. The version of Sane that supports this scanner is quite new, and hasn't yet made it into the main Fedora package list. There is a package in their development repository, at http://download.fedora.redhat.com/pub/fedora/linux/core/development/i386/Fedora/RPMS/sane-backends-1.0.16-1.i386.rpm, although the version number may have increased by the time you read this. However, this requires a later version of Glibc, also available from the development repository, which may break other software. Using a development version of such a critical library on a production system is not a good idea, so you need to install the latest Sane from source, replacing the RPM version. Do this as root.
---
rpm --erase --nodeps sane-backends
rpm --install --justdb /media/cdrom/Fedora/RPMS/sane-backends-1.0.15-9.i386.rpm
tar xzf sane-backends-1.0.16.tar.gz
cd sane-backends-1.0.16
./configure --prefix=/usr
make
make install
,,,
The rpm commands remove the existing Sane files, but also send a message to the RPM database to make it think that the files are still there. Without this step, an update may try to install the 'missing' files, replacing your new Sane with an older version. Doing it this way means that when the newer version is released as an RPM, your system will be updated as normal. Back to the list ****** FTP vs SSH - security and chrooting Q:: As a design agency receiving artwork and other raw material by email and through the post on CD and DVD, we were considering teaching our clients to upload material to us on to a co-located server running Fedora. At first we were considering implementing a web-based application. However, after much deliberation we saw the light and are now considering using FTP.
Our primary concern is security: we don't want to give our clients access to other clients' uploads. However, the sysadmin who originally provisioned the server advised us to upload our website over SSH rather than FTP, claiming that SSH is much more secure. What is your opinion? A:: Fedora ships with Vsftpd, which is a robust FTP server that makes chrooting users a breeze. Chrooting means that the user will be tied down to their home directory and no other. To enable FTP chrooting, uncomment the following two lines from /etc/vsftpd/vsftpd.conf:
---
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list
,,,
The list of chrooted FTP users should be in the file /etc/vsftpd.chroot_list. Also, as you would not want to give FTP users shell access to your server, /bin/false should be specified as the shell when creating the users:
---
# useradd -s /bin/false -m -k /dev/null someuser
,,,
If you're adamant about using SSH rather than FTP, you can install scponly to enable users to scp or sftp into the server without having the right to interactively ssh into the server. The scponly project lives at www.sublimation.org/scponly. Installation is well documented in the Install file included in the tarball. Scponly users can also be chrooted, making the installation marginally more complex. Back to the list ****** Configure Shorewall to block port 113 with SpeedTouch modem Q:: I have set up the firewall in the Mandriva Control Center by unchecking all boxes, which should stop anything getting through. I then went to Steve Gibson's Shields Up! site (https://grc.com/x/ne.dll?bh0bkyd2) and ran the Common Ports test. Everything was then 'stealthed' apart from port 113 (IDENT). This was using my Alcatel SpeedTouch USB modem. However, if I connect using my Netgear combined router/firewall, everything is stealthed, including port 113. So how do I configure Shorewall to stealth port 113 when using the SpeedTouch?
I know there are arguments that port 113 shouldn't drop network packets as this can cause problems, but I use the router for hours on end and never experience any connection slowdowns, even though it does stealth this port. And of course, if you reject packets, crackers know that your computer exists... A:: You have discovered the disadvantage of GUI control panels: you can only control the options for which a button has been provided. As you hint at, Mandriva uses Shorewall as its firewall. Shorewall is a capable system with a lot of options, but the Control Center barely scratches the surface. To make Shorewall stealth port 113, you'll need to edit the file /etc/shorewall/rules as root. Immediately before the last line, add the following:
---
DROP net fw tcp 113
,,,
Now go into the System > Enable Or Disable Services part of the Control Center, stop Shorewall, then start it again to load your new settings. Go back to Shields Up! and you should find port 113 is stealthed. If you want more control over your firewall settings than the Mandriva Control Center offers but do not want to delve into Shorewall's rules, you may find Guarddog (www.simonzone.com/software/guarddog) more suitable. Both of the programs are front-ends to build rules for the Linux kernel's own firewalling, but Guarddog does it through a GUI. The choice is yours. Back to the list ****** Verifying hardware on a remote server Q:: We've rented a number of dedicated Linux servers from a hosting company. I've confirmed that the disk and memory allocated are as requested, but have been denied physical access to the data centre. How can we verify what make and model of components have been used within our servers? This is a requirement set by our external consultants as part of our disaster recovery programs. A:: As the kernel boots up, polling the system's hardware and loading the appropriate modules, there's usually a wealth of information output to the console.
You can retrieve this information by dumping the kernel's ring buffer using dmesg. The kernel can be further queried by manually inspecting /proc, the interface to kernel data structures. Of particular interest are:
---
/proc/cpuinfo    CPU information.
/proc/ide/       IDE bus and disc information.
/proc/scsi/      All SCSI devices.
/proc/ioports    Registered IO port regions.
/proc/pci        PCI buses, installed devices and drivers.
,,,
The utility Lspci displays information about PCI buses and what is connected to them. This is often sufficient to identify the likes of VGA, network and SCSI adapters. For more about BIOS and motherboards, visit the Dmidecode project at www.nongnu.org/dmidecode. Assuming your BIOS follows the SMBIOS/DMI standard you may be able to list the system manufacturer, model name and BIOS version. Finally, Red Hat Enterprise Linux includes the Kudzu library for hardware discovery and configuration. It is possible to call Kudzu to probe and report on the installed hardware. Invoking as kudzu -s -p initiates a 'non-disruptive' poll without modifying any existing configurations. Back to the list ****** What is the best laptop for Linux? Q:: Which 'notebooks' (the things that used to be called laptops) are best suited for running Linux? I'm looking to buy a machine and will only be able to afford a one-off, so it will have to be a little forward-looking in terms of hardware, even if the software has some catching up to do. I know that I want a 64-bit processor and systems, but I also know that I must be able to afford it! I also need to know if there are any worthwhile speech recognition programs in Linux - I had a stroke recently and am getting tired of one-finger typing! A:: Until you mentioned 64-bit, I was going to recommend an IBM ThinkPad. IBM notebooks are built to last and have good Linux compatibility. However, they all currently use the Intel Mobile Celeron processor range.
The difficulty with recommending a specific computer is that it is the individual components that are the source of compatibility frustrations. I could tell you to buy a Milliard Gargantuan, only to find that Milliard Inc have changed the wireless networking chip to one that doesn't have a Linux driver. Your best option is to try out various notebooks with a Live CD distro. As you are looking for a 64-bit computer, I would recommend either Ubuntu or Kubuntu, depending on whether you prefer Gnome or KDE. They both have 64-bit versions and you can download them from www.ubuntulinux.org. Trying the computer out is doubly important in view of your physical restrictions. There appears to be very little available in the way of usable voice recognition software for Linux. IBM discontinued ViaVoice a few years ago; the last version was bundled with Mandrake 8.1. There are some other projects, but none of them are really ready for the end-user's desktop yet. CVoiceControl (www.kiecza.net/daniel/linux) will allow you to control your computer with commands, but text input is not currently practical. You could cut down on your typing by using a keyboard with plenty of extra keys and using KHotkeys or Xbindkeys to assign commonly used commands or phrases to these keys. Most distros support using a second keyboard with a laptop. Back to the list ****** Connecting Palm Pilots with KPilot Q:: I am having a couple of problems with ports. First, as root my Pilot is available on ttyS0 in KPilot, but when I'm logged in as myself KPilot tells me that it is not read/writeable and automatic detection will not find it. It seems that I cannot access ttyS0 as myself, only as root. How do I change permissions if that is the problem? Second, when I'm using Kino it tells me in the preferences that the IEEE1394 subsystem is not responding, whether I am logged in as root or as myself. The modules raw1394, ieee1394, ohci1394 and dv1394 are all loaded.
A:: You do not have permission to access /dev/ttyS0 as your normal user. While it is possible to change the permissions of the devices, this is not the way things are meant to be done, and will probably result in your having to make the changes each time you reboot. The first thing to do is check who can access each device with
---
[nelz@localhost ~]$ ls -l /dev/ttyS0
crw-rw---- 1 root uucp 4, 64 Oct 4 00:14 /dev/ttyS0
,,,
This shows that ttyS0 can only be accessed by members of the uucp group, so you'll need to add your user to that group with
---
su
gpasswd -a yourusername uucp
,,,
You will have to log out and back in for your membership of the uucp group to be recognised. Your Kino problem appears to be due to a missing module. The Lsmod output you gave in your forum post shows ieee1394, raw1394 and ohci1394 loaded, but not video1394. You do have dv1394, but that is used in high-end professional equipment, not consumer level digital video cameras. Once you have the correct modules loaded, you will hit a similar permissions problem as with /dev/ttyS0, but the solution is a little more complex as the devices are owned by root only. To fix this, create a video group and add yourself to it (as root):
---
groupadd video
gpasswd -a yourusername video
,,,
Then ensure the devices are owned by the video group by adding these lines to /etc/udev/rules.d/10-udev.rules (create the file if it doesn't exist):
---
KERNEL=="raw[0-9]*", NAME="raw/%k", GROUP="video"
KERNEL=="video1394*", NAME="video1394/%n", GROUP="video"
,,,
After a restart, all your permissions should be fixed - you only need to be sure that the correct modules are loaded for KPilot and Kino to work fully. Back to the list ****** Get a TV card working using TVTime and cx8800 driver Q:: I bought a cheap TV card, installed it - and guess what? No go using TVTime.
Rather than curse and swear, I did a quick check of dmesg, which showed that bttv had picked up the card and detected it OK but registered it to /dev/video1 rather than video0. Easily fixed but still no pictures. Googling found that the tuner had been detected as type 5 and should have been 38. It all runs well now. My question now is how do I get the remote control running? The remote control provided connects to the TV card and not to a USB port. I can find a lot on how to use the remote with Lirc [Linux Infrared Remote Control] but I am stuck as I cannot find the device remote0 anywhere. I get the following output from dmesg:
---
bttv: driver version 0.9.15 loaded
bttv0: detected: Leadtek TV 2000 XP [card=34], PCI subsystem ID is 107d:6609
bttv0: using: Leadtek WinFast 2000/ WinFast 2000 XP [card=34,insmod option]
,,,
Any suggestions on how to proceed? I have Mandriva and Debian 3.1 dual booting. A:: I have the same card, and it took a bit of digging to get things running. First, bttv is the wrong driver for this card. You need to use the cx8800 driver. Both of your distros have this driver installed with the 2.6 kernels (but not the default 2.4 kernel of Debian). The cx8800 driver also handles the remote, sending its signals to /dev/input/eventN, where N is a number. If you have more than one such file,
---
cat /proc/bus/input/devices
,,,
will tell you which is the right one. Copy http://linux.bytesex.org/v4l2/linux-input-layer-lircd.conf to /etc/lircd.conf and start Lircd (the Lirc daemon) with the following options
---
--driver dev/input --device /dev/input/eventN
,,,
with the correct value for N, of course. Your remote should now work fully, with any Lirc-aware programs responding to the remote. Back to the list ****** Linux viewer for .ecw survey map files? Q:: I have been supplied with some *.ecw files containing detailed survey maps. They came with a viewer called ER Viewer 2, which allows zooming and traversing of the maps.
It is very basic, and it only works on Windows. Is there an alternative 'viewer' out there that is open source and works on Linux (32-bit or 64-bit SUSE 9.3 Pro)? I also need to be able to work with the data from these files; to enter data, transfer it to a database and work with GPS coordinates. I have looked on SourceForge for suitable software and Geomview and SciGraphica look like they may be of use, but I was hoping you could make some recommendations. I would prefer to work within a GUI as I'm not very familiar with the command line - if not, well, I'll just have to get familiar with it! A:: This has got to be one of the most specialised questions I have received (no, readers, that is not a challenge). There is a Linux viewer for .ecw (Enhanced Compressed Wavelet) files but it is not open source. The program is XnView (it has a companion program, Nconvert) and is available from www.xnview.com. For a more complete solution to handling this and your other data, it seems you need a GIS (Geographic Information System) program. There are a number of these available; Grass GIS (http://grass.itc.it/index.php) would be a good starting point. Originally developed for the US Army Corps of Engineers, Grass is now used by academic, commercial and government organisations across the world. This means that not only is development active and wide ranging, but there is a large body of users and knowledge for you to draw upon when trying to apply the software to your particular needs. Grass supports .ecw files through its use of Gdal, a library for translating geospatial data formats. If Grass is not suitable for your needs, you may also consider the following projects: --- UDig http://udig.refractions.net/confluence/display/UDIG/Home Quantum GIS http://qgis.org Saga www.saga-gis.uni-goettingen.de/html/index.php ,,, Back to the list ****** Set up Postfix to allow relaying Q:: I run a local mail server that acts as my mail gateway and storage.
This works well when connected to my LAN, but when I connect from elsewhere (I have a static IP address), I can receive mail but not send it. I am using Postfix and Dovecot on Gentoo. I found something on this in the Gentoo documentation, but it was part of a virtual mail hosting setup, and I don't need anything that complex. A:: Postfix is set up to deny relaying by default. It can only accept mail either to or from your domain, to avoid being used by spammers. You need to configure it to use SASL (Simple Authentication and Security Layer) to allow remote users to log in with a password to send mail. First, ensure that Postfix has SASL support. If you don't already have sasl in your USE flags, add it and re-emerge Postfix (users of binary distros don't have to worry about this step). This will also install SASL for you. Next, edit /etc/sasl2/smtpd.conf and change the pwcheck_method line to --- pwcheck_method: saslauthd ,,, With some distros, this file may be /usr/lib/sasl/smtpd.conf. You should also edit /etc/conf.d/saslauthd to tell SASL how to authenticate users. Change the SASLAUTHD_OPTS line to ONE of the following: --- SASLAUTHD_OPTS="${SASLAUTH_MECH} -a pam" SASLAUTHD_OPTS="${SASLAUTH_MECH} -a shadow" ,,, depending on whether or not you use PAM (pluggable authentication modules for Linux). Now you need to make a couple of changes to Postfix's configuration. Edit /etc/postfix/main.cf and add these lines to the end: --- # SASL SUPPORT FOR CLIENTS # # The following options set parameters needed by Postfix to enable # Cyrus-SASL support for authentication of mail clients. # smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_local_domain = $myhostname smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,check_relay_domains broken_sasl_auth_clients = yes ,,, The last line is only needed to accept connections from older versions of Outlook Express and Exchange.
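Before restarting anything, it is worth checking that Postfix has actually picked up the new settings. A minimal sketch, assuming the standard postconf tool that ships with Postfix:

```shell
# postconf -n prints only the settings that differ from the defaults;
# the smtpd_sasl_* lines added to main.cf should show up in its output.
postconf -n | grep sasl
```

If nothing is printed, the edits to main.cf were not saved, or Postfix is reading a different configuration directory.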
Finally, start Saslauthd, set it to start on boot and tell Postfix to load the changed configuration. --- /etc/init.d/saslauthd start rc-update add saslauthd default /etc/init.d/postfix reload ,,, If you want more information, there is a detailed Howto at http://postfix.state-of-mind.de/patrick.koetter/smtpauth. Back to the list ****** Red Hat install problems Q:: I recently bought an Intel 865 desktop board. I have a Seagate 120GB SATA hard drive. I tried installing Red Hat 9 on it, but with no success. Can you tell me which Linux flavour I should use? Will Mandrake 10.1 detect my Seagate SATA hard disk? A:: Red Hat 9.0 lacks support for the SATA chipsets that are used by the current motherboards, so a more recent Linux distribution will be necessary. If you want to stick to the Red Hat line of distribution, Fedora is a great choice and continues to use the RPM packages that anyone who has used Red Hat will be used to. As another option, Mandrake or SUSE will also work with SATA if a current release is used. Back to the list ****** USB flash drive performing slowly in Linux Q:: My USB flash memory has recently become very slow, giving only around 5Kb/s on writes. I run Gentoo Linux and this problem seems to have started since I upgraded to a 2.6.12 kernel. A:: This is due to a change in the way the kernel handles the sync option with FAT filesystems. Previous kernels only kept the data in sync and updated the FAT table at the end of the operation. The later revisions update the FAT table each time a block is written. This slows down writing substantially, but far more importantly, it also seriously shortens the life of the drive with so many writes to the same place. I managed to destroy a 1GB memory stick (and three weeks' work) in a few weeks before learning of this. 
It is normal practice to mount removable devices with the sync option, to reduce the chances of corrupting data by unplugging them without unmounting, but this new kernel 'feature' means you should now mount your device without this option. If you mount manually, this is easy: just change the entry in /etc/fstab to remove the sync option. If you use HAL to mount drives automatically, you can change the default mounting policy from sync to async by saving the following in a file called /etc/hal/fdi/policy/storage-policy.fdi: --- <match key="volume.size" compare_lt="2147483648"> <match key="@block.storage_device:storage.hotpluggable" bool="true"> <merge key="volume.policy.mount_option.sync" type="bool">false</merge> <merge key="volume.policy.mount_option.noatime" type="bool">true</merge> </match> <match key="@block.storage_device:storage.removable" bool="true"> <merge key="volume.policy.mount_option.sync" type="bool">false</merge> <merge key="volume.policy.mount_option.noatime" type="bool">true</merge> </match> </match> ,,, If you use Ivman, find the line in /etc/ivman/IvmConfigActions.xml that reads --- <ivm:Match name="hal.volume.fstype" value="vfat"> ,,, and add --- <ivm:Option name="mountoption" value="async" /> ,,, immediately after it. You will need to restart HAL/Ivman after making these changes, and don't forget to unmount devices before unplugging them! Back to the list ****** Restoring /bin/bash after deletion Q:: While experimenting with a Red Hat Enterprise Linux 3 installation and trying to break and fix things, I unintentionally deleted /bin/bash. The operating system obviously crashed, but I found myself at a loss as to what to do next. I was unable to boot the machine into runlevel 1 as this still, apparently just like everything else, uses Bash. I would appreciate any insight on what could be done should a similar situation ever happen on a live machine.
A:: Most of EL3's startup scripts require /bin/bash or /bin/sh, which is no more than a symbolic link to the former. At the kernel selection menu in Grub, hitting 'A' allows for parameters to be passed to the selected kernel. Adding init=/bin/bash would typically bypass all the init scripts and drop you straight into a shell. This is not much use if Bash has been deleted. However, most distributions include a number of other shells, including Ksh and Tcsh. In this case init=/bin/ash is a valid option. With the system booted straight into the chosen shell, the / partition finds itself mounted as read-only. The first task is to remount as read/write: --- # mount -o rw,remount /dev/hda2 / ,,, If the missing files are available they can be copied over and the right ownership and permissions set. Alternatively, mounting the installation media, the Bash RPM can be forcibly re-installed (with EL3, the Bash RPM came on the second CD). Run --- # mount /mnt/cdrom # rpm -Uvh --force /mnt/cdrom/RedHat/RPMS/bash*.rpm ,,, A somewhat more elegant way is to boot the server off the installation media into the Red Hat rescue mode (linux rescue). The rescue mode tries to mount the root partition as /mnt/sysimage. The Bash RPM could then be copied to /mnt/sysimage/tmp and installed from an alternative shell chrooted to /mnt/sysimage, thus: --- # chroot /mnt/sysimage /bin/tcsh # rpm -Uvh --force /tmp/bash*.rpm ,,, Back to the list ****** How to mount LVM partitions Q:: I'm working on upgrading our school computers to Fedora. Up until now we were on Fedora, which seems to use ext3. Basically, I set up one computer on FC4 and do all the upgrades.
Then I do a minimal install on the other drives (to correctly partition them) before transferring the files across by mounting the drives as secondary in my main computer for the operation (I can put both drives in my computer at the same time, performing the main file-swapping operation using a Knoppix CD). Problem: while the /dev/hdb1 directory (Grub boot partition) can be mounted, the /dev/hdb2 cannot. Attempts to mount /dev/hdb2 result in reports that /dev/hdb2 is already in use or cannot be mounted. When I launch Parted it does not show a filetype for the /dev/hdb2 partition. Launching Fdisk shows that it is of type 8e - 'Linux LVM' format. How do I mount a partition that has been written in LVM format? Is it simply a -t option to the mount command? Please give me an example if it is. A:: You do not mount an LVM partition directly. This is a container holding the data for the logical partitions, which are what you need to mount. Knoppix does not support LVM, so either use Recovery Is Possible (RIP, www.tux.org/pub/people/kent-robotti/looplinux/rip) or a Gentoo install disc. After booting from the RIP disc, you need to type --- sh /etc/rc.d/rc.lvm2 start ,,, Type lvdisplay to see a list of your logical partitions. Each one will have a device name in the form /dev/volume-group/volume-name. Use that to mount the logical volume, eg: --- mount /dev/vg0/vol1 /mnt/otherroot ,,, However, if both disks have been prepared by the Fedora installer, they will both have the same volume group name of VolGroup00, so you will not be able to access both at the same time. Remove the slave disk or disable it in the BIOS, then type --- vgscan vgchange --available n vgrename VolGroup00 VolGroup01 ,,, Reconnect the slave drive and type --- vgscan vgchange --available y ,,, Now your master drive's LVM partitions will be in /dev/VolGroup01 with the slave's partitions in /dev/VolGroup00. You could use a more descriptive name for the master drive's volume group if you prefer.
You will need to edit /etc/fstab to reflect the changed names so that you can still boot into Fedora. Back to the list ****** Get a Winmodem working with slmodem drivers Q:: I'm entirely new to Linux, and am not particularly great with computers in general - any technical terms I've learned have all been in the space of a week! I was enticed by Debian 3.1 and decided to have a go at installing it over WinXP. I've managed to get to the GUI, and I've had a look around KDE and it looks brilliant. However, my modem doesn't work. I've discovered that it is a Winmodem, and that apparently I can install a driver called slmodem to make it work. But it is just so complicated! There are swathes of information but it's technical. I had a go at installing slmodem, following the advice on www.laclinux.com, and things seemed to be going OK until I needed to find a file /etc/modprobe.d/slmodem, which didn't exist. Here is some info about my system. --- Machine IBM Thinkpad G40 OS Debian 3.1 Kernel 2.6.8 Modem Agere Systems AC'97(COM3) ,,, lspci gives the following: --- '0000:00:1f.6 Modem: Intel Corp. 82801DB/DBL/DBM(ICH4/ICH4-L/ICH4-M) AC'97 modem controller (rev 01)(prog-if 00[Generic])'. ,,, I'd really hate to go back to Windows having looked at Debian Linux. Please help, because I am almost converted! A:: The instructions you refer to are not particularly verbose; they presume a reasonable familiarity with installing from source and configuring by hand. Fortunately, you don't need to do any of this, as the slmodem drivers are already available for Debian. You can install them easily with Synaptic, but first you need to add the standard Debian repositories to your sources list.
Add the following lines to /etc/apt/sources.list: --- deb http://http.us.debian.org/debian/stable main contrib non-free deb http://non-us.debian.org/debian-non-US/ stable/non-US main contrib non-free deb http://security.debian.org/ stable/updates main contrib non-free ,,, Now fire up Synaptic, search for 'sl-modem', right-click Sl-modem-daemon and mark it for installation, then press Apply to install this and all its dependencies. Synaptic will also take care of configuration for you, including creating the files mentioned in the Readme you read. Back to the list ****** ADMtek wireless card won't connect in Ubuntu Q:: I'm trying to get online. I have an ADMtek 802.11b wireless card, and Ubuntu has found a driver for it. I'm trying to connect to our LAN, which has WEP encryption. The settings are: --- Mode Managed ESSID Eleven Key *** (number 4) Interface eth1 Channel 1 ,,, Would it be possible to walk me through the steps? So far (in a terminal window) I have: --- iwconfig eth1 mode Managed iwconfig eth1 essid eleven iwconfig key restricted *** [4] iwconfig eth1 channel 1 ifconfig eth1 broadcast ,,, It won't connect, so have I forgotten anything here? I would prefer to do it using a terminal, as I'm not too sure what the tools do. A:: It looks like you have done just about everything needed, except bring up the interface. Does your LAN use DHCP? If so, you need to drop the ifconfig command you are using and run --- dhclient eth1 ,,, after the iwconfig commands. This will cause the interface to go online and fetch its IP address along with DNS and routing information from the network. Otherwise you will have to do this with the ifconfig and route commands, for example: --- ifconfig eth1 192.168.1.3 up route add default gw 192.168.1.1 ,,, Obviously, you will need to replace the addresses of your computer and the Internet gateway/router with the correct values. You will also need to add the addresses of your LAN or ISP DNS servers to /etc/resolv.conf. 
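As a sketch, resolv.conf simply lists one nameserver per line. The address below is a placeholder (192.168.1.1 assumes a router that forwards DNS requests), so substitute your own LAN or ISP values:

```shell
# Append a DNS server entry to /etc/resolv.conf (run as root).
# The address here is an example only - use your router's or ISP's.
cat >> /etc/resolv.conf <<'EOF'
nameserver 192.168.1.1
EOF
```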
None of this is necessary if you use DHCP. Most wireless access points and routers have an option to provide DHCP services. It is usually turned on by default, so running the iwconfig commands you gave followed by dhclient should be enough. However, there is a much easier way. Select System > Administration > Network from the Gnome menu bar and enter your settings. The only drawback with this method is that it only allows you to set one WEP key and it defaults to open mode. This is easily fixed by editing the entry in the /etc/network/interfaces file to look like this: --- iface eth1 inet dhcp # wireless-* options are implemented by the wireless-tools package wireless-mode managed wireless-essid eleven wireless-key restricted wireless-key4 123456789ABCDEF ,,, Now you should find the interface starts as soon as the card is detected with no need for any action on your part. Back to the list ****** Backup script not working in cron Q:: I have been writing a small script that makes a backup of some files from one server to another over FTP. The script works fine when executed at the command line over SSH. However, when scheduled to run every two hours something is not working and the script gets run repeatedly. --- * */2 * * * /bin/bash /home/aport/backupfiles.sh >/dev/null 2>&1 ,,, To work around the problem I modified the script to repeat itself every two hours in an infinite loop. This worked when signed in interactively over SSH. Yet disaster struck again when I started the script in the background as --- /bin/bash ./backupfiles.sh >/dev/null 2>&1 & ,,, All works well until I try to exit the SSH session, where my SSH client, Putty, inexplicably hangs and has to be closed manually. If I log in again I see that the script is still running through ps. Help! A:: Because the minute field in the above crontab is a wildcard (*), the scheduled job runs the script every minute of every second hour.
The amended crontab should run the script at five minutes past every two hours: --- 5 */2 * * * /bin/bash /home/aport/backupfiles.sh >/dev/null 2>&1 ,,, The second issue seems to relate to the fact that when the script was executed in the background it was not fully detached from the controlling terminal. The following recipe will allow you to background the script and exit the SSH session: --- $ nohup /bin/bash backupfiles.sh </dev/null >/dev/null 2>&1 & ,,, Besides having both standard output and standard error redirected to /dev/null, standard input is redirected from /dev/null, while nohup makes the executed command immune to any hangup signals. If output is not redirected, nohup will write any output to the file nohup.out, which may be desirable to audit the outcome of your backup script. Back to the list ****** Unicode character problems on Linux server accessed via Putty Q:: Whenever I connect to my Linux server (Fedora) over Putty I get funny French-looking characters replacing some punctuation characters when viewing man pages. This does not happen when I log in at the console. A:: Unicode is a character coding system designed to support more than just the Latin alphabet and the various European accentuations and other eccentricities. UTF-8 is an 8-bit encoding form for the Unicode character set and is quickly becoming the encoding scheme of choice across the board. For a good read on Unicode and UTF-8, check out their entries in Wikipedia and Paul Hudson's PHP tutorial on page 92. Most recent Linux distributions use UTF-8 as the default encoding for most locales, yet your SSH client may be defaulting to the Latin-1 character set (ISO-8859-1). This will surely garble up more than just man pages, making Ncurses-based applications that use pretty borders and so forth look impressively hideous. To set Putty to use UTF-8, follow the menus through Window > Translation > Character Set On Received Data = UTF-8.
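You can confirm what encoding the server-side session is using with the locale command; if LANG or LC_CTYPE ends in .UTF-8, the server expects a UTF-8 terminal:

```shell
# Print the locale settings for the current shell session;
# look for values such as en_GB.UTF-8 in LANG and LC_CTYPE.
locale
```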
While there is nothing to gain in permanently changing the default character set away from UTF-8, it is easy to switch to Latin-1 when logged in by changing the LANG environment variable: --- $ export LANG=en_GB.ISO-8859-1 ,,, Back to the list ****** Burning an ISO in Linux with K3b Q:: I have tried all I can to get a Debian and Fedora dual-booting DVD working. But all my efforts proved abortive. The Smart Manager Bootloader could boot any other CD but not my ISO CD. Could you tell me anything else I can do to boot the CD? There is nothing wrong with the disc itself, the problem is with the BIOS of my machine that is not recognising the ISO on the CD. A:: I assume you mean Smart Boot Manager for Smart Manager Bootloader. Your mention of ISO and CD leads me to believe that you have created CD ISO images for the DVD, using mkiso or winmkiso, and written them to CD-Rs. Especially as you also mention Fedora, which wasn't on the CDs. The usual cause of this problem is the way you write the ISO to the disc. If you copy it as a file, the disc will not boot. Look at the contents of one of the discs. If all you see is the single ISO file, you have copied the file instead of using it as an image. You should see a lot of files on each disc. This is because each ISO file is a complete image of the contents of the CD, all the files plus the booting information, so it needs to be copied to the disc in a different way. In K3b, select Tools > Burn CD Image, pick the image file and click Start. If you want to burn the images in Windows, Nero is probably the best choice. Go to File > Burn Image from the menu and select the ISO image file in the dialog. When the Write CD dialog opens, go to the Burn tab and select Write then Finalise. Press the Write button to create the CD. To use Easy CD Creator, select File > Record CD From CD Image from the menu. When the file dialog opens, set the Files Of Type drop-down to ISO Image Files. Select the ISO image you wish to burn and click Open. 
In the Record CD Setup window, choose Track-at-once and Finalise CD. Click OK to create the CD. You can find more information on using (and obtaining) ISO images on the LinuxISO.org website. The most relevant information is at www.linuxiso.org/viewdoc.php/howtoburn.html and www.linuxiso.org/viewdoc.php/isofaq.html. Back to the list ****** Installing a Gnome Live CD to the hard drive Q:: I have an old laptop, previously loaded with Windows 98. I have used the Gnome 2.12 Live distro. This has proved ideal and it works very well, except that starting it each time is a slow process. Is there any way this distro can be loaded to the hard disk? I presume this is not possible due to a lack of installing software. The problem with Linux is that it seems to grow more and more complex, whereas this distro seems ideal for people trying to make use of an old low-spec computer. A:: You are correct in presuming that this particular CD cannot be used for installation. The disc is essentially a showcase for Gnome 2.12 and is based on an Ubuntu Live CD. The good news is that Ubuntu is also available in an installable version. Ubuntu is an excellent distribution that has come a long way in a very short time. You can download the installation CD from www.ubuntulinux.org. If you do not have broadband, you can request a CD copy be sent to you for free. You need the i386 install version for your laptop. The appearance of Ubuntu's Gnome desktop is different, but this is purely down to the theme used (which you can easily change); it works in exactly the same way. It is natural for software to become more complex as new features become available and new hardware makes more things possible. This is particularly true of the 'big two' desktop environments, Gnome and KDE. However, there are plenty of lighter alternatives for those who either do not want or are unable to run the latest bells and whistles.
Take a look at IceWM, Xfce 4 and Fluxbox, all of which are available via Ubuntu's Synaptic package manager. Back to the list ****** Updating the BIOS on a HP OmniBook laptop via Linux Q:: I have an HP OmniBook 6000 on which I run Mandrake 10.0. When rebooting, the machine freezes. I searched the web and found out there is a fix for the problem: I have to update the BIOS with a certain file from an HP customer care web page. My first problem is that the update is an InstallShield executable file that needs to be run on Windows to create an update floppy - and I only run Linux. That brings me to my second problem: I don't have a floppy drive on this laptop, only a CD/DVD drive. How can I extract the floppy image from this file, and is it possible to make a bootable CD from it? A:: Some executable file installers are self-extracting zip files, but this one is not. The only safe way to extract it is to run the program on another computer. This will copy the BIOS update to a floppy disk. Then use the read function of rawwritewin.exe to create a disc image file. Copy this to your laptop. The second part of the problem is remarkably simple, because the original method of making a bootable CD was to embed a floppy disc image in the boot sector of the CD. Assuming your disc image is called bios.img, create a directory called biosupdate and put the image file in it. Then run the following command: --- mkisofs -b bios.img -c bios.cat -o biosupdate.iso biosupdate ,,, This will create a bootable CD image. Use Cdrecord or your favourite CD-burning GUI to write this to a CD, which will boot and run the BIOS updater. It is also possible to create the ISO image with K3b, by selecting the disc image file as the boot image. An alternative method is to use the Ultimate Boot CD, from www.ultimatebootcd.com. This is a bootable CD containing over 100 floppy-based diagnostic tools and utilities.
It does not contain copyrighted files, like your BIOS update, but the website contains clear instructions for adding your own images. Back to the list ****** How to use Yahoo Messenger on Linux Q:: After finally getting my Internet to work in Mandrake 9, which I'm happy about, I want to install Yahoo Messenger. However, I ran into a problem. On the Unix site (http://messenger.yahoo.com/messenger/download/unix.html), I'm not sure which option to choose because there isn't a Mandrake one. I know Mandrake was built on Red Hat but Mandrake has probably changed a lot since then and I don't have a clue where to start as I'm a newbie! Could someone please help me with this? A:: A great way to use Yahoo with Linux is with Gaim (http://gaim.sf.net), which provides access to Yahoo, MSN, AIM and other instant messaging protocols. Mandrake is now so far removed from Red Hat that the only feature the two still have in common is the use of RPMs. As such, it's rare for Red Hat packages to work with Mandrake, due to the differences in libraries. Back to the list ****** Fixing DCOP_SERVER not working error message Q:: Having heard so much about PCLinuxOS and its foolproof installation, I thought I'd give it a go. I tried version 89a and it ran sweetly from the CD, so I clicked Install To Hard-Drive. The installation appeared to proceed absolutely fine: no hangups, awkward questions I couldn't answer or anything like that. It finished normally, as far as I could tell. However, on reboot from the hard drive and after I'd logged into KDE, an error message appeared, saying it couldn't start KDE and suggesting that I check my DCOP_SERVER was running. What is my DCOP_SERVER and how do I check if it's running and make it do so if not? Shouldn't it be running anyway if the PCLinuxOS installation is so foolproof? Although I have had to revert to my Mandriva setup for the time being, this isn't quite right any longer as some settings seem to have been altered by PCLinuxOS.
A:: It appears that you have tried to use the same home directory for your user on both distros. Sharing /home is fine, but using the same home directory in different distros is asking for problems. Although you may have the same user name, the numeric user and group IDs are often different. As the system uses the numeric IDs to determine who owns what, it is likely that your user in PCLinuxOS is not able to create files in the home directory. Since the DCOP server tries to create sockets in ~/.kde, and fails, KDE thinks the DCOP server is not running, so it cannot start up. DCOP is the Desktop COmmunication Protocol. It is an inter-process communication system, whereby programs can exchange messages and data. It is fundamental to the working of KDE, which relies heavily on embedding one program in another, such as KMail in Kontact or KPDF in Konqueror when you click a link to a PDF file. The safest approach is to use a different home directory for each distro. You can use the same username, just change the home directory. For example, you could be 'fred' on each distro and have home directories of /home/fred-pclinuxos and /home/fred-mandriva, respectively. To make it easier to access the other distro's home directory, set your user and group IDs to be the same. I found Mandriva gave the first user a UID and GID of 500, whereas PCLinuxOS starts at 501, because the guest user for the Live CD uses 500. The files you need to edit, as root, are /etc/passwd and /etc/group. The line in /etc/passwd should look like: --- username:x:UID:GID:Real Name:/home/username:/bin/bash ,,, and in /etc/group, --- groupname:x:GID: ,,, Change them so that your PCLinuxOS files have the same UID and GID values as in Mandriva, and reboot.
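A quick way to compare the numeric IDs under each distro is the id command ('fred' here stands in for your own username):

```shell
# Print the numeric UID, GID and group memberships for a user.
# Run it under both distros; the uid= and gid= values should match.
id fred
```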
You should also make sure that all files in your home directories have the correct IDs with --- chown -R username: /home/username* ,,, Back to the list ****** Set up remote desktops for KDE using Krdc/Krfb Q:: I have just built a new PC running SUSE 9.3 for my mum. As she lives 80-odd miles away, can I use Krdc/Krfb to help her if she has a problem? We both have 2MB ADSL via Ethernet routers and static IP addresses. Could you point me to a HOWTO? I have Googled for VNC [Virtual Network Computing programs] but they all seem to be for LAN setup with Windows. A:: It is possible to use KDE's Remote Desktop connection over the internet, but your routers will block this by default. If possible, set up and test the Krdc/Krfb connection with a direct Ethernet link between your computers (that is, if you have a laptop, take it to your mum's house). Make sure you set up a secure password for connection. It is possible to set up a connection with no password, which may be acceptable for use inside a firewalled LAN. This would be a bad idea - a really bad idea - when exposing your computer to the internet. VNC uses network ports starting at 5900 for display 0, 5901 for display 1 and so on. You will only need display 0, so open up port 5900 on your mum's router and direct it to the LAN IP address of her computer. In the firewall of the router or the computer (or both), block access to port 5900 from any public IP address but your own. This will stop script kiddies trying to crack your password. Now you should be able to connect with Krdc using an address of the form a.b.c.d:0, replacing a.b.c.d with your mum's public IP address. You can usually read this from her router. Alternatively, visit http://pcsupport.x-host.uni.cc/ip.php. Back to the list ****** Convert Microsoft Word documents with Antiword and Bash scripts Q:: I have a bunch of directories with some Microsoft Office Word files on a Gentoo system, and I need to use Antiword to change them to text files.
I have written a script that does it for a given directory: --- for i in 'ls *.doc' ; do antiword $i >${i/doc/txt}; done ,,, There are probably some bugs in the line (like going down subdirectories) but I will iron them out. My main problem is that some of the files have a space in their name, such as 'file 1.doc'. I end up with errors like: --- file 'file' does not exist, cannot convert file '1.doc' ,,, How can I get around this problem? It would also be useful to be able to delete the DOC files once they are successfully converted. A:: You need to put quotes around the variables, so bash treats 'file 1.doc' as a single file and not as two files ('file' and '1.doc'). They must be double quotes, not single quotes. Bash interprets the contents of single quotes as literal, whereas it will expand the values of variables within double quotes. You do not need to use 'ls', as '*.doc' will match files in the current directory by itself. It is also best to add '-i 1' to prevent Antiword outputting image data into your text file. Your command then becomes: --- for i in *.doc ; do antiword -i 1 "${i}" >"${i/doc/txt}"; done ,,, To recurse through directories, use find: --- find . -name '*.doc' | while read i; do antiword -i 1 "${i}" >"${i/doc/txt}"; done ,,, You could also use find to remove the DOC files afterwards, thus: --- find . -name '*.doc' -exec rm "{}" \; ,,, This would remove all DOC files, even if Antiword failed to convert them. To convert the files and remove them after successful conversion, use this: --- find . -name '*.doc' | while read i; do antiword -i 1 "${i}" >"${i/doc/txt}" && rm "${i}"; done ,,, Find outputs a list of matching files, one per line, which are read by read; then Antiword converts each file. The && means that the rm command is only run if the previous command (antiword) ran without error.
Back to the list ****** Monitor Apache HTTPD server in real-time Q:: I have recently started a small design business and am now hosting a number of sites for my clients on my dedicated Red Hat Enterprise Linux 3 server. As I have numerous access_log files scattered all over the filesystem, what's the easiest way to keep a real-time view of what's going on with the HTTPD web server? If I use top, I can see several HTTPD processes consuming a fair bit of CPU, but I don't know how to associate these processes with a particular website. A:: The HTTPD server on RHEL 3 comes pre-packaged with mod_status, which is an Apache module for monitoring how the web server is performing. To enable it, open up /etc/httpd/conf/httpd.conf and uncomment the following lines: --- <Location /server-status> SetHandler server-status Order deny,allow Deny from all Allow from desktop.ip </Location> ,,, To obtain a full status report, also uncomment this line: --- ExtendedStatus On ,,, After restarting the HTTPD server, you can browse http://server.ip/server-status?refresh=5. This will display an HTML page that refreshes every five seconds, providing you with the following information: The number of workers (threads) serving requests. The number of idle workers. The status of each worker, the number of requests that worker has performed and the total number of bytes served by the worker. A total number of accesses and byte count served. The time the server was started/restarted and the time it has been running for. Averages giving the number of requests per second, the number of bytes served per second and the average number of bytes per request. The current percentage CPU used by each worker and in total by Apache. The current hosts and requests being processed. (Quoted directly from http://httpd.apache.org/docs/2.0/mod/mod_status.html.) 
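If you would rather check these figures from a script than a browser, mod_status also offers a machine-readable variant at /server-status?auto. The sketch below parses a sample response with awk; the figures are invented for illustration, and on a live server you would feed it the output of curl instead:

```shell
#!/bin/bash
# Parse mod_status's machine-readable output (the '?auto' variant).
# The sample below is invented; on a live server replace it with:
#   status=$(curl -s http://server.ip/server-status?auto)
status='Total Accesses: 14216
Total kBytes: 92415
Uptime: 86400
ReqPerSec: .164537
BytesPerSec: 1095.2
BusyWorkers: 3
IdleWorkers: 7'

# Pull out individual fields by their label
busy=$(printf '%s\n' "$status" | awk -F': ' '/^BusyWorkers/ {print $2}')
idle=$(printf '%s\n' "$status" | awk -F': ' '/^IdleWorkers/ {print $2}')
echo "busy=$busy idle=$idle total=$((busy + idle))"
```

A cron job built around this is a simple way to get alerted when the worker pool is close to exhaustion.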
It is also good practice to keep this information restricted to specific hosts, as a lot of information is revealed about your HTTPD server through this module. Leaving the default Deny From All and then opening up access with Allow From Desktop.ip will ensure that only authorised hosts are permitted to view this information. Back to the list ****** Get a Zip Plus drive working in Linux Q:: I have recently tried many flavours of Mandrake and others (plus many Live distros) and not one of them will detect my faithful old Zip Plus drive, for which I have an archive of 42 100MB disks. In desperation I decided to install it manually, and looked up the mini HOWTO, which referred me to a David Campbell at www.torque.net. Unfortunately, this website does not respond and there is no re-direction. I've done several web searches to try to track down the file but without success. Can you please help? The Zip drive is paralleled with an Epson printer and is closest to the PC as recommended by Iomega. The printer is detected and installed correctly. lsmod does not detect the imm module. I know some might say, "Why not transfer your archive to one DVD?" - but think of the work involved. Most of the archive consists of my own engineering programs which have to be updated from time to time and this is a straightforward and reliable process on Zip disks, which is why I want to keep it operational. It is interesting to note that Windows XP has no difficulty at all in detecting and installing it. This is one of the things preventing me from making more use of Linux. A:: The Zip Plus drive has both parallel and SCSI connectors. The easiest way to connect it is to fit a SCSI PCI card. Even a cheap, sub-£10 card is likely to perform better than parallel, with no need for special drivers. If SCSI is not an option, you will need the imm module. The website you refer to is indeed defunct, but the imm module is now part of the standard kernel source tree, so you don't need to install it. 
Mandriva (2005 and 2006) has this module included with the kernel, as do the Knoppix 4 Live discs. Just type, as root --- modprobe -v imm ,,, This should report the modules loaded: imm plus any it may depend on. If the drive contained a disc when you loaded the module, it will be recognised, usually as /dev/sda. If the drive was empty, the driver will detect when a disc is loaded. If this is your only drive using the SCSI sub-system (USB memory drives use that too) it will appear as /dev/sda. If you have another drive, or need confirmation of the device, type --- tail -f /var/log/messages ,,, and insert a disc. You will see something along the lines of --- scsi0 : Iomega ZIP Plus drive scsi : 1 host. Vendor: IOMEGA Model: ZIP 100 PLUS Rev: J.66 Type: Direct-Access ANSI SCSI revision: 02 Detected scsi removable disk sda at scsi0, channel 0, id 6, lun 0 SCSI device sda: hdwr sector= 512 bytes. Sectors= 196608 [96 MB] [0.1 GB] sda: Write Protect is off sda: sda1 ,,, In this case the drive is /dev/sda with a single partition at /dev/sda1. Back to the list ****** Audit directory permission changes in Red Hat Enterprise Linux Q:: I am a Windows sysadmin at a marketing company. The company employs a number of developers who work on in-house marketing campaign software that runs on Red Hat Enterprise Linux. The other Windows administrator with Linux experience who used to manage these servers resigned, and the development team took over the administration of all Linux servers. After a conversation with my ex-colleague I've realised that the development team and their "let's get the job done" attitude often meant changing execute rights on root-only applications and allowing some restricted directories to be accessed by everyone. Now that I have to start involving myself with these servers, is it easy to audit all these changes? 
A:: The only way to pick up every system modification is to revise your disaster recovery procedures and bring up a replica of the production system from bare metal and a clean copy of the operating system - something upper management may well be keen to support too. As a starter, however, RPM can help you determine which files on an installation have been modified. Running rpm -Va will show all files in all installed RPM packages that have been modified since installation. It is normal for some configuration files to be changed, but watch out for files and directories that report any of the following failures: --- M The permissions (mode) have been changed. 5 The file's MD5 checksum has changed. U/G Ownership of the file has changed. ,,, RPM is flexible enough to allow permissions and ownership to be set back to the original. For example, to recover 'M, U & G' failures reported for a particular package, run --- # rpm --setperms <package> # rpm --setugids <package> ,,, Back to the list ****** Access FoxPro .dbf Xbase databases under Linux Q:: I am a recent convert to Linux and have managed to find alternatives to most of my Microsoft programs. One thing I still haven't managed to do is read or update FoxPro .dbf files. I have a legacy database system that uses them and in Windows I would connect through an ODBC connection. Is there a way to do it in Linux? I am running Ubuntu and would prefer to use Python or PHP. A:: There are a few ways to access Xbase databases, as created by [the programming language] FoxPro. Rekall is probably the most complete. This is a database front-end, available in commercial and GPL variants. Access to Xbase databases is through Rekall's XBSQL library. The GPL version is available at www.rekallrevealed.org. Rekall can be scripted with Python. Another option is Knoda, from www.knoda.org. Once again, this is a database front-end that connects to various database servers.
Alternatively, you could use XBSQL and the Xbase library directly to build your own PHP- or Python-based front-end. However, a better long-term solution may be to use Rekall or XBSQL to export all your data to a MySQL or PostgreSQL database. Both of these database servers are well supported and have a wide variety of web or GUI front-ends available, and allow command line or script access. XBSQL is available for Ubuntu through the Universe repository. Select this repository in Synaptic and install libxbsql-bin to give yourself SQL command line access to your FoxPro databases. Back to the list ****** Mandriva failing to boot - hanging at Initialising Cryptographic API Q:: I've been trying to get Mandriva to run, but alas, all the bad press I've heard about Linux has been proved true. It doesn't work and there's little help available in deciding even where to begin when it doesn't. There were problems installing Mandriva, with a fatal error when I tried to get it to initialise from the first CD. When it finally loaded it went through all the (incredibly slow) process of installing. Now it won't boot - it hasn't booted once. The routine gets to 'Initialising Cryptographic API' then hangs. I have tried booting from the CD again and using the rescue option but am getting fatal errors again. No amount of internet searching gives me any pointers to where the problem is or what to do in any sort of plain language. Now that I have this useless system on my laptop's hard drive, how do I get rid of it if the system won't even run? I've just lost 10 gig of space to a dud system! A:: I am sorry you are having so much trouble with what is usually a straightforward installation. Despite what you have heard, Linux does work and there is help available, but nothing is perfect and some people have difficulties. Mandriva should not be "incredibly slow" to install.
I installed Mandriva 2006 and Windows XP on to a laptop last week and the installation times were within five minutes of each other, despite Mandriva installing a lot more software. This, and your other errors, indicates that there may be a problem with your hard disk. Not necessarily a fault, but possibly an incompatibility between the controller and the default Mandriva drivers. Fortunately, this is usually easy to deal with. When the Mandriva disc boots, press F1 at the splash screen. This gives a boot prompt where you can make changes to the way it starts up. Some laptops require you to type --- linux noapic ,,, If you had told me which laptop you are using, I may have been able to give a more specific recommendation: try following up on the LXF forums at www.linuxformat.com. If you want to remove Mandriva Linux and reclaim the disk space for Windows, you can either use something like Partition Magic on Windows to remove the Linux partitions and resize the Windows partition to fill the whole disk, or you can do it from the Mandriva installer. First you will need to remove the Linux bootloader (Mandriva 2006 uses Grub, but some earlier versions used Lilo). Boot from the CD, type rescue at the boot prompt and select the option to restore the Windows bootloader. Now reboot to start the installer and proceed to where you are given choices for partitioning. Select Custom Partitioning, delete all but the Windows partition, then select the Windows partition, click Resize and drag the slider to the far right. When the process is finished, click Done and you will see a warning about creating a root partition. Ignore this and reboot; eject the CD, let Scandisk do its stuff and then Windows will start. Back to the list ****** Find network hosts that are using up all the bandwidth Q:: I work in the IT department of a small hospital. More and more, we have PCs going out into our wards and doctors' areas - all of which have internet access.
Some time ago, I installed Squid and DansGuardian and they're working really well. The thing is, our network really isn't very fast - the main hospital still runs on 10MB Ethernet and some of the cabling infrastructure is over 15 years old. Sometimes our network slows down to a crawl, and I think it's because someone out there is downloading a lot of large files (some of the medical PDFs can be huge). Can you recommend any software to monitor the network for me and show me any hosts that are using up all the bandwidth? A:: Ntop (www.ntop.org) is a free, portable traffic monitoring tool, and should be your first port of call. Designed to be the network equivalent of top, it collects network metrics and can report on network traffic by interface, protocol and host. Or try MRTG (http://people.ee.ethz.ch/~oetiker/webtools/mrtg), a daemon that generates a visual representation of SNMP variables changing over time and has traditionally been used to graph bandwidth utilisation in and out of an interface. You may have to install an appropriate SNMP daemon if the monitored interface is on a Linux host, while most routers and managed switches have SNMP capabilities that can be enabled. MRTG becomes very resource-intensive when polling a large number of devices as, by default, it generates all image files every five minutes. However, you can use rrdtool to store the data collected by the polling engine and a third-party CGI script such as 14all.cgi to generate reports only on demand. Finally, Ethereal (www.ethereal.com), a free utility for sniffing, filtering and decoding network traffic, is invaluable for thorough traffic investigations but would be overkill for your everyday monitoring of network usage and capacity planning. 
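Since you already have Squid in place, its access.log is another quick way to spot heavy downloaders: field five of Squid's native log format is the size of each reply in bytes. The sketch below sums bytes per client IP with awk; the log lines are made-up samples, and on your proxy you would read /var/log/squid/access.log instead:

```shell
#!/bin/bash
# Sum bytes served per client from Squid's native access.log format
# (timestamp, elapsed ms, client IP, code/status, bytes, method, URL, ...).
# These lines are fabricated samples; on a real proxy, feed awk the
# contents of /var/log/squid/access.log instead.
log='1135900001.123 312 192.168.1.20 TCP_MISS/200 5120000 GET http://example.com/big.pdf - DIRECT/1.2.3.4 application/pdf
1135900002.456 120 192.168.1.31 TCP_HIT/200 2048 GET http://example.com/ - NONE/- text/html
1135900003.789 500 192.168.1.20 TCP_MISS/200 2480000 GET http://example.com/scan.pdf - DIRECT/1.2.3.4 application/pdf'

# Total field 5 (bytes) per field 3 (client IP), biggest consumer first
top=$(printf '%s\n' "$log" |
    awk '{bytes[$3] += $5} END {for (c in bytes) print bytes[c], c}' |
    sort -rn | head -1)
echo "Top consumer: $top"
```

Run against the real log, this gives you an instant shortlist of wards to investigate before reaching for ntop or MRTG.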
Back to the list ****** Splitting the display in screen Q:: Every time I try to split the display in screen, it just freezes the session and the only way I have found to get around this problem is to shut the session down, which means I am unable to re-connect to the session. I am currently running Fedora with Screen version 4.00.02. A:: Screen's keybindings are case sensitive. The command to split a screen is Ctrl+a S, with a capital S. Ctrl+a s, with a lower-case s, sends a control-s (xoff) to the terminal. This is the command to stop any output. You can do the same in a normal terminal with Ctrl+s. Ctrl+s effectively freezes the terminal, which is exactly what you are seeing. Now that you know the difference it should not happen again, but if you do forget to press the Shift key, Ctrl+a q sends a control-q (xon) to resume the terminal's output. If you want to use the split function regularly, it may be better to bind it to an easier key combination. Add these lines to the file .screenrc in your home directory: --- bind ^S xoff bind s split ,,, Now 'Ctrl+a s' will split the screen and 'Ctrl+a Ctrl+s' will send an xoff to pause the terminal. Back to the list ****** Oracle install problems Q:: I recently installed Oracle on my fresh Linux OS. The database installation was successful, although there were problems with my DBCA. In the process of fixing the above problem, someone suggested that I put this line at the top of my .java_wrapper in the jre directory of the JRE: LD_PRELOAD=/etc/libcwait.so. That was when all hell broke loose! My system came back with this message: "/etc/libcwait.so: cannot open shared object file: No such file or directory". I decided to take the line out or try to find libcwait.so and put it in the right directory, but my system wouldn't allow me to do this. I then decided to log out and reboot. Big mistake! During the reboot, the system froze with this message: "init: error while loading shared libraries.
/etc/libcwait.so: cannot open shared object file: No such file or directory. Kernel panic: attempted to kill init". It wouldn't go any further after this. Could anyone help? How do I load Linux or do safe-mode loading so I can take this offending LD_PRELOAD=/etc/libcwait.so out of the .java_wrapper? A:: Booting the system from a rescue disk will allow the root filesystem to be mounted and /etc/ld.so.preload to be removed, so that the system no longer attempts to load /etc/libcwait.so. /etc/libcwait.so is a strange place for a library, so checking the Oracle documentation to ensure that the path is correct would be a good first step to solving the problem. It will most likely be in /lib or /opt rather than /etc, although running find across the disk would locate the exact library path quickly. Back to the list ****** Access multiple home directories with a single FTP account Q:: I've set up a number of FTP accounts restricted to their respective directories. On our dedicated server running RHEL I managed to do this by setting the users to chroot(). These accounts are used by our clients, who upload spreadsheets and other data that is then downloaded and processed by our management consultants. This was very popular, as originally all information was exchanged over email. The consultants have now made my task more challenging by refusing to log in to each of their clients' FTP accounts, insisting that it should be easy to set up the FTP server in a way that they can log in with one username and password and see all their clients' directories as subfolders. I have to be careful not to allow one consultant to be able to see information pertaining to another consultant's clients. Can you help? A:: Assuming that you are using the stock vsftpd server that is bundled with RHEL 3 and 4, a bit of reconfiguration of how the accounts are created can take you a long way.
For a consultant called John Doe, an account without a login shell can be created as follows: --- # useradd -d /home/jdoe -s /sbin/nologin jdoe ,,, John's clients can now have their home directories created under /home/jdoe. To allow the consultant to descend into and manage files within the clients' home directories, the accounts can be created with 'jdoe' as the default group and full group permissions assigned thus: --- # useradd -g jdoe -d /home/jdoe/client1 -s /sbin/nologin client1 # chmod g=rwx /home/jdoe/client1/ ,,, The FTP server will not be able to transfer a client into his home directory unless execute permissions are set on all the parent directories: --- # chmod g+x /home/jdoe ,,, The FTP accounts you create will also have to be configured to chroot(). Back to the list ****** How to enable port forwarding in Mandriva Q:: I have four computers at home and one of them is acting as a router. This computer has Mandriva Linux 2005 and two network cards installed. It connects to the ISP with eth0 using a static IP address, while eth1 is sharing internet with the other three computers through the internet connection sharing utility in the Mandriva Control Center. I would like to enable some services on the client computers that require port forwarding from the router - for example, forward port 22 access on the public address to port 22 on a particular local address. How is this done? Are there any easy step-by-step instructions on how to do this, and continue using Mandriva's easy internet connection sharing utility? A:: Mandriva's internet connection sharing setting in the Control Center only allows for basic connection sharing, not for running a full router. There are alternatives that will do what you want, though. Firestarter is probably best for your needs. This handles connection sharing, port forwarding and a firewall, all from a simple GUI. Firestarter is in the Mandriva contrib repository.
If you have not already added this to your sources in Mandriva Control Center, go to http://easyurpmi.zarb.org and follow the instructions to add a contrib repository (add a PLF repository while you are there). Now install Firestarter from the Control Center and fire it up (sorry). If there is not a menu entry for it, run it from a terminal as root. The Firestarter wizard will offer to set up internet connection sharing for you, which you should accept, so disable the Mandriva connection sharing first. Now click on the Policy tab, click in the bottom section of the window and select Add Rule. For standard services, like SSH, simply select it from the drop-down at the side of the Name box and give the IP address you wish to forward to. Click Apply Policy and it's done. Finally, go into Mandriva Control Center > System > Services and make sure Firestarter is set to run on boot-up - this should have been done when you installed - and your port forwarding will always be available. Check the Firestarter documentation for other options. As you have it installed, you can use it for your firewall too; it provides more control than the Control Center firewall. Back to the list ****** Red Hat not detecting SATA hard disk Q:: I (a newbie) want to dual boot with XP and Red Hat 9.0 but have encountered a problem: RH9 cannot detect my SATA hard disk. Where can I find a device driver to load it, and how do I do that if the Red Hat disc is in the CD drive and I have no floppy drive? My system is an MSI Neo2 Platinum motherboard with s939/AMD6 3000+ chip, Nvidia 6800GT-AGP graphics card and 2GB of RAM. On another IDE [Integrated Drive Electronics] system, I can load Quake 3 on to RH9, but when installing (running the sh file) on Fedora, it gave me some kind of 'trap error' statement. A friend told me it's got something to do with the Glibc-something - can you help? A:: Red Hat 9.0 is several years old, older than your motherboard.
This is why it fails to recognise your SATA controller. Are you sure this is the right distro for you? You are clearly not running a server, not with that graphics card, and there are much better and more modern alternatives for desktop use. I would suggest you try a more modern distro, one with support for your hardware and one that comes in a 64-bit version to make the most of your processor. Something like SUSE 10.0, Mandriva Linux 2006 or Ubuntu 5.10 would be far more suitable. You can find a comprehensive listing of distros at www.distrowatch.com. It is impossible to answer your second question without knowing the details of the error given by the 'trap' message. If you send us the exact error message, we should be able to help. This applies to all help requests - the more information you give us, the better the chance of our being able to help you. Back to the list ****** Install Nvidia drivers in SUSE via the command line Q:: I've just upgraded my graphics card to an Nvidia MX440 (128MB). It works great in Windows after I installed the Nvidia drivers, but now I need to install the Linux drivers for SUSE 9.3, which I downloaded into my home partition. The problem is that I have to install them on the command line without the X Server running. How do I navigate to my home directory and run the driver setup routine? A:: Press Ctrl+Alt+F1 to switch to a virtual console and log in as root. Then type the following: --- init 3 cd ~carl #or whatever your username is sh NVIDIA-Linux<tab> ,,, The Tab key will complete the name of the Nvidia installer file for you. Answer the questions it asks (the defaults are usually fine) and it will install the drivers for you. Once the drivers are installed, you need to edit /etc/X11/xorg.conf (as root) to tell X to use the new drivers. Full details are in the Readme file you should have downloaded from the same place as the installer.
Before you edit the file, save a copy of it so you can reinstate it in case the Nvidia drivers do not work for any reason. If you are not comfortable with using a console-based text editor to edit files (although this is a good thing to learn if you want to go further with Linux), type init 5 to restart the desktop, select System > File Manager > File Manager - Super User Mode from the SUSE menu, navigate to /etc/X11, right-click on xorg.conf and select Kedit from the menu. After you have edited and saved the file, type init 3 to quit the desktop. Whichever method you used to edit xorg.conf, now type init 5 to start the desktop with the new drivers. You should see the Nvidia logo before the desktop loads, confirming that you used the correct drivers. Alternatively, you can install the Nvidia drivers from Yast by following the instructions at (deep breath) www.suse.de/~sndirsch/nvidia-installer-HOWTO.html#5, although you may not get the latest version. Back to the list ****** GRUB error 18 after installing SUSE Q:: This is my third attempt at installing Linux and I am at my wits' end with this system. I bought the DVD/CD edition [of SUSE] because of the advertised back-up from Novell. The installation went according to plan - matching a couple of reports I have read - until the first boot from the hard disk. I have tried every suggested way to install this system and always end up with the same results, namely --- Grub loading stage 1.5 Grub loading please wait Error 18 with a flashing cursor ,,, That is as far as it goes. What this has effectively done is rendered my computer unusable, as I cannot now get into Windows and have had to bring an old computer back into use for this email. A:: Because Grub has to fit in a small space on the disk, there is no room for helpful error messages, but 'Error 18' translates as 'Selected cylinder exceeds maximum supported by BIOS'. In other words, your BIOS - which initialises hardware - is unable to handle a hard drive this large.
Windows is able to boot because the Windows partition is at the start of the disk, within the area handled by the BIOS. This is not a limitation of Linux, which hasn't even started to load, but of your hardware. You would see the same problem if you tried to install two versions of Windows, say 98 and XP. There are a few ways to deal with this. You could work around the problem by making your Windows partition smaller (it is impossible to say how small without knowing details of your BIOS and hard disk) and telling the SUSE partitioner to create a separate /boot partition. This ensures that the files Grub needs are at the start of the Linux partitions, hopefully within the area handled by the BIOS. Once the bootloader has started, the BIOS limitations are irrelevant. A better option is to check your motherboard manufacturer's website for an update to the BIOS, which could make this problem disappear. A third solution, which isn't ideal but would give instant access to your operating systems, is to boot from the installation CD. The first option on the initial menu is to boot from hard disk, which will take you to the Grub bootloader screen, bypassing the need for the BIOS to boot the disk. To restore the Windows bootloader, boot from the Windows CD in rescue mode and run --- fdisk /mbr ,,, for Windows 9x, or --- fixmbr ,,, for Windows XP. Back to the list ****** SUSE installer stopping with 'storage modification failed' Q:: I have just tried installing SUSE Linux 10.0. It stopped at the partitioning table stage with error 3027, 'storage modification failed', while shrinking partition /dev/hda1 to 12.6GB. I am using an old Intel Celeron-based PC running Windows XP Home SP2 on a 20GB hard drive with no partitions at present - and I'm completely new to all this. A:: By "no partitions at present" I take it you mean no Linux partitions - there must be a Windows partition, or the installer would not be trying to shrink it.
The usual cause of a failure when resizing is that the partition has not been sufficiently defragmented. While in Windows, go to My Computer, right-click on Drive C and select Properties. Now run Error-Checking in the Tools tab, followed by Defragmentation. Then put the SUSE disc in the drive and reboot. The Windows XP defragmenter is not particularly effective, so you may need to run it more than once before the disk is in a suitable state for resizing. While you are in the Properties window, check whether the disk has been given a volume name in the box at the top of the General tab. This has been known to cause problems for the resizer, so delete it. Back to the list ****** Get Tiscali broadband with Sagem Fast 800 modem working on Linux Q:: My setup uses hard drive caddies that enable me to swap to and from Windows XP and Mandriva Linux. I am using Tiscali broadband successfully on Windows XP. My problem is that I cannot set up this connection on Linux and Tiscali doesn't appear to have the answers, though I have asked. Do you know how I set up a Linux connection using the Sagem Fast 800 modem? I have the Power On light lit and sometimes the Signal light, but the error message tells me that a modem is not connected. Sagem's website suggests that I plug in a USB hub that has its own power supply, but since the power supply light is on I do not believe that this is the problem. What I really want is an idiot's step-by-step guide to solve this problem. Can you help? A:: The best solution is to replace the USB modem with an Ethernet-based ADSL modem/router, which can be bought for around £25. The USB modems supplied by ISPs are minimal devices, leaving much of the work to the host computer and only barely working on Windows. A hardware modem/router will give better performance on Windows and Linux, as well as being easier to set up. It also allows you to connect more than one computer to your ADSL connection, should you wish to do so.
Sagem's point about using a powered hub is valid. The USB spec only requires the port to deliver 500mA (milliamps), and most USB ADSL modems are borderline in this respect. It might light the LEDs but not be enough to run the modem properly. Even if the modem connects, insufficient power may cause it to hang or drop the connection later. I don't have one of these modems, but I borrowed one to try to solve this on my Mandriva-powered laptop. It worked, but I was shocked at how much slower it was than my normal Ethernet connection, both to connect and to access web pages. Plug the modem in and wait a few seconds for the LEDs to steady. Start the Mandriva Control Center, go into Network & Internet > Set Up A New Network Interface and select ADSL Connection. There should be an entry for the Sagem USB modem in the list; select this. The drivers for the modem will be installed here, so have your installation discs handy. Choose your ISP from the list. For UK users there are only two choices, but the BT option will work with everything except AOL, since the ISPs all use BT lines. Go with the default on the next page, then give your login and password (these are case-sensitive). Choose whether you want the connection started when you boot - otherwise you can start it manually with startadsl - and allow it to test the connection. Check your login details if it doesn't connect. Back to the list ****** Creating a backup email server on RHEL 4 Q:: I've got a dedicated server running Red Hat Enterprise Linux 4, which is used to host my company's website and email. When I first got the server I decided to use Sendmail as I only host a single domain, and the Sendmail configuration worked straight out of the box. All I had to do was add my domain to /etc/mail/local-host-names and restart Sendmail. My business is growing quickly and I am becoming more and more reliant on the mail that gets sent through the server - so if my server ever dropped offline unnoticed I would lose significant business.
I have a DSL line at home and a PC running Fedora - can you outline how I can use that machine as a backup mail server in the event that my dedicated server cannot accept mail? A:: Let's say your domain is example.com. In DNS, add the following two MX records (Mail Exchanger records: these list which hosts will accept mail for a given domain): --- example.com. MX 10 primary.example.com. example.com. MX 20 secondary.example.com. ,,, As primary.example.com has a lower preference value (10), it will take precedence over secondary.example.com. However, if primary.example.com becomes unavailable, mail servers will attempt to contact secondary.example.com. Once the DNS record has been saved and the name service reloaded, try to dig the domain to see if both MX records are visible. As you are already receiving mail for example.com on the dedicated server, there is no need to adjust the Sendmail configuration there. On your Fedora PC, all you need to do is create /etc/mail/relay-domains containing your domain, example.com. Once you save the file, restart Sendmail. Ensure the domain is not added to /etc/mail/local-host-names on secondary.example.com, as this will cause mail to be delivered locally. Now, I suggest testing your configuration by stopping Sendmail on the dedicated server and sending yourself a message from a third-party mail server. If DNS is set up correctly, you should see the message hitting the Fedora box by running tail on /var/log/maillog. Don't be alarmed by the deferred message - that's actually Sendmail trying to get the message back out to primary.example.com. The Sendmail instance on your Fedora PC should try to resend the mail every hour, so it might take a while after primary.example.com comes back online before it receives the mail queued by secondary.example.com. Back to the list ****** D-Link network card dropping connection in Mandriva Q:: I am having problems with my wireless connection using Mandriva Linux 2006.
I am using a D-Link Airplus G+ laptop card (with NdisWrapper) to connect to my D-Link G604T wireless router on bootup. Everything starts OK and if I check /etc/resolv.conf the name server is set to 212.30.8.150. All is well for about 20 to 30 minutes and then I find I am unable to connect to any web pages. The network is still shown as up but when I check /etc/resolv.conf again it now reads 'nameserver 192.168.1.1' and I have to set up my wireless connection using Mandriva Control Center all over again. This happens regardless of whether I have WEP encryption set. I had a similar problem using Mandriva Linux 2005 and overcame this by setting the permissions to resolv.conf as read-only, but this doesn't seem to work with 2006. A:: Mandriva is using DHCP to get network address and routing information from the router. It would appear that your router is running as a DHCP server but not as a DNS server/cache. This router, like most, provides both services, so it is likely that DNS is either disabled or misconfigured. In fact, the router is telling your computer to use it as the DNS server, which should work. Your router's manual covers this in detail, but the most common solution is to go into the DNS section of the router's web configuration and set it to Auto Discovery. If this fails, you can set the servers manually on the same page. Alternatively, you can prevent Mandriva from updating the DNS servers via DHCP. Go into Mandriva Control Center > Network & Internet > Reconfigure A Network Interface, select your interface, go to the DHCP tab and turn off the option to Get DNS Servers From DHCP. Setting /etc/resolv.conf to read-only will not help if the DHCP client is running as root, since root is still able to modify write-protected files. Back to the list ****** Configuring Linux as a domain controller and groupware server Q:: I have a client who needs me to get him a collaborative mail server such as Microsoft Exchange.
I can easily do the project 100% in Windows (ie Windows 200x and Exchange 200x). However, I know that I can configure Linux as a domain controller. I am sure I can handle that but I need to know: is there a mail server in Linux that can work like Exchange while still using Microsoft Outlook as the client? It would be great if there is a total Linux solution. A:: There are a number of options, depending on how much your client is prepared to pay or how much work you are prepared to do. OpenGroupware.org (www.opengroupware.org) is an open source groupware server that works with clients on all major platforms. It isn't a mail server itself, but it provides the groupware functions and works with standard mail servers, with which you are probably already familiar. OpenGroupware.org can be used under the GPL or LGPL licence, so there is no licensing cost, but there would probably be a fair bit of work in setting up and supporting the system. If the server is running on a separate machine, the SUSE Linux Openexchange Server provides a Linux equivalent to MS Exchange, working with Microsoft clients like Outlook. This is a complete OS install, so it cannot be run alongside other software on an existing system. You can find more information and an online demo at www.novell.com/products/openexchange - note that this solution has a price tag. A third alternative is the similarly named, but unconnected, Open-Xchange from www.openexchange.com. This is another commercial offering, available for Red Hat and SUSE. As with the SUSE product, it is intended for use as a direct replacement for MS Exchange. Which of these is most suitable depends on your client's circumstances and budget, but one of these three should provide what you and they need. Back to the list ****** Updating an old hard drive with a new OS Q:: I've put a hard disk in my old computer. It did have a damaged hard disk but now I've reformatted it and partitioned it to a primary DOS partition.
However, Windows 95 is old and I can't do anything with it because it's a new computer. How would it be possible to install another operating system, even Windows ME or Linux, onto the old drive that's been re-partitioned, or am I going to have to install Windows XP instead? A:: You could very easily install Fedora or Mandrake onto the disk, or even install both Linux and Windows XP onto it. Both will repartition the disk when you install them, removing the old Windows 95 filesystem. Back to the list ****** Track memory usage over time Q:: I have recently rented an entry-level Linux server with a single SCSI disk and 1GB of RAM to host one of my websites. The server has crashed twice over the past two weeks and the hosting provider let me know that the server had to be rebooted because it had completely run out of memory. We were also told that we can investigate the contents of our sar log files for a history of memory use. We have since revised our scripts and hopefully the code has been made more memory-efficient. We've also created a 1GB swap file in addition to the existing 1GB swap partition as a precaution. Now we are thinking of writing some PHP scripts to process the sar logs to help us visually track memory usage over time and help us with capacity and upgrade planning. Is there a tool that does just that? A:: MRTG (the Multi Router Traffic Grapher, http://people.ee.ethz.ch/~oetiker/webtools/mrtg) is typically used to graph network bandwidth use but can be easily extended to plot other metrics. Graphs are generated for the past day, week, month and year, making MRTG an excellent lightweight tool for visualising trends.
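MRTG does not collect data continuously by itself; it is normally run from Cron every five minutes to take a new sample for each target. A typical crontab entry might look like this sketch (the binary and configuration paths are assumptions - adjust them to your installation):

```shell
# Hypothetical crontab entry: poll all MRTG targets every five minutes
*/5 * * * * /usr/bin/mrtg /etc/mrtg/mrtg.cfg >/dev/null 2>&1
```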
To monitor memory and swap utilisation I use the following target in the MRTG configuration file: --- Target[srvmem]: '/usr/local/sbin/memstat.sh' Title[srvmem]: Mem and Swap Usage PageTop[srvmem]: Mem and Swap Usage MaxBytes[srvmem]: 100000000000 ShortLegend[srvmem]: B YLegend[srvmem]: Memory LegendI[srvmem]: Swap LegendO[srvmem]: Mem Legend1[srvmem]: Swap Legend2[srvmem]: Mem Options[srvmem]: gauge,growright,nopercent kMG[srvmem]: k,M,G,T,P,X Colours[srvmem]: RED#bb0000,BLUE#1000ff,GREEN#006600,VIOLET#ff00ff ,,, This calls the following script (/usr/local/sbin/memstat.sh) to get the amount of RAM and swap used: --- #!/bin/bash /usr/bin/free -b | /bin/awk ' \ NR==2 { ramUsed = $3 } \ NR==4 { swapUsed = $3 } \ END { print swapUsed "\n" ramUsed "\n0\n0" } ' ,,, Memory utilisation will be shown as a nice blue line and swap utilisation as a red line. There are many resources on the internet to help you set up MRTG, including www.linuxhomenetworking.com/linux-hn/mrtg.htm. It's easy to get carried away with MRTG, turning any possible aspect of a network into a graph. In such cases MRTG performance can be improved by using RRDtool as a logger and a third-party script as documented in the RRDtool Integration section of the MRTG documentation. Back to the list ****** Monitor open ports and listening processes Q:: One of our Linux servers was hacked recently and a backdoor shell installed. This resulted in considerable downtime as our co-location provider, with whom we host a rack of eight servers, unplugged the compromised server until it was repaired by our engineers over a remote console. The compromise had gone undetected for over a week until a third party filed a complaint, which prompted the hosting provider to pull our server offline. We adopted ideas for looking for signs of a compromise and since the incident we have scripted various checks to run daily on each server.
Could you please recommend an easy way of monitoring which ports are open on each of our servers, to help alert us to any unwanted listening processes? A:: A hardware firewall or an Iptables configuration on each host should be your first line of defence, configured to block traffic to all ports on a server except for the services a particular host is configured to listen on. Instead of having each server portscan itself, it may be a good idea to designate one of the servers on the network switch to do all the scanning, thus giving a true third-party perspective. Nmap (www.insecure.org/nmap) would be my tool of choice for, among other things, scanning an IP for listening ports. For example, a basic scan of 192.168.100.100 for all listening TCP ports in the range 1-8,000 could be: --- $ nmap -p 1-8000 192.168.100.100 ,,, To simplify handling the results, you could use a script such as nmap-audit (http://heavyk.org/nmap-audit). In conjunction with Cron, nmap-audit can be used to email the administrator details of just those ports that have been newly opened. Back to the list ****** Back up Cyrus IMAP email and configuration Q:: I am running a mail server at home (SUSE 9.3) with Cyrus as an IMAP server. It's the best thing there is. It fetches my emails, runs them through two virus scanners and sends them to my Cyrus server. I can read my mails on my desktop and my notebook, and using SquirrelMail I can read my emails from all over the world. But... I accidentally deleted a full mailbox and I didn't have a backup. How do I back up my email and configuration, and how do I restore them? I could back up the /var/spool/imap directory, but that would only back up my emails. To restore them, I have to make a new subdirectory using my IMAP client, copy the emails in there and reconstruct the mailbox. But that doesn't back up the configuration. So how do I do this? A:: Backing up the emails and backing up the configuration are two separate tasks.
The configuration is the easy part, as it is all stored in /etc/imapd.conf and /etc/cyrus.conf. As long as you keep a backup of these files you can copy them back at any time. Backing up the mail should be as simple as creating a copy of /var/spool/imap - this contains the emails and their status information. I use rdiff-backup for this sort of task. It will back up a directory to another location, either locally or over the network. It also allows you to roll back to a previous version, although this is more appropriate for configuration files than mailboxes. You could also use rdiff-backup to maintain backups of your configuration. Put these lines in a script and call it from Cron: --- BACKUP_DIR="/backup" rdiff-backup --terminal-verbosity 2 /etc $BACKUP_DIR/etc rdiff-backup --terminal-verbosity 2 /var/spool/imap $BACKUP_DIR/imap ,,, Set BACKUP_DIR to wherever you keep your backups. If you want to keep archived copies of your emails, you could use a procmail recipe to copy a duplicate of the mail to another mailbox, providing you are already using procmail of course. Something like --- :0c: $MAILDIR/${LOGNAME}-bak/ ,,, will create a backup directory for each user and place a copy of each mail in there. Back to the list ****** Access standard input and output as files Q:: I'm trying to write a couple of Bash scripts using utility programs that take keyboard input. For example, --- update-alternatives --config xxx ,,, needs a choice from the keyboard. I want to automate it from a parameter passed when the script is used. At the moment my best effort writes a file using the input parameter, runs update-alternatives, redirecting input from the newly created file, then deletes the file. There must be a better way. How can you pass a parameter rather than use keyboard input without writing it to a file first? A:: Remember the Linux (and Unix) creed: "Everything is a file." This includes standard input and output.
They have the special file handles &0 for stdin and &1 for stdout (&2 is stderr). This should do what you need: --- echo "A" | update-alternatives --config xxx <&0 ,,, Where A is the input parameter. echo sends the string to stdout. The pipe (|) sends that stdout to stdin for the next command. &0 is the file handle for stdin, so <&0 redirects it to the command. Another Linux truism applies here too: "There are always at least two ways to accomplish a task." Instead of &0, &1 and &2 you can use /dev/stdin, /dev/stdout and /dev/stderr. The & versions are easier to type for quick shell commands, but the /dev versions will be a little more readable when you look at the script six months from now. Back to the list ****** How to install Wireshark (formerly Ethereal)? Q:: I have just installed SUSE Linux on a redundant PC, as I really would like an understanding of Linux. The install was easy to follow. However, I am a network engineer and would now like to install Wireshark [a network protocol analyser]. Because I am new to Linux and have very little experience, could you advise me on what to download and how to build and install it on my Linux PC? A:: While it is fairly easy to build Wireshark, or most other programs, from source, one of the benefits of a distro like SUSE is that the bulk of what you are likely to need is available to install from the discs or a central repository. To install Wireshark the easy way, run Yast from the System section of the SUSE menu, go into the Software section and click on Software Management. Now you only need to type 'wireshark' in the search box, select the program from the results list and press Accept to install it. If there are any dependencies - other programs or libraries needed by the software - these will be installed automatically, so don't worry there. By default, Yast only knows about packages on the installation media.
You can add extra installation sources (or repositories) by selecting Installation Sources from Yast's Software section. There is a list of SUSE mirrors at www.opensuse.org/Mirrors_Released_Version. Pick one of these and add it to Yast to make sure you have access to the latest updates. Back to the list ****** Blocking attacks on port 22 Q:: I am going to work abroad for a couple of months and I want to have remote access to my network indoors. So I installed FreeNX on SUSE 9.3 and forwarded port 22 on my Netgear router to the machine, and with no effort at all I was able to bring up my desktop by connecting through the internet to my local computer, look at my emails and start any application available on the box. This morning, looking at the /var/log/messages file, I saw that someone is attacking port 22. There were hundreds of messages from sshd for different users saying 'Invalid user <xxx> from ::ffff:195.90.196.20'. There are only two registered users on my system that can log in: root and my user ID, which looks nothing like anything a hacker can guess. I also use strong passwords with upper- and lower-case letters as well as numbers, and no dictionary words. Should I be worried about the attacks? Is there a way to tell sshd to refuse connections after x failed logons in y seconds, or should I just monitor it and drop packets on an IP address basis? A:: Such attacks are commonplace if you expose port 22 to the world at large, but there are a number of steps you can take to reduce the chances of someone getting in. Strong passwords are the first step. As you are using SSH for remote desktop use, you don't need root access, so disable that in /etc/ssh/sshd_config. Find the line --- PermitRootLogin yes ,,, and change the yes to no to block root access. You can still have root access if you need it by connecting as your user and using su to switch to root, but a cracker would have to first crack your username, then your password and then the root password.
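For orientation, here is how the relevant lines of /etc/ssh/sshd_config might look with all of the measures described in this answer applied (a sketch - the port number is an arbitrary example, and only disable password logins once key access is confirmed working):

```shell
# /etc/ssh/sshd_config (excerpt) - hypothetical hardened settings
Port 2222                    # arbitrary non-standard port above 1024
PermitRootLogin no           # or 'without-password' for key-only root logins
PasswordAuthentication no    # keys only; leave as 'yes' until your key works
```

Restart sshd after editing the file, and keep your current session open until you have verified that you can still log in with the new settings.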
Alternatively, change the yes to without-password. This allows root logins, but only if you have an authorised key. See the man pages for ssh and ssh-keygen for details on generating and using keys like this. You could require all users to have a key, but this would mean copying your key to any computer you needed to use to log in. This is the best option if you will be using your own laptop via whatever internet connection you have available, but it won't be much use if you plan to use other computers. You enable this in the configuration file with --- PasswordAuthentication no ,,, You could also run SSH on a non-standard port, something above 1024, by changing the 'Port 22' line in sshd_config and passing the new port number to nxclient or knx. This provides an extra layer of complication for the crackers to work through, and significantly cuts down on the number of logged access attempts. There are a number of programs that will monitor log files and block IP addresses that attempt brute force attacks on SSH or other ports. You could look at http://breakinguard.sourceforge.net, http://daemonshield.sourceforge.net or www.csc.liv.ac.uk/~greg/sshdfilter. Back to the list ****** Connect securely with encrypted access to Dovecot email server Q:: In our office we have an internal Dovecot-based email server. We would like to offer our employees encrypted access to it, as some of them want to connect from home, but we are worried about the security implications of allowing this. Please could you tell us how we can let them connect securely, using secure email protocols? A:: Securing these basic services is not hard, even though the mathematical concepts of cryptography can be very difficult to grasp. All we need to do is create an SSL certificate and make sure that the email server uses the certificates that we have created. You could also buy a certificate, but if it is just for internal usage, the expense may not be justified.
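A self-signed certificate, by contrast, costs nothing but a couple of openssl commands. This sketch (the Common Name and lifetime are placeholder choices, and -subj skips the interactive prompts) produces the kind of combined key-and-certificate file Dovecot expects:

```shell
# Keep the private key unreadable by other users
umask 077
# -nodes leaves the key unencrypted so the daemon can read it unattended;
# CN should be the domain name of your mail server
openssl req -new -x509 -nodes -days 365 \
    -subj "/CN=mail.example.com" \
    -keyout dovecot.key -out dovecot.crt
# Join key and certificate into a single PEM file
cat dovecot.key dovecot.crt > dovecot.pem
```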
If it were for publicly accessible services I would say you would need a certificate from a vendor that is recognised by most popular email clients, or a warning will display each time. In order to create the certificate we will be using the OpenSSL (http://openssl.org) program, which is included with most distributions. As the openssl command can be extremely obscure, there is a simple interactive interface which can be used to generate most certificates that you will need. We could use a script that comes with Dovecot, mkcert.sh, but if we use the OpenSSL makefiles we can make other types of keys and certificates. As root, change to the /etc/pki/tls/certs (/usr/share/ssl/certs for SUSE; /etc/ssl for Mandriva) directory. You can type make at the command prompt to get a reminder of what certificates you can create. Normally we create a key first and then create the certificate from the key; however, if we just specify make dovecot.pem it will create a key and certificate in the same file for use with Dovecot. All you need to do is fill in the information when prompted; the defaults are listed in square brackets. The most important field that you need to fill in is the Common Name, for which you should give the domain name of your server. All the others should be filled in as appropriate. Now we just have to copy the file we have created to the required locations specified in the Dovecot configuration file (/etc/dovecot.conf) with the two parameters ssl_cert_file and ssl_key_file, being the certificate and key file respectively. So we just copy the joint key and certificate file to the locations specified, /etc/pki/dovecot/dovecot.pem and /etc/pki/dovecot/private/dovecot.pem, removing the automatically generated localhost files that had been created. We have now generated a unique certificate and have a secure Dovecot mail server. Back to the list ****** Sync NTP server at specific times Q:: I have an old box running Mandrake 8.2 as a home server for about six other PCs in the house.
We are still on ISDN dial-up as there is no pressing need to upgrade to broadband. I have been using NTP fine for the past few months to update the server's clock once a week. I have a simple Cron job that connects to the internet, calls ntpdate to sync the time with one of the NTP servers at uk.pool.ntp.org, then disconnects from the internet. The next stage is to get the client PCs to sync to the server's time. I cannot use ntpdate on the server as it is a one-off command rather than a daemon, so I have to use ntpd, the actual NTP daemon. Now that I've finally figured out how to set it up - the NTP documentation reads like an astrophysics PhD thesis rather than a user's guide - I can indeed sync the client PCs to the server, but the NTP daemon tries to sync to a server on the internet every few minutes, and most of the time this fails as the net connection happens to be down. Basically, how can I use ntpd but force it to only sync with a server on the internet at specific times? Alternatively, is it possible to use my ntpdate Cron job but run a separate NTP server that takes the current system time and serves that to the client PCs? A:: Most NTP servers, including ntpd and openntpd, are designed to be used with a permanent internet connection. Liaising with other time servers is an integral part of the way they work, making them unsuitable for your needs. Chrony is designed to provide time services to a network with an intermittent, or even non-existent, internet connection. It consists of two programs: chronyd is the daemon, providing time services based on the system clock; chronyc is a command line program that can be run from your cron script to synchronise the clock with another time server. Get the latest version at http://chrony.sunsite.dk. You will need to compile it from source, but this is straightforward and clearly documented in the INSTALL file. The documentation is verbose, but setting up a basic server is quite simple.
Put the following lines into /etc/chrony.conf, replacing each nnn.nnn.nnn.nnn with the IP address of a server. --- server nnn.nnn.nnn.nnn offline server nnn.nnn.nnn.nnn offline server nnn.nnn.nnn.nnn offline keyfile /etc/chrony.keys commandkey 1 driftfile /etc/chrony.drift allow 192.168.1 ,,, You can obtain a list of suitable servers with --- netselect -s 3 pool.ntp.org ,,, The 'offline' parameters stop chrony trying to synchronise with the servers, which is what you want. The allow command indicates the IP range allowed to get time from the server. Set up a password with --- echo '1 somepassword' >/etc/chrony.keys ,,, Then start the daemon with the supplied init script. Now it will serve time to your network based on its system clock. To update the system clock, amend your Cron script to do this after connecting: --- /usr/local/bin/chronyc <<EOF password somepassword online EOF ,,, Repeat it before disconnecting, changing online to offline. Back to the list ****** Mapping IntelliMouse buttons in xorg.conf Q:: I have an IntelliMouse, with seven buttons: 1 = left, 2 = middle, 3 = right, 4 and 5 = wheel, then 6 and 7 as extra buttons by my thumb. I want to use those last two buttons for something like volume control or maybe track-skipping in Amarok. As far as I can tell, this can't be set up in the xorg.conf file. Can you help? A:: First you need to make sure that all seven buttons send events to X. Run xev from a terminal and click the various buttons while the window is active. If the button is recognised, you'll see something like this: --- ButtonRelease event, serial 31, synthetic NO, window 0x3600001, root 0x5a, subw 0x0, time 191458267, (86,11), root:(91,162), state 0x10, button 4, same_screen YES ,,, If you get no events from the extra buttons, edit the mouse section of xorg.conf. You should already have a ZAxisMapping line, so change this to the two highest numbered buttons and add a Buttons line to indicate the number of buttons.
This is how it looks for my seven-button mouse: --- Section "InputDevice" Identifier "USBMouse" Driver "mouse" Option "Protocol" "auto" Option "Device" "/dev/input/mice" Option "Buttons" "7" Option "ZAxisMapping" "6 7" EndSection ,,, Restart X and run xev again to make sure the buttons work. Your extra buttons will be 4 and 5 - the wheel is now 6 and 7. Now you need to map these events to actions. A useful program for this is XBindKeys, available from http://hocwp.free.fr/xbindkeys or possibly in your distro's package repository. XBindKeys uses a simple config file to map keyboard and mouse events to commands. For example, you might put --- "firefox" b:4 ,,, in ~/.xbindkeysrc. This will cause it to start Firefox when you press button 4. To control Amarok, or any other KDE application, you'll need to investigate DCOP (Desktop Communications Protocol). Run kdcop and look at the commands that Amarok accepts. Execute the commands from kdcop and it shows the command line DCOP call that will do the same thing from a script, or XBindKeys. You will need to experiment to find what you need, but for starters, this will skip to the next track in Amarok: --- dcop amarok player next ,,, Back to the list ****** Relay mail with cron jobs and Sendmail Q:: I'm currently running Fedora on my desktop and was wondering if it's possible to relay mail sent via Cron jobs etc through my mail server running Sendmail. It would save me having to run another instance of Sendmail on my desktop. A:: Although there is next to no configuration required to set up Sendmail or Postfix on Fedora, you can use ESMTP or SSMTP to relay your desktop's mail through an external mail server. I've used SSMTP in the past, but it appears that only ESMTP is currently available via Yum. To install, run --- yum install esmtp cat > /etc/esmtprc << "EOF" hostname = mailserver:25 mda "/usr/bin/procmail -d %T" EOF ,,, This basic configuration will route mail through a server named mailserver, on port 25.
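If your relay host also demands SMTP authentication, esmtprc can carry the credentials. A sketch with placeholder hostname and account details - check man esmtprc for the exact keywords supported by your version:

```shell
# /etc/esmtprc - hypothetical authenticated relay configuration
hostname = mail.example.com:587       # placeholder relay host and submission port
username = "user@example.com"         # placeholder account name
password = "secret"
starttls = enabled                    # negotiate TLS before authenticating
mda "/usr/bin/procmail -d %T"         # deliver local mail via procmail
```

As the file then contains a password, make sure it is not world-readable.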
You can man esmtp and man esmtprc for more information on ESMTP and its configuration file. If Sendmail is currently your default MTA (run alternatives --display mta to check), you can issue the following to make ESMTP the default: --- alternatives --config mta ,,, This will bring up a basic menu that allows you to switch the default MTA. Finally, if you intend to relay mail through your Sendmail server destined for a mailbox that's not local (ie redirecting Cron output to an @gmail.com address), ensure you configure /etc/mail/access on the Sendmail server to permit your desktop to relay through it. As I've suggested, SSMTP is another option that can be used instead of ESMTP. It's not available via Yum, so you'll need to install it manually. Here is a basic outline of how to get it up and running: --- cd /root wget ftp://ftp.debian.org/debian/pool/main/s/ssmtp/ssmtp_2.61.orig.tar.gz tar -xzvf ssmtp_2.61.orig.tar.gz cd ssmtp_2.61 make make install ,,, This will prompt you for a few pieces of information and will install the SSMTP binary to /usr/local/sbin and ssmtp.conf to /usr/local/etc/ssmtp/ssmtp.conf. The main line that needs to be adjusted in ssmtp.conf is 'mailhub=mail', where 'mail' is your Sendmail server's hostname. For more information, run man ssmtp and view the default ssmtp.conf configuration file in the ssmtp_2.61 source directory. As this is a manual install not using RPM, you will need to use the alternatives command to add SSMTP to the alternatives system. This can be done with --- alternatives --install /usr/sbin/sendmail mta /usr/local/sbin/ssmtp 10 ,,, Finally, ensure that SSMTP is the default MTA: --- alternatives --config mta ,,, Again, this command will ask which MTA to use, and don't forget that if you plan on relaying to external addresses, you should configure the Sendmail server to permit relaying from your desktop's IP address. Back to the list ****** Adding an extra disk to Linux Q:: I have a Linux PC, running Red Hat 9.2.
I want to add an additional disk drive. I know this sounds like the most basic of tasks, but having only done this with Windows, I don't really know what to expect. After I've added the hardware and rebooted, what do I do next? I assume that I need to format the drive but where would I complete this task? Am I right in thinking that Linux will automatically recognise the addition of the drive? Will I see it as an additional drive or just continuous disk space? Any clues that you could give me would be a real help. A:: When adding an extra disk to Linux, you'll have to partition it using fdisk and then build filesystems on the partitions you create. Once created, you can mount them in the appropriate location and use them. To maintain a mount across a reboot, adding an entry to /etc/fstab for the new filesystem will ensure that it's mounted in the correct location when the system comes back up. If your new disk is hdc, you can do: --- fdisk /dev/hdc mke2fs -j /dev/hdc1 mount /dev/hdc1 /home2 ,,, You could also copy the contents of /home onto /home2 using 'cp -fra /home/* /home2' once it's mounted, then modify /etc/fstab to mount /dev/hdc1 onto /home at boot time. Back to the list ****** Network multi-pathing on two Ethernet cards Q:: Running Red Hat, I have two Ethernet cards plugged into a single switch and a single static IP address. Is it possible to set up network multi-pathing on these two network interfaces so that if one link dies, it fails over to the second without setting up a virtual IP address? A:: Yes, IP multi-pathing (bonding) allows a host to be redundantly connected to a network by two independent paths. There are other bonding methods, but as you want high availability I'd suggest IP multi-pathing is your best bet. Unlike the floating virtual IP method of multi-path redundancy, bonding creates a floating virtual interface.
Under Red Hat, to configure bonding you need to associate the two physical interfaces with a new virtual bonded interface, 'bond0', within the standard network configuration files. Thus, ifcfg-eth0 and ifcfg-eth1 will need to contain the following: --- /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 ONBOOT=yes TYPE=Ethernet MASTER=bond0 SLAVE=yes /etc/sysconfig/network-scripts/ifcfg-eth1 DEVICE=eth1 ONBOOT=yes TYPE=Ethernet MASTER=bond0 SLAVE=yes ,,, Now, create a new bonded network interface file called ifcfg-bond0 that contains the network-specific information that your previous network configuration file (ifcfg-eth0) contained, like this: --- /etc/sysconfig/network-scripts/ifcfg-bond0 DEVICE=bond0 ONBOOT=yes BOOTPROTO=static TYPE=Ethernet IPADDR=x.y.z.a NETMASK=x.y.z.a ,,, You've now configured the network information on the new virtual interface and associated it to the two physical ones. Next you'll need to configure bonding to initialise on boot, and set the polling interval and bonding method, in this case Active/Standby. Add the following to /etc/modprobe.conf: --- alias bond0 bonding options bonding mode=1 miimon=100 primary=eth0 install bond0 /sbin/modprobe eth0; /sbin/modprobe eth1; /sbin/modprobe bonding; /bin/true ,,, Here mode=1 selects active/standby operation and miimon=100 sets the link polling interval in milliseconds (100ms). Now you'll need to load the bonding module and restart networking. As root, execute the following commands: --- modprobe bond0 service network restart ,,, The only thing left is to update any config files, such as Iptables, that reference the physical interface with the new bonded interface. Back to the list ****** Installing OpenOffice.org: circular RPM dependencies Q:: I attempted to install OpenOffice.org. Unarchiving the z-file gave no ./configure directory, only the RPM files. Using rpm from a shell revealed a circular dependency in the files.
CORE01 fails to install because it depends on CORE02 to CORE08, and CORE02 to CORE08 fail to install because they depend on CORE01. I am running Mandrake Linux 9.1. I ran rpm straight from my home directory where I'd saved the RPM files. A:: How were you running the rpm command? If you try to install each file separately, it will fail because, as you have discovered, the RPMs are interdependent. The rpm command is capable of handling this situation, but only if you pass it all the files at once. Running --- rpm -Uhv *.rpm ,,, will install all the RPMs at the same time. Make sure there are no other RPM files present - it is safest to copy them to their own directory. Back to the list ****** How to install QuickTime plugins for Firefox on SUSE Q:: I installed SUSE 10.0 OSS: it was easy to install and you get free office stuff, but I cannot find an Apple QuickTime plugin for Firefox. I did try the Quicktime4Linux plugin, but it didn't work (it did until they released QuickTime 7.0). The Apple site assumes you either have Mac OS or Windows. Anyhow, I find it really hard to believe Apple does not port a plugin across, since OS X basically runs on a Linux kernel. Unless you know of an alternative site to the QuickTime trailer site, I find myself in a little bit of a pickle. A:: The MPlayer plugin for Mozilla works with Firefox too (http://mplayerplug-in.sourceforge.net). This lets you view any file that MPlayer can handle in the browser, including many (though not all) QuickTime files. Many of the later movies use the Sorenson codec, which is a proprietary codec that will not be supported by any open source project - unless it can be reverse engineered. Alternatively, try CrossOver Office. This development of Wine enables you to use Windows plugins in Linux browsers, as well as run various Windows programs directly on the Linux desktop. CrossOver Office is available from www.codeweavers.com/site/products/cxoffice and the standard version costs $39.95.
Incidentally, Mac OS X is not based on Linux but on a BSD variant. Back to the list ****** Promise SATA card not working in RHEL Q:: I'm using CentOS (based on RHEL), and I cannot get my Promise SATA card (with attached 500GB disk) to work properly. The latest driver on Promise's website is for an older kernel and doesn't work. I have emailed Promise 20-30 times and I always get a default email reply, but no help at all. Also, I'm planning to buy an IBM laptop, but I want to scratch away Windows XP and install Linux. Can you advise me on which distro I should use - which distro supports IBM laptops with all its drivers and so on? A:: I have a Promise SATA controller in this computer, so I know they work. The reason Promise only has drivers for older kernels on its website is that they were incorporated into later kernels, so a separate driver is no longer needed. The chances are that your kernel has been compiled without support for your particular controller, so you may need to recompile it. If you have never compiled your own kernel before, it may seem a daunting task, but it's really quite straightforward. The main thing to remember is that you should install the new kernel alongside the old one, not overwrite it, so you still have your old setup as a fallback option. There are various HOWTOs on kernel compilation, such as the one at www.digitalhermit.com/linux/Kernel-Build-HOWTO.html. As for which distribution to install, IBM laptops are about the best supported in Linux, so any distro should work with your hardware - that means you can make your choice based on which one you prefer to use rather than which one works. Most distros have some form of Live CD or DVD available, so you can try out a few before you decide which one to install. See the Distrowatch section on page 34 for details of what's new. Back to the list ****** Cannot connect Dell Precision M60 to external monitor Q:: I have a Dell Precision M60 running SUSE 10.0. 
Unfortunately, I (and apparently everyone else posting to Google-able sites) cannot connect to an external monitor. I have tried an older Dell analogue tube monitor, as well as a brand-new Sony KLV-S23A10 flat panel TV with a PC connection. I have tried direct connection (no extension cords) from PC to this monitor as well as to an older analogue unit at work. The laptop uses Fn+F8 to switch between internal and external video; invoking this typically hangs the machine. I've examined the BIOS settings (F2 on boot) and tried both video source settings, System and Dock... I have followed various HOWTOs and examined my xorg.conf, to include adding new Monitor and Display and Screen modules... I have changed my resolution, HSync and Vsync settings to match the acceptable range of the monitor (1280x768, 47.4H and 60V being the recommended values)... No success. A Yast hardware scan does not find the monitor anywhere. A similar unit (Dell Latitude) at work that runs Windows XP is able to find an external monitor without problem, so it must be a problem with the Linux kernel, or at least the SUSE implementation thereof. A:: Does trying to switch between the video outputs actually hang the machine, or does it just appear to hang because all output is now going to a non-existent device? If the latter, Fn+F8 should switch back to the internal display. X.org has a way to use two monitor outputs at once, called MergedFB. This can be used in the same way you'd use Xinerama to span a desktop across two monitors, but it also has a 'clone' mode, which puts the same display on both monitors. This is the mode to use when you want to output a laptop's display to a desktop monitor or projector. 
You'll need to add the following lines to the Device section of /etc/X11/xorg.conf: --- Option "MergedFB" "auto" Option "CRT2Position" "clone" Option "MetaModes" "1024x768 800x600 640x480" ,,, The first line saves switching X.org configurations whenever you want to use an external monitor - the system switches to MergedFB if a monitor is detected when X starts. The second line makes the display on the external monitor a clone of the first. The third is a list of supported display modes, so should be the same as the Modes lines in the Screen section. If you don't have any Modes lines, just enter the mode that you usually display at to get you started, provided the external monitor also supports this mode. You may need to set your BIOS to send video both to internal and to external displays. Your Precision laptop uses an ATI Radeon chipset, and I know that the X.org driver for this definitely supports MergedFB as I use it on my iBook. There is a lot more information on setting up and using MergedFB, for cloned or dual-head displays, in the dual-head tutorial at www.winischhofer.at/linuxsispart2.shtml#mergedfbmode. Back to the list ****** How to reset VNC password Q:: I set up a remote desktop (through Xandros Control Centre) on a now headless box, and now I can't remember the password. Some Googling showed up nothing useful (the VNC website says to use Vncpasswd - however, it doesn't exist on the box). I cannot attach a monitor to the box, so have no way of launching Xandros Control Centre to reset. I think Xandros just uses standard KDE remote sharing; how do I reset the password from the command line? A:: Provided the headless box is running an SSH server, there are a couple of ways to reset the password. If it is using the standard KDE desktop sharing, you could edit ~/.kde/share/config/krfbrc and remove any password lines. This will reset it to password-less operation. Then you can connect and set the password through the KDE Control Centre. 
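If the desktop sharing settings do live in krfbrc, you could also strip the stored password over SSH in a single step. This is only a sketch - the host name is made up, and the assumption that the relevant entries begin with 'password' should be checked against your own krfbrc first:

```shell
# Hypothetical host name; back up the file, then delete any password lines.
# The key name 'password' is an assumption - inspect your krfbrc first.
ssh fred@headlessbox \
  "cp ~/.kde/share/config/krfbrc ~/krfbrc.bak && \
   sed -i '/^password/d' ~/.kde/share/config/krfbrc"
```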
Alternatively, you could connect to the box using ssh with the -X option, then run kcontrol. Provided SSH on the headless box is configured to allow X forwarding, the KDE Control Centre will open on your desktop and you can change the password as if you were working directly on the computer. If you run KDE on the computer you use to connect to the box, I'd recommend you allow it to store the password in the KDE Wallet Manager (Kwalletmanager) the next time you connect. Then you won't have to worry about forgetting it again... unless you go and forget the wallet password. Back to the list ****** Fedora repositories: what does Livna do? Q:: I am about to give Fedora a try as it looks extremely interesting. However, I am at a loss as to what repositories exist for it. I know about the Fedora Extras repository and the Livna one, but I've read multiple sources on the web saying that others are available. Some people have said that you should avoid mixing these other repositories with Livna and Extras. Could you tell me which repositories are the most commonly used and contain the most packages available for FC 4, the 386 version? A:: You have already mentioned the Livna repository at http://rpm.livna.org. Another one worth using is FreshRPMs at http://freshrpms.net. While most repositories contain compatible packages, it is true that there are some clashes between them, caused by different methods of packaging the same software. The safest approach is to install a tested Yum configuration that contains only repositories known to work together. You can get one such from the Unofficial Fedora FAQ at www.fedorafaq.org/#yumconf. Follow the instructions on this page and Yum will be configured with several compatible repositories, including Dag, Dries and redhat-kde in addition to the repositories you've already mentioned. This should give you the widest possible choice of software while avoiding any clashes. 
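Once that tested configuration is installed, pulling software from any of the repositories is a single command. For example (the package name here is purely illustrative):

```shell
# Yum resolves dependencies across all configured repositories
yum check-update          # refresh the package lists
yum install mplayer       # installed from whichever repository provides it
```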
Back to the list ****** Incorporate Mcrypt support in PHP without recompiling each update Q:: I administer a Red Hat Enterprise Linux 4 server that is used to host mail and web for ten production domains. I use all stock RPMs to avoid complications with updates coming in through our systems management platform, Red Hat Network (RHN). Recently I've been asked to recompile PHP with Mcrypt support, but doing so would mean recompiling PHP every time Red Hat releases updates on these RPMs. Is there any way to incorporate Mcrypt support in PHP without having to constantly rebuild my own RPMs or add 'php*' to up2date's pkgSkipList? A:: You're in luck! I recently came across a project called PHPRPMs (http://phprpms.sourceforge.net), which provides PHP RPMs for little-used or non-GPL extensions. The project's RPMs are currently available for Fedora, RHEL 3 and RHEL 4 (i386 and x86_64). Once you've downloaded the appropriate php-mcrypt RPM for RHEL 4, simply install the package using the rpm command. A restart of the httpd service would normally be required, but the installation of the php-mcrypt RPM will do this for you. If you don't have libmcrypt installed (libmcrypt is required to use php-mcrypt), you can download the latest RPM for RHEL 4 from http://dag.wieers.com/packages/libmcrypt. This way, instead of having to forgo updates via RHN or having to rebuild PHP RPMs when new updates are released by Red Hat, you can simply check these two sites every once in a while for libmcrypt and php-mcrypt RPM updates. Back to the list ****** How to install Autopackage Q:: I have been wanting to install Autopackage 1.0, but don't have a clue how to do so. I have it in my home folder and the command tells me it is a directory, but what next? Do you treat this like a tar file or is there some other wizardry I have to do? I would so like to use Autopackage as it seems to be an answer to my prayers for easy installs. I can install RPMs OK but can never get a tar to work. 
My distro is Mandrake 10.1 PowerPack. A:: Autopackage is designed to be so easy to use that you don't even need to install it. As soon as you try to install something from a .package file (an 'Autopackage'), it will download the latest files it needs before installing, first asking your permission. You don't even need to copy the Autopackage to your hard disk. To see how it works, open a terminal and type su - to become root, then --- bash /mnt/cdrom/Magazine/HotPicks/Autopackage/autopackage-qt-1.0.x86.package ,,, It will ask whether you want it to download Autopackage support files. Answer 'Y' and it will do it this one time, installing the support files for future use. Then it will proceed to install the package. Installing from tarballs can require a little patience the first time, but it gets easier. Unpack the tarball with one of these lines, depending on the type of tarball (gzip or bzip2): --- tar xzf somepackage.tar.gz tar xjf somepackage.tar.bz2 ,,, You may find files named README and INSTALL in the directory containing the unpacked files. These will normally explain how to go about installing the software. The most common stumbling block when installing from source on an RPM-based distro like Mandrake is that the ./configure stage throws up errors about libraries not being installed, when your package manager clearly shows that they are there. This is because RPM packages are normally split into two files: a standard RPM containing the program or library, and a '-devel' RPM containing the library header files. These are not needed to use the program, but you will need them if you want to compile new software against it. So if configure complains about missing libfoo, check that libfoo-devel is also installed. Back to the list ****** Whether to use sudo or just su Q:: I have been trying to create a script to automate various processes. However, I can't figure out how to run certain parts of the script as root, and other parts as my normal user. 
I don't want to run the entire script as root, just the odd section. I tried just using the su command, and then realised that I was now a totally different user and no longer executing my script. I realise su isn't the best idea, but for testing purposes it's fine. Is there a way to do this? Am I nuts for even thinking about using su in a script? My second idea was to start another shell as root; however, I'm not entirely sure how to do that from a script. A:: The su command starts a new shell process as a different user, so the script running it stops until that shell is closed. Using su in a script is a bad idea, and is often blocked because of the security risks. The safer option is to use sudo. This allows individual commands to be run by specified users, without them needing to know the root password. By default, sudo requires the user to enter their own password, but it is possible to allow some commands to be run without giving a password, which may suit your script. Specify the full path to the commands that you want the user to be able to run in the /etc/sudoers file, and specify 'NOPASSWD' if you do not want the script to stop to prompt for your password. Here is a typical entry that allows one user to mount and unmount filesystems without giving a password: --- fred ALL = NOPASSWD: /bin/mount,/bin/umount ,,, Note the comment at the top of the /etc/sudoers file - it should be edited with the visudo command, not loaded directly into an editor. Run visudo as root and it will load the file into whatever program you have defined in $EDITOR. You can change this at the time you run visudo with, for example --- EDITOR=kate visudo ,,, The reason for doing it this way is that visudo copies /etc/sudoers to a temporary file, loads that into your editor, then checks that your syntax is correct before copying the altered file back. It stops typo-inserting pixies breaking your system, which is considered by most experts to be A Good Thing. 
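Putting this together, a script can run mostly as your normal user and escalate only where needed. A minimal sketch, assuming the sudoers entry above is in place (the paths are illustrative):

```shell
#!/bin/sh
# Everything here runs as the invoking user...
echo "Running as $(id -un)"

# ...except these lines, which sudo runs as root. With the NOPASSWD
# entry above, the script is not interrupted by a password prompt.
sudo /bin/mount /mnt/backup
cp -a "$HOME/important" /mnt/backup/
sudo /bin/umount /mnt/backup
```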
Back to the list ****** IP addresses and DNAT Q:: I've been banging my head against this one for weeks now. Four years ago I managed to get a machine to DNAT and now I can't do it at all! At the most basic level, I'm trying this code: --- Internet external ip on firewall = 10.x.x.5 Machine on inside of firewall = 192.168.1.2 ,,, The firewall can access the http server on the internal machine via port 80 without any problems, so I tried this: --- insmod iptable_nat iptables -F INPUT iptables -F OUTPUT iptables -F FORWARD iptables -P INPUT DROP iptables -P OUTPUT ACCEPT iptables -P FORWARD ACCEPT echo 1 > /proc/sys/net/ip_forward iptables -t nat -A PREROUTING -d 10.x.x.5 -p tcp --dport 80 -j DNAT --to 192.168.1.5:80 ,,, And nothing happens. I've tried many variations of source IP, interfaces and so on, but none of them seem to work. Can you tell me how to get things working? A:: The first stage in any DNAT configuration is to ensure that the IP configuration on the firewall is correct, and in this case, 10.x.x.5 should be bound to the outside interface on the firewall as either an interface or an alias. Opening up ICMP traffic on the firewall and pinging the outside IP from a system will help in ensuring that the IP layer is happy. Of course, because the outside address is in the 10.0.0.0/8 range, it won't be available from the other side of the Internet, in which case the appropriate routable address should be used. Also double-check the forwarding sysctl: the correct path is /proc/sys/net/ipv4/ip_forward, so the echo command in your script, which writes to /proc/sys/net/ip_forward, will never actually enable forwarding. The simplest way to debug any DNAT problem is to run 'tcpdump' on the outside interface of the firewall and review the packets that are dumped from the connections from the outside host. This will ensure that packets are being routed back and forth properly, and if a packet is seen going into the firewall but not back out again, you can work through the firewall configuration. Your information detailed the inside address as 192.168.1.2. However, you were DNATing to 192.168.1.5. 
Hopefully this is just a typo, although it's always a good idea to double-check all of the firewall rules to ensure that the IP addresses are correct. Back to the list ****** How to manage logs Q:: I have a few scripts that I run, and I want to generate debug logs that I can occasionally turn on and off. Do you have any suggestions for me? A:: Have you considered using Syslog to generate and manage your logs for you? PHP, C and Perl all contain a library for sending Syslog messages to the configured log host. From there, you can configure Syslog to log specific messages to a separate file and update Logrotate to rotate them for you. Firstly, add a new selector (left-hand side) and destination log file (right-hand side) in /etc/syslog.conf and restart Syslog. The selector is made up of two items: the facility and the severity. The facility should be one of local0-local7 that is not already in use. In this case, I have chosen local3 and want to log all messages. --- # newprog local3.* /var/log/newprog.log root$ service syslog restart ,,, If I wanted to log only error (err) messages and worse, syslog.conf would instead contain --- # newprog local3.err /var/log/newprog.log ,,, Now confirm that Syslog is working by running the following and checking the output of the log file: --- $ logger -p local3.err test message $ tail -f /var/log/newprog.log Jan 31 11:46:40 host user: test message ,,, Once you have confirmed Syslog is working, you can configure Logrotate to rotate your new log file using the previously defined Logrotate rules by updating the configuration file to include the new log file. Under Red Hat Enterprise Linux, you will need to update /etc/logrotate.d/syslog, adding '/var/log/newprog.log' to the first line of the config file. Now all you need to do is call Syslog within your code, using the selector that you previously added to syslog.conf (remember we used local3), providing a severity level. 
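In a shell script, the same switchable logging can be wrapped in a small function. This is a sketch (the DEBUG variable and the debug function are my own naming), using the local3 facility chosen above:

```shell
#!/bin/sh
# Send a message to the local3 syslog facility, but only when DEBUG is set.
debug() {
    [ -n "$DEBUG" ] || return 0
    # Best-effort syslog delivery; ignore errors if no syslog socket exists
    logger -p local3.debug "myscript: $*" 2>/dev/null || true
    # Also echo to stderr so the message is visible when run by hand
    echo "DEBUG: $*" >&2
}

debug "this only appears when DEBUG is set in the environment"
```

Running the script as DEBUG=1 ./myscript turns the messages on; leaving DEBUG unset silences them without touching the code.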
At a later stage, you can then turn off partial logging by adding a higher severity to the Syslog configuration as described above. More information on Syslog and adding Syslog calls to PHP, C, Perl and Bash can be found in the following man pages: Sys::Syslog (3pm), Unix::Syslog (3pm), logger (1) and syslog (2). I'd also recommend you look at http://php.net/syslog. Back to the list ****** Sync Bluetooth headset with Linux Q:: I have acquired a new mobile phone with Bluetooth. It also has a Bluetooth hands-free headset. Can I sync this with my computer? To make me really happy, can you tell me if it is possible to use the headset with Skype, or any other internet phone software? And if so, how? A:: The first thing you will need is a Bluetooth adaptor, unless your computer has this built in. These are available for a few pounds from most computer dealers, or eBay of course. You will need to install the bluez package, available for most distros, to provide Bluetooth drivers and tools. If you are running the KDE desktop run kbluetoothd, which provides an icon in the System Tray that shows when Bluetooth devices are connected. Clicking this icon gives you access to these devices. There are a few programs that will sync with a mobile phone to back up/restore your contacts. If it is a Nokia phone, you are probably best served with Gnokii, also available in most distros or from www.gnokii.org. For a more brand-independent approach, you could try KMobileTools from http://kmobiletools.berlios.de. While this may be available as a package for your distro, the most recent packaged version is quite out of date. To get features such as backing up and restoring phonebooks, you need to build the latest version from the project's Subversion repository. Don't worry if you haven't done this before - it is a simple procedure and the KMobileTools website has a step-by-step HOWTO. 
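Building from Subversion generally follows the standard KDE 3-era source pattern. The checkout URL below is a placeholder - take the real one from the project's HOWTO:

```shell
# Placeholder URL - copy the real one from the KMobileTools website
svn checkout svn://svn.berlios.de/kmobiletools/trunk kmobiletools
cd kmobiletools
make -f Makefile.cvs     # generates the configure script (KDE 3 convention)
./configure && make
su -c 'make install'
```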
Using your Bluetooth headset with Skype is also possible, but once again the software you need is unlikely to be included in your distro. The Bluetooth-alsa project - http://bluetooth-alsa.sourceforge.net - provides a way to use a Bluetooth headset as an ALSA device. That is, it appears to the system as a soundcard. You can then tell Skype, or any other program, to use this 'soundcard'. You can even listen to your MP3 collection via your Bluetooth headset, but don't expect much in the way of quality. Download and install the software as described on the project's website. Then put your headset into pairing mode and run the following commands to connect the headset: --- modprobe -v snd_bt_sco esdctl stop hcitool scan btsco [address] ,,, The address for the final command is the address printed by hcitool scan. It will be something like 00:13:EF:00:09:44. Now you can test the headset with --- ls -l /proc/asound aplay -D plughw:Headset somesound.wav ,,, The first command should show an entry for Headset, and the second will play the specified file through it. Once everything is working, you can automate most of this. Add the module to your distro's modules configuration file (usually /etc/modprobe.conf or /etc/modules.conf). Find the line that starts 'alias snd-card-0' and add this after it: --- alias snd-card-1 snd_bt_sco ,,, The esdctl and btsco commands can be added to a short script you run whenever you want to pair your headset, like so: --- #!/bin/sh esdctl stop btsco 00:13:EF:00:09:44 ,,, Though it should use the address of your headset, not mine! Back to the list ****** Secure my web and FTP server Q:: I am responsible for a web and FTP server running Red Hat Enterprise Linux 4. I have been administering the server and recently configured Logwatch to send me reports. I found entries in the reports that worry me, mainly authentication failures and invalid users. I get these entries every single day, but the IP address and number of attempts change each time. 
It seems to me that these are attempts to log in to my system using different combinations of usernames and passwords. Is there any way to stop these annoying attacks on my server? What actions would you recommend to secure my server? A:: You are right! The entries you see in Logwatch are automated break-in attempts that try to find a valid username/password on your server in order to gain local access. There are numerous security configurations that you can use to harden your sshd server. The configuration file of the OpenSSH server is /etc/ssh/sshd_config. Let's take a look at it. While you could set the AllowUsers parameter to allow only a limited number of users to log in, this is hard to manage when you have lots of users on your system, as is the case with a typical FTP server. Attackers can still try to guess the password for any users that are allowed to log in, but if using this option on your server is feasible, then I recommend you do use it. Also, you can disable root logins via ssh by using the option: --- PermitRootLogin no ,,, Use strong passwords, and to prevent passwords being guessed I'd recommend not using password authentication at all. You can generate private/public keys for your system users using ssh-keygen; manage keys with ssh-agent/ssh-add and disable password authentication. There are other configurations; you can, for example, reduce the number of connections your sshd server gets by changing the default port. Most automated attacks will only check port 22, so changing to a different port will decrease the number of hits you get on the Logwatch report - try Port 222 in the config file. You should only allow version 2 of the SSH protocol: version 1 has known vulnerabilities and should not be used. And make sure that no one can log in using an empty password, by adding 'PermitEmptyPasswords no' to the file. After you've saved changes to the sshd configuration file, the sshd server needs to be restarted for the settings to take effect. 
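Collected together, the relevant part of /etc/ssh/sshd_config might look like this (all genuine OpenSSH options; adjust the port and user list to taste):

```shell
# /etc/ssh/sshd_config - the hardening options discussed above
Protocol 2                  # version 2 only; version 1 is vulnerable
Port 222                    # non-standard port to dodge automated scans
PermitRootLogin no          # no direct root logins
PermitEmptyPasswords no     # never accept an empty password
PasswordAuthentication no   # keys only, once users have generated them
#AllowUsers fred barney     # optionally restrict who may log in at all
```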
These suggestions will thwart most attacks. However, they are static rules that do not adapt to the changing nature of the attack. Also, there might be specific reasons that prevent you from using public key cryptography. There are numerous open source solutions (Sshdfilter, Blockhosts and so on) for this problem, using different tools to do the job: PortSentry, Iptables, Tcpwrappers etc. I will focus on DenyHosts (http://denyhosts.sourceforge.net) since it uses Tcpwrappers, which is available in most Unix systems. In order to use Tcpwrappers, sshd needs to be compiled with libwrap support. Almost all sshd servers deployed are compiled this way but you can verify your specific version using a command like --- # ldd /usr/sbin/sshd | grep libwrap libwrap.so.0 => /usr/lib/libwrap.so.0 (0x00140000) ,,, DenyHosts is written in Python, so you will need the Python interpreter installed on your system. In Red Hat you can accomplish this by running --- # up2date -i python ,,, Now you should download the actual DenyHosts package. Since you are running RHEL 4, you can install the RPM version with --- rpm -ivh http://kent.dl.sourceforge.net/sourceforge/denyhosts/DenyHosts-1.1.3-python2.3.noarch.rpm ,,, DenyHosts comes with a simple configuration file (/usr/share/denyhosts/denyhosts.cfg-dist), which you can use as a template for your system. By default, it is properly configured for Red Hat-based systems. The relevant options you may want to edit are: --- PURGE_DENY = 1w ,,, Specifies how old blocked entries need to be when DenyHosts is invoked with the --purge flag. --- DENY_THRESHOLD_INVALID = 5 ,,, Number of failed login attempts for invalid users to trigger blocked connections. 
--- DENY_THRESHOLD_VALID = 10 ,,, Number of failed login attempts for valid users to trigger blocked connections. If you want to be notified by email of new blocked hosts, you can specify your address in the 'ADMIN_EMAIL = webmaster@example.com' line. The configuration file is extremely well documented and is not hard to interpret. Once you have decided on your particular options, copy this file to /etc/denyhosts.cfg. Now we need to edit the init script (/usr/share/denyhosts/daemon-control-dist) to reflect our system settings. In this case we just need to indicate which configuration file to use: --- DENYHOSTS_CFG = "/etc/denyhosts.cfg" ,,, And install the init script with --- # cp /usr/share/denyhosts/daemon-control-dist /etc/init.d/denyhost # chmod +x /etc/init.d/denyhost # chkconfig denyhost on ,,, Now it is just a matter of running the DenyHosts daemon with --- # /etc/init.d/denyhost start & ,,, If you have any break-in attempts on your current /var/log/secure log file, DenyHosts will populate /etc/hosts.deny accordingly, blocking out the offending IP addresses! --- # tail -n4 /etc/hosts.deny sshd: 66.34.205.1 sshd: 64.34.193.58 sshd: 220.117.241.3 sshd: 218.85.119.83 ,,, A notification email will also be sent to the ADMIN_EMAIL address that you specified above. Server security, and sshd security in particular, is widely dealt with online and you may want to do a web search on "defending against brute force ssh attacks" for extra information. Back to the list ****** Xfishtank not appearing in KDE Q:: I know it's rather sad but I should like to be able to set up Xfishtank to act as a background. I use Free Mandriva 2006 with KDE and have installed the Xfishtank software from an RPM. However, the best I have achieved is a fleeting glimpse of the fish tank when I switch the computer off. All I get otherwise is my usual KDE background or screensaver. How do I get to see the little fishies? 
A:: KDE runs its own root window on top of the normal X root window (the desktop background), so programs that normally display their output on the root window are hidden. The glimpse you see is the brief interval between KDE shutting down and X quitting, when the X root window is visible. Fortunately, there is an extremely easy solution. Right-click on the desktop and select Configure Desktop from the menu that appears. Go into the Behaviour section and enable Allow Programs In Desktop Window. This assumes that you have not changed the default action for right-clicking on the desktop. You can also change this setting from the Desktop > Behaviour section of the KDE Control Centre. Back to the list ****** Getting a graphical desktop on Debian Q:: I have this day installed Debian 3.1. After some mucking about I finally got the first disc installed, and all I would like now is to be able to open the system up so that I can continue. At logon I put in my username, press Enter, then password, Enter... and all that happens is that a new line with 'david@debian~$:' appears. What have I done wrong or forgotten to do, and what do I do next? What's the magic word? Obviously I am new to this system and hope I have done the right thing by bothering to install it. A:: You have only installed the basic package set, which does not include a graphical desktop. During the second stage of the installation, after the reboot, you are asked to choose from a selection of software collections. The first in this list is called Desktop Environment. If you're following the installation procedure it will look like this is pre-selected, because the cursor is in the selection box to the left of the name, but it is not - package groups are only installed when there is a star in the box. You need to explicitly select the groups you want by moving the highlight bar over them and pressing Space. 
If you simply press Enter at this stage without selecting anything, you will end up in exactly the frustrating situation that you describe. However, there is no need to panic or reinstall. Log in as root - using the root password you gave during installation - and type aptitude to load the Debian package manager. Highlight Tasks and press Enter; move down to End-user and press Enter; then highlight Desktop Environment and press '+' to select it. You can press 'g' to see what will be installed and 'g' again to begin installation. This will install both the KDE and Gnome graphical desktops; you will be able to choose which you use the first time you log in. There are a few basic configuration questions to answer (the defaults are fine if you are unsure) then you will also be asked some questions to help configure the graphical display. These questions are the same as you would have been asked during installation, had you selected the desktop option. Once the installation (which will take several minutes) has finished, your desktop should load the next time you boot up. Back to the list ****** Making new hard drive bootable in GRUB Q:: I've been trying to duplicate a Linux system drive - it's a SCSI drive, if that makes any difference. I have a new drive, which I've partitioned in broadly the same way with swap and root partitions; the root is ext3. Another drive, which I'm not concerned with, is mounted in /home. I booted Knoppix, mounted both drives and successfully copied everything over using dump piped into restore. Everything compares nicely and I have a reliable duplicate. The last bit is to make the new drive bootable, and this is where I'm getting stuck. The old drive is /dev/sda (sda1:swap, sda2:ext3) : grub: hd0. The new one is /dev/sdb (sdb1:swap, sdb2:ext3) : grub: hd1. When I then remove the old drive and try booting, I get 'GRUB boot disk error' or something similar. 
I have tried a variety of things - too many to list - but it seems to me that the problem could be linked to the fact that I install Grub on an address that it isn't booting from. Thus I'm booting from /dev/sda and installing Grub on hd1 and then removing /dev/sda, so hd1 becomes hd0 and Grub can't find it. Is this possible? If so, how do I get around it? A:: Grub numbers the drives in the order in which they are discovered by the BIOS. When you remove the first drive, the remaining drives move up a number, so hd1 becomes hd0, as you suspected. There are two ways you can deal with this. The easiest involves using a Live CD that includes Grub, such as Knoppix or a Gentoo minimal installation CD (which is a much smaller download than the standard Knoppix install). After removing the old hard disk, boot from the Live CD. If you're using Knoppix, type knoppix 2 at the boot prompt to take you straight to a root console. Then install Grub on your hard disk, as you did before, with --- grub-install /dev/sda ,,, If grub-install gives problems, you can install manually from the Grub shell with --- grub root (hd0,1) setup (hd0) quit ,,, Note that with your layout the root partition is the second one on the disk, hence (hd0,1). I take it you have already edited /etc/fstab and /boot/grub/menu.lst to use /dev/sda instead of /dev/sdb. The other way of doing it is to use the feature available in some BIOSes to swap the boot order of the hard disks, so that your new drive is discovered first. As the new disk doesn't have Grub installed yet, the computer would still boot from the second (old) disk, but when you run grub to install it to the new hd0, this disk would be booted from next time onwards. You could then swap out the old disk and set the BIOS back to its previous boot order. This method is more fiddly than the first and is not guaranteed to work on all hardware, so only try this if you cannot use a Live CD for any reason. 
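For reference, the edits to /etc/fstab and /boot/grub/menu.lst amount to swapping the device names. With your partition layout it would look something like this (illustrative - your mount options and kernel version will differ):

```shell
# /etc/fstab - once the old disk is removed, the new drive becomes sda
/dev/sda1   swap   swap   defaults   0 0
/dev/sda2   /      ext3   defaults   1 1

# /boot/grub/menu.lst - the kernel line should name the new root, e.g.
# kernel /boot/vmlinuz root=/dev/sda2 ro
```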
Back to the list ****** Generating an SSL certificate for Apache Q:: Can you point me in the right direction for generating an SSL certificate and applying it to an Apache web server on a Red Hat Enterprise Linux 4 server and a Fedora server? A:: Configuring secure connections on the Apache web server on RHEL4 and FC4 is one of the most useful things you can learn to do with your Apache server. The majority of commercial public websites should be using a certificate that has been signed with a trusted key from a recognised certificate authority to indicate a higher level of trust than is required for internal company or personal websites. You can create such a key with OpenSSL (www.openssl.org), which I'll assume you have installed as it's a standard component. First, create a private key. You could secure it with a pass phrase, but unless you are particularly security-conscious I would recommend removing it, as otherwise your web server will stall at every restart until the pass phrase is entered manually. You'll use the Makefile supplied with the openssl package to create the certificate, working as the root user. Before you overwrite any current certificate, move it out of the way with --- mv /etc/httpd/conf/ssl.*/server* /root/ ,,, Next cd /etc/pki/tls/certs (FC4) or cd /usr/share/ssl/certs/ (RHEL4) and run make testcert. This will ask you for a pass phrase, which we will remove later. Fill out the other information it asks for. The most important bit is 'Common Name []', where you should put the domain name that you want the secure site to run off. Generating the key should put the files in the correct place. You should then make sure the default Apache mod_ssl configuration file (/etc/httpd/conf.d/ssl.conf) has the correct information - the two parameters SSLCertificateFile and SSLCertificateKeyFile, the certificate and key file respectively, should correctly reflect the location. 
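For the curious, make testcert is essentially a wrapper around openssl. A rough equivalent is sketched below, with illustrative paths and domain name; note that the -nodes option skips the pass phrase that make testcert itself would prompt for:

```shell
# Generate a 1024-bit key and a one-year self-signed certificate in
# one step. -nodes leaves the key unencrypted (no pass phrase) and
# -subj fills in the Common Name non-interactively.
openssl req -new -x509 -nodes -newkey rsa:1024 -days 365 \
  -subj '/CN=www.example.com' \
  -keyout /tmp/server.key -out /tmp/server.crt
```

A self-signed certificate like this is fine for testing, but browsers will warn about it; for a public site you would instead send a CSR to a certificate authority.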
Now remove the pass phrase if you want the site to restart without manual intervention and make sure that Apache starts when the machine does with the chkconfig command. Do --- cd /etc/httpd/conf/ssl.key/ openssl rsa -in server.key -out server.nopassphrase.key mv server.key server.key.orig mv server.nopassphrase.key server.key chkconfig httpd on ,,, This is how you configure Apache on RHEL4 and FC4 to serve HTTPS requests from the default DocumentRoot. Bear in mind that due to the way TLS/SSL works you need one IP address per TLS/SSL site. Back to the list ****** Best VPN solution for Linux Q:: Let me first say that I am a Windows administrator who can 'do' Linux, and at this point I am extremely sick of the cost and maintenance associated with Windows. I'm looking for a solution to replace our Windows virtual private network [VPN] and want to go the Linux route. I was hoping to use an open source SSL VPN that can be run over a browser, but am having trouble finding one. Can you enlighten me and tell me what is hot now in the Linux VPN market? I know Freeswan is popular, but that is IPSec. OpenVPN seems to be another high-ranking product; it's SSL but won't run over a browser. I have looked at commercial products too (namely SmoothWall), but I wanted to do this myself (and I am a cheapskate). My co-worker, who is a 20-year veteran of the Unix world, wants to use SSH for the VPN, but I have heard that the overhead is too high and performance suffers. A:: This does seem to be a field that is dominated by large commercial applications, which is not that surprising considering that they are aimed at enterprise users. But one open source project stands out: SSL-Explorer. This appears to offer what you need - SSL VPN accessible from any standard web browser. SSL-Explorer is available from http://sourceforge.net/projects/sslexplorer. 
While the free version may appeal to your cheapskate tendencies, if you are using it to provide access to a commercial network, you should consider the security and financial implications of incorrect installation or configuration. If you have any doubts about your experience in this area, it may be prudent to consider SSL-Explorer Xtra ($490 for one to five users). This provides some extra software and, most importantly, commercial support. As is so often the case with open source, the choice is yours. Any form of encrypted communication is going to impinge on performance. This affects both SSH and SSL, and you need to ensure that your server is capable of handling the expected loads. One advantage of using SSL is that the use of certificates ensures that you are connecting to the correct server, which safeguards against anyone redirecting traffic to another server to harvest passwords and other data. Back to the list ****** Restoring data in RAID setup when a disk dies Q:: I have recently set up a file server using SUSE Enterprise Server 9. There are three hard disks in the system: an 80GB disk and two 120GB disks. The 80GB disk contains the OS. The two 120GB disks are formatted as two RAID 1 partitions, primarily to store user data. The RAID is software implemented via SUSE, not hardware RAID via a controller. The filesystem is ReiserFS. Everything is working fine and hopefully will for a long time. However, at some stage, one of these mirrored disks may fail and will have to be replaced. What are the processes involved in replacing a crashed mirror and restoring the data from the other drive? Are there any methods or utilities to determine the health of a RAID system? It seems to me that there is much discussion regarding the merits of RAID and implementing it but nothing, or very little, on maintenance or recovery. A:: Here's a quick overview. There are several ways of examining the status of an array. 
The following code, --- cat /proc/mdstat mdadm --detail /dev/md* ,,, gives a quick overview of the status of any RAID array. The mdadm program also has a daemon mode that will run in the background. You'll need to edit /etc/mdadm.conf and test it on the command line first, then set mdadmd to start at boot in Yast > System > System Services. It will send you an email if it detects any problems. With RAID 1, if a disk fails the array carries on working using just the good disk. To replace the broken disk, first remove it from the RAID with --- mdadm /dev/mdX --fail /dev/hdYn --remove /dev/hdYn ,,, where mdX and hdYn are the array and partition device nodes respectively. Then you can power down, replace the disk with a new one, reboot, create the necessary partitions on the disk as you did when setting up the array in the first place, and add it to the array with --- mdadm /dev/mdX --add /dev/hdYn ,,, The array will be rebuilt automatically. There will be a slight reduction in performance while the rebuild takes place. Either of the two commands given for examining an array can be used to tell when the rebuilding is complete. You can use the raidtools package instead of mdadm for these tasks, but mdadm is my preferred choice - it is newer and more consistent to use. You may also consider running smartmontools to monitor the disks themselves. Back to the list ****** How to run Argonium Q:: I've been trying to get the Argonium game working on Ubuntu 5.10 but for some reason it won't. I've extracted it and gone into the directory but when I run ./argonium it gives the following errors among its output: --- couldn't exec config.cfg /dev/dsp: Broken pipe LoadLibrary("./refresh.so") ref_gl version: GL 1.0 ./libGL.so: cannot open shared object file: No such file or directory Segmentation fault ,,, Now, being new to Linux I have little to no idea what's going on. I looked for config.cfg but I couldn't find it. I don't know if this is the problem but please, please help! 
A:: There are a number of errors and warnings here, not all of them critical. The first, about config.cfg, has no effect. It just means no config file has been found, because you haven't run the game and changed the settings yet. When you do, this file will be created in .argonium/data in your home directory. The next one, about /dev/dsp, is a little more important. The warning means you won't have any sound, as /dev/dsp is the sound device for OSS, the old sound system still used by some programs. ALSA, the current sound system, can emulate OSS. For this to work, the relevant module has to be loaded with --- sudo modprobe snd_pcm_oss ,,, To do this automatically when you boot, add the module name to /etc/modules (a plain sudo won't apply to the redirection, so run the whole command through a root shell) with --- sudo sh -c 'echo snd_pcm_oss >> /etc/modules' ,,, The next error is more significant. Argonium is trying to load libGL.so from the current directory, when it is actually in /usr/lib. A symbolic link will fix this - see the Quick Reference box on page 95 for more information. --- ln -s /usr/lib/libGL.so.1 libGL.so ,,, This should get you past all the errors and warnings you have seen. Note, however, that you will need a graphics card with 3D acceleration and suitable drivers, such as an Nvidia card with the drivers from www.nvidia.com/object/unix.html. Back to the list ****** Setting up SSL with Apache Q:: I've been using Apache on my web server for some time. I must admit I found it quite difficult to configure from the command line but I eventually got it done, thanks to the help of a lot of kind-hearted Linux folk on the Internet. I now need to add a secure area because our developers have made a members-only section. They want this to be SSL encrypted and I need to get an SSL certificate. I'm not sure how to proceed from here though. I've had a look on Google and I can't find a guide that's on a basic enough level for me. Everything I want to do should be standard - I don't need to know about all the options and that's where I think I'm getting confused. 
Thanks in advance. A:: Setting up an SSL-enabled website isn't nearly as complex as it seems at first. This can be divided into two tasks: getting the SSL certificate and configuring Apache. To set up the SSL certificate, you first need to generate a private key. Once generated, make sure you keep this key in a safe place because you'll need it if you ever need to regenerate your certificate or move your site to another server. --- # cd /etc/httpd/conf # /usr/bin/openssl genrsa 1024 > ssl.key/mydomain-com.key ,,, With this key you can generate a Certificate Signing Request (CSR). This needs to be sent to an SSL certificate provider (Thawte, Verisign and so on). The following command will generate the CSR: --- # /usr/bin/openssl req -new -key ssl.key/mydomain-com.key > ssl.csr/mydomain-com.csr ,,, Enter your details as appropriate, taking special care to enter your domain name exactly as it will appear in your URL for the 'Common Name' - in other words, secure.mydomain.com or www.mydomain.com. Also, be sure to leave the 'Challenge password' blank. If you enter a password here, you'll need to enter this each time Apache starts up. You can now head over to Verisign/Thawte and purchase a certificate. Be sure to enter the details you give them exactly as you entered them for the CSR you just generated. It will take them some time to verify your company and get back to you with your actual certificate. When you receive your certificate, save it to your server under /etc/httpd/conf/ssl.crt/mydomain-com.crt. Lastly, we need to tell Apache that this certificate exists and how to use it. Every certificate will require a dedicated IP address to listen on. Make sure that Apache is configured to use this IP address and is listening on port 443, then add a new Virtual Host block for your secure site. 
Simply copy the details from the non-secure block, change the IP and port, and add the following lines: --- SSLEngine On SSLCertificateFile /etc/httpd/conf/ssl.crt/mydomain-com.crt SSLCertificateKeyFile /etc/httpd/conf/ssl.key/mydomain-com.key ,,, At this stage, restarting Apache should bring your SSL site up. Verify this at https://mydomain.com by looking for the secure padlock icon in your browser. Back to the list ****** Cannot get Motorola SB4200 modem working in Linux Q:: I managed to install the KDE desktop environment version of SUSE 10.0 but I cannot get the Motorola SB4200 modem supplied by Blueyonder, my ISP, to connect to the internet via the USB port. I know the forums all suggest using the NIC connection, but would you be partial to any information that would allow me to get a working connection between SUSE Linux and my modem? A:: There are good reasons why so many people recommend using the Ethernet connection rather than USB. The main three are: Ethernet is faster; Ethernet is trivial to set up; Ethernet is faster. Yes, there is a huge speed difference between the two. I haven't tried it with this modem, but on my ADSL line the superior performance of Ethernet over USB modems is striking, particularly in terms of responsiveness. This is hardly surprising, as it's just what Ethernet was designed for, whereas USB is a universal system originally designed for low-speed devices. If your PC does not have an Ethernet port already, a PCI card can be bought for less than £5 and SUSE will take care of its configuration. You may also need to register the MAC address (a unique hardware identifier) of your network card with Blueyonder; the company uses this to validate your login. To find this, start the Yast Control Centre and go to Hardware > Hardware Info. Click on your network card and then Resources > Hwaddr. Call Blueyonder's support team and give them this number. To use the USB connection, you need the CDCEther driver. 
This is compiled into the standard SUSE 10.0 kernel, so the modem should 'just work'. Does SUSE detect the modem when you connect it? If so, but there is no network interface for it, you will need to set this up from the Network Device section of Yast. The type should be USB and you should select Automatic Address Setup. Back to the list ****** Slackware installation fails on Advent 7081 CELM350 laptop Q:: Since I first heard of it, I've been very keen on open source in principle, but I felt that it required too much technical wizardry for me to benefit from. Computers are expensive and I didn't want to take a wrong turn and end up ruining one. As I have access to a PC emulator on a Mac G4, I thought I had nothing to lose if I did a bodged job, so I went for it, installed Slackware, and it worked like a dream. I got a bit lost when the machine asked me for 'darkstar login' so I went to one of the forums and found the members welcoming and very helpful. So far so good. Emboldened by this I then took the big step of trying to install it on my wife's Advent notebook PC (model 7081 CELM350) as a dual boot, in the hope of weaning us both off Windows altogether. Panic ensued as the install failed to take. I think it has something to do with some drivers or files that the model PC needs and were not included in the default installation. I don't know how to get around this. To cap it all, when I go to boot up now, I get an 'OS missing' message. The impact of this on my marriage is not positive. Are you familiar with this type of situation? What can I do to get the Slackware up and running? I'm so impressed with the little I have seen of the open source distros and community that I'm really reluctant to abandon the project. A:: It is unusual for a failed installation to leave the computer unusable. The bootloader is normally set up at the end of the installation, and until this happens rebooting will take you straight back into Windows. 
The most likely explanation is that something failed during the bootloader setup. To fix the bootloader so you at least have Windows back, boot from your Windows CD and select the rescue option. Type fixmbr at the prompt and all should be well. This is for Windows XP; for 98 the command is fdisk /mbr. Installing Linux on laptops is notoriously tricky because of the amount of custom hardware they use. Emulators, on the other hand, tend to emulate bog-standard hardware. The safest option is to use a distro that has a Live CD. You can run the distro from the CD before installing anything, which gives you a chance to check that your hardware is supported. Suitable distros include PCLinuxOS (www.pclinuxos.com), SimplyMepis (www.mepis.org), Kubuntu (www.kubuntu.org) and, of course, Knoppix (www.knoppix.com). All of these Live CDs allow you to run the full distro from CD/DVD before committing to an installation. You could then revisit Slackware, knowing what hardware you have and what drivers you need. Back to the list ****** Best Linux distro with 64-bit support Q:: I am interested in getting away from Windows and running Linux. I need to be able to design websites, edit photos, use MS Office, email, use the internet and play flight simulators. I know there are some Office replacement options and things like VMware for running Windows programs. I am OK with that because I could run some Windows programs and save them to a storage drive then take them into Linux. I am not a very literate programmer, and am looking for something that's easy to install and use. My specs are: Motherboard Asus K8V SE Deluxe, CPU AMD 64-bit 3000, RAM 1GB, and Graphics card BFG 6600 OC GeForce 128MB. I would like to use an OS that will allow me to use my 64-bit processor and achieve my operational needs without running Windows, with the exception of maybe using VMware. What OS would you suggest that I try? The last time I tried Linux there was no USB support. Does VMware support gaming? 
I am fed up with Windows, but I so want to be able to view all websites and use all of my hardware. A:: There are several distros available in full 64-bit versions that are well suited to an inexperienced user. In no particular order, Mandriva (www.mandriva.com), SUSE (www.suse.com) and Ubuntu (www.ubuntu.com) are all well worth considering. These systems all have Live CD variants. Live CDs are distros that boot and run from a CD or DVD, requiring no installation. They run a little slower and can't be customised, but provide an excellent way to evaluate a distro before installing it (see page 34 for more). If the only Windows software you want to run is MS Office, you don't need a full-blown (and expensive) virtual machine like VMware. CrossOver Office (from www.codeweavers.com) allows you to run MS Office, and many other Windows programs, on Linux. An even better solution for most people is to use OpenOffice.org instead. This comes with all major distros and is as good as Office in many areas, better in some. Everything else you mention is more than adequately covered by Linux software, much of which will be included in the above distros. However, gaming in VMware is generally not that good - in fact gaming is one of the main reasons why people keep Windows on their hard disks. Back to the list ****** Vsftp: connections from behind a firewall hang Q:: I have Red Hat Enterprise Linux ES 4 running on my server. It uses Vsftp as an FTP service. FTP seems to work OK, but I have increasingly noticed that when I attempt to make a connection from a remote location that uses ADSL or I am behind a firewall, the connection occurs as I get prompted for a username and password, but I am unable to list directory content or upload files. There is no obvious error - it just hangs, whether I use an FTP client or a command line. 
I am using Iptables for firewall protection, which I have only recently enabled, and I think this might be related, because when I turn Iptables off the FTP works fine. A:: The problem here is with the Iptables modules running on the server. You will need to enable two 'nat helper' modules for the Iptables. They are called ip_nat_ftp and ip_conntrack_ftp. Load them by typing --- modprobe ip_nat_ftp modprobe ip_conntrack_ftp ,,, Now lsmod will reveal:
---
Module Size Used by
ip_nat_ftp 4913 0
iptable_nat 23037 1 ip_nat_ftp
ip_conntrack_ftp 72689 1 ip_nat_ftp
ipt_LOG 6465 1
ipt_state 1857 1
ip_conntrack 40565 4 ip_nat_ftp,iptable_nat,ip_conntrack_ftp,ipt_state
iptable_filter 2753 1
ip_tables 16705 4 iptable_nat,ipt_LOG,ipt_state,iptable_filter
,,,
Note that modprobe alone will not keep these modules loaded: if Iptables is restarted for any reason the modules will not be reloaded, and you would have to run modprobe again. To avoid this, you can edit /etc/sysconfig/iptables-config and add the following entry: --- IPTABLES_MODULES="ip_nat_ftp ip_conntrack_ftp" ,,, Now when you restart Iptables you will see the following: --- 'Loading additional iptables modules: ip_nat_ftp ip_conntrac[ OK ]'. ,,, This should resolve any FTP issues through the firewall. Back to the list ****** SUSE Linux not recognising USB drives Q:: I have just installed SUSE Linux 10.0 on to my Toshiba L10 laptop. It went on a treat but it wouldn't recognise my LG USB drive, which is a removable 1GB, so I re-installed SUSE with the stick plugged in and it worked fine. When I took it out again, it disappeared. After SUSE was installed, My Computer showed my hard drive (hda2) and LG 1GB. After I'd restarted it, it showed CD-Recorder, hard disk hda2 and another hard disk, sda1. A:: As this is a laptop, it is reasonable to assume that it only has one hard disk, so the second hard disk you can see (sda) will be your USB stick. 
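A quick way to confirm this for yourself is to ask the kernel which block devices and partitions it currently knows about:

```shell
# Every disk and partition the kernel has registered, with sizes in
# 1K blocks - a USB memory stick typically appears here as sda/sda1.
cat /proc/partitions
```

Plugging the stick in and running this again makes the new sda entries easy to spot.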
IDE hard drives are denoted hda, hdb and so on. Memory sticks, and other USB mass storage devices, are treated as SCSI hard disks and are denoted sda, sdb etc. The number refers to the partition number, so sda1 is the first partition on the first SCSI disk - in this case, the only partition on the memory stick. Look at the contents of this and I'm sure you will find it is your memory stick. Back to the list ****** How to set up VNC on CentOS Q:: I've installed a minimal CentOS 4 installation on a headless PC at home, which I plan to use for email, DNS, web hosting etc. I'm relatively new to Linux and really like the X-based system-config-tools that Red Hat has provided for system administration. I've read about VNC and have installed the vnc-server RPM but cannot get Vncserver to run. Can you please offer some insight to assist me in getting VNC up and running? A:: Installing Vncserver on CentOS/RHEL is straightforward, but it does require a few other packages to operate correctly. Provided you're running a minimal install, the first step is to install vnc-server, xorg-x11, gnome-session and gnome-terminal. The xorg-x11 and gnome-session packages have numerous dependencies, so if you're installing via Yum and you have a slow internet connection, now would be a great time to go grab a coffee. Alternatively, you can avoid installing the Gnome-related packages and use the default TWM window manager. This will need xorg-x11-twm. If you do choose to use TWM, you can leave ~/.vnc/xstartup with the default configuration. If you plan to use Gnome, you can use the following: --- #!/bin/sh vncconfig -iconic & gnome-terminal & gnome-session & ,,, Once that's been saved, ensure that xstartup is executable. Finally, to start the VNC server, use the vncserver command. This will first ask for a password to use when you connect with your VNC client. If you do not specify a display for Vncserver to use, it will default to the first available display number (which is usually :1). 
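The xstartup preparation described above can be scripted in one go. This is a sketch, run as the user who will launch vncserver; the session contents match the Gnome snippet given earlier:

```shell
# Create ~/.vnc, write the Gnome xstartup shown above and mark it
# executable, ready for the first run of vncserver.
mkdir -p "$HOME/.vnc"
cat > "$HOME/.vnc/xstartup" <<'EOF'
#!/bin/sh
vncconfig -iconic &
gnome-terminal &
gnome-session &
EOF
chmod +x "$HOME/.vnc/xstartup"
```

The quoted 'EOF' here-document delimiter stops the shell expanding anything inside the script, so it is written out exactly as shown.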
You should now be able to use your VNC viewer to connect to your IP followed by display number (192.168.1.10:1, for example). To get Vncserver to start at boot, use the chkconfig command to enable the service to start in the default runlevel. Additionally, on CentOS/RHEL there is a Sysconfig file for Vncserver, located in /etc/sysconfig. This file is used to tell Vncserver which user to run under and which display to connect to. Back to the list ****** Can't install VMware Player: 'no vmmon modules suitable' Q:: I tried to install VMware Player on a Dell Inspiron 5150 laptop. However, I got the message 'no vmmon modules suitable'. I then tried to uninstall the program, but the computer seemed to be in a loop and I aborted after 10 minutes or so. When I run the vmware-install.pl program now it gives the message 'Previous installation of VMware software has been detected' and 'Execution aborted'. Any help in clearing the installation and providing a suitable module would be appreciated. A:: You don't tell us which Linux distribution you are using, so some parts of this answer will have to be generic rather than specific. Firstly, VMware Player is now installed, which is why you get the message when you try to install it again. Your problem is that it has not been configured, which is done with the vmware-config.pl program (the installer runs this for you when it has finished copying the files). It was vmware-config.pl that gave the 'no vmmon module' error. VMware Player comes with a wide selection of kernel modules, but it cannot cover every possibility. For example, it has a module for a default SUSE 10.0 installation, but not for Mandriva 2006. If a pre-built module is not present, vmware-config.pl will build one from source, but to do this it needs a C compiler and your kernel source code to be installed. Most distros install a C compiler, but many do not install the kernel source by default. Open your distro's package manager and search for gcc and kernel-source. 
On some distros, such as Ubuntu, the packages are build-essential and linux-source. Whatever the name, make sure the source package you install matches the version of your running kernel. You can get the version of the current kernel with the command --- uname -r ,,, Install whatever is missing and run vmware-config.pl again. This time it should build the modules and everything should be fine from there on. Back to the list ****** Best free web server log file analysis program Q:: We have got a dedicated server and are running multiple Apache virtual hosts. I would like to produce some simple statistics for each of the virtual hosts without having to buy an expensive statistics package that not all of our customers will need. Is there a way to add this without additional cost? A:: You're in luck: there's a free web server log file analysis program called Webalizer (http://webalizer.org), which you can use to generate detailed usage reports in HTML. The first thing to do is set up Apache so that each virtual host creates its own log files: --- CustomLog logs/domain.co.uk-access_log common ,,, The next step is to set up Webalizer to analyse each of the log files and generate individual reports. Create a central directory for your configuration files with mkdir -p /etc/webalizer/vhosts. Copy the /etc/webalizer.conf configuration file to /etc/webalizer/vhosts/ for each virtual host. Give the file the same name as the domain and end it with a .conf extension so you can easily tell what host the configuration is for. (For example, for domain.co.uk the file will be called /etc/webalizer/vhosts/domain.co.uk.conf.) The file will need to be edited - you should have at least the HostName, OutputDir and LogFile configuration directives set to something appropriate. You'll probably also want to specify other settings that are specific to a domain, such as HideReferrer, HideSite and maybe others as well. More information can be found in the man page (man webalizer). 
It should look like this: --- LogFile /var/log/httpd/domain.co.uk-access_log OutputDir /var/www/vhosts/domain.co.uk/usage HostName domain.co.uk ,,, Now, in order to process the logs for all your sites you need a simple script that you can just drop into /etc/cron.daily to be run once a day: --- #!/bin/sh for i in /etc/webalizer/vhosts/*.conf; do /usr/bin/webalizer -c "$i"; done ,,, Once this has been set up, all you need to do to add a new host is create a new configuration file and put it in the central directory. It will automatically be picked up the next time the command is run. Back to the list ****** Firefox and Thunderbird not connecting to the internet in Linux Q:: I am running a dual-boot system with Windows XP and various Linux distros using interchangeable caddies. My box is connected to the internet by means of a Netgear DG632 router, which has proved OK in most situations. My preferred programs for browsing and mail are Firefox & Thunderbird, and these work fine in XP - but in any of my Linux setups they refuse to connect to the internet. The only way that I can connect satisfactorily is by using Konqueror and KMail. I do not use proxies and I have made sure that all my settings are identical. A:: This is caused by Firefox and Thunderbird trying to use IPv6 to connect to the internet, while KDE programs default to the older, more widely supported IPv4 protocol. If your ISP does not use IPv6 and your router does not support it correctly, you'll see exactly the behaviour you describe. There are two possible solutions. The most elegant is to upgrade the firmware of your router. Some Netgear routers certainly benefit from this, correctly handling the fallback to IPv4 after a firmware upgrade. You can get firmware upgrades for Netgear products from www.netgear.co.uk/product_support.php. The alternative is to disable Linux's IPv6 support, so Firefox doesn't even try to communicate with the router in this way. 
You disable IPv6 by adding or editing these two lines in your module configuration file. --- alias net-pf-10 off alias ipv6 off ,,, The name and location of the file varies between distros, and you don't say which you have used, so here are the favourites: /etc/modprobe.conf (Mandriva, Slackware and SUSE); /etc/modprobe.d/aliases (Fedora, Debian and Ubuntu); /etc/modules.d/aliases (Gentoo). Back to the list ****** How to install Sweep from the source code Q:: I want to try Sweep. I'm running Ubuntu (Breezy Badger) and all the help files just talk about the repositories. Having just a CD-ROM drive and no internet, I want to download from magazine coverdiscs, but it isn't obvious what to do! I pulled the files off the CD, unpacked the tarballs into a folder in my home directory, and just ended up with a pile of files and no idea what to do with them! Please make it easy so I can use your cover CDs! A:: Software repositories are the easiest way to install packages on most distros, provided your distro's repository contains the package you want and you have internet access. However, they're not the only way to go - you can also build the package from source code. After all, that's what the repository maintainers do. The files you see after unpacking the Sweep archive are its source code. Compiling most software from source is not particularly difficult and requires no programming skills, just a little care. Open a terminal and move to the directory that contains the source code, then run configure, which checks that your system has everything it needs to compile the source code: --- cd sweep-0.9.1 ./configure ,,, If configure fails, it means that something is missing, which you can identify from the error message. Install it, and try again. With a default Ubuntu installation, the first failure will be that a C compiler is not present, so fire up Synaptic from the System menu, click on Search, type in 'build-essential', select it from the search results and click Apply. 
Then run ./configure again to see what else may be needed. Once ./configure runs without error, run these two commands to compile and install the software: --- make sudo make install ,,, Sometimes, ./configure may give an error about a package not being installed when it is. In this case you need to install the corresponding devel package, which contains information needed when compiling software. Some of the required software can be found in the dependencies directory on the DVD. Unfortunately, it appears that in this case, not everything you need in the way of -devel packages is included on the Ubuntu CD. If you have no internet connection, you would be better off with a distro that comes on several CDs, or a DVD, where there is space to include much more, including many more devel packages. SUSE 10.1 on this month's DVD would be a good choice. Back to the list ****** Apache 2 and PHP Q:: We're currently migrating some websites from a 2.1ES server onto a new 3.0ES server. The main problem seems to be that 3.0 is using Apache 2.0 rather than 1.3. Our websites are all PHP-based and receive substantial amounts of traffic. On the PHP website, there's a page that suggests you shouldn't really be using Apache 2 and PHP in a production environment: http://uk.php.net/manual/en/faq.installation.php#faq.installation.apache2 So, my question is, what are my options? I presume I'm going to have to downgrade to version 1.3, but what are the consequences of doing this with regards to the up2date program? A:: The comments on that page are rather dated, and were meant more for when Apache 2 was still brand new, less stable and had less (and less stable) module support. Also, the MPM model Red Hat uses is the default Prefork MPM, which is an order of magnitude more stable than the powerful but unstable worker MPM module. 
That being said, you can be further reassured knowing that there are thousands of Red Hat Enterprise 3 production web servers running httpd-2.0 with very active PHP sites, usually with dynamic content from a backend MySQL too. It runs perfectly. As you're a Rackspace customer, if you have any specific compatibility needs, contact your support team, who can advise on code compatibility issues between versions, their code migration services, and bleeding-edge options such as PHP 5 and MySQL 4.x. There are pre-built and tested packages that can be customised and installed for customers. If your situation does for some reason demand running Apache 1.3, this can be done, because binaries are available in RPM format or it can be compiled from source. You're quite correct in being concerned about up2date though - you'll need to add Apache to the package ignore list or it will be upgraded back to 2.0 as soon as up2date is run. Back to the list ****** Set up RAID level 1 at the command line Q:: I would like to use LVM and software RAID level 1 on my Red Hat-based server, but due to some administrative issues I'm unable to use the graphical user interface to set it up. Can I do it without going all graphical? A:: You can set up RAID and LVM using the text-mode installation, but it is slightly harder that way. A minimum of two disks is needed for RAID 1. Start up the boot process, and when you get to the disk partitioning screen, switch to the free console screen with Alt+F2. Using fdisk to partition the drives, create a partition of 100MB for a RAID 1 /boot on each of the drives (/boot cannot be a logical volume), and create another partition using the rest of the disk for the other filesystems and swap. Change all partition types to fd for "Linux raid autodetect" and don't forget to write the changes to disk.
The devices to use with fdisk are /dev/sda and /dev/sdb for SATA drives (as in the examples below); /dev/hda and /dev/hdc for IDE drives (IDE drives need to be master devices on their own cables); and /dev/sd* for SCSI drives. Create the RAID 1 devices, using the correct partitions:
---
mdadm --create --verbose /dev/md0 --level=raid1 --raid-devices=2 /dev/sda1 /dev/sdb1
mdadm --create --verbose /dev/md1 --level=raid1 --raid-devices=2 /dev/sda2 /dev/sdb2
,,,
Start the RAID devices:
---
raidstart /dev/md0
raidstart /dev/md1
,,,
Check the RAID rebuild status: --- cat /proc/mdstat ,,, Now create the LVM; first create the physical volume: --- lvm pvcreate -M 2 --metadatacopies 2 /dev/md1 ,,, Next, create a volume group: --- lvm vgcreate -A y -M 2 VolGroup00 /dev/md1 ,,, Then activate it: --- lvm vgchange -a y VolGroup00 ,,, Finally, create the logical volumes (/ (root); /usr; /var; /tmp; /home and swap). Appropriate sizes are indicated by G for gigabytes and M for megabytes.
---
lvm lvcreate -L 512M -n lvroot VolGroup00
lvm lvcreate -L 10G -n lvusr VolGroup00
lvm lvcreate -L 5G -n lvvar VolGroup00
lvm lvcreate -L 128M -n lvtmp VolGroup00
lvm lvcreate -L 2G -n lvhome VolGroup00
lvm lvcreate -L 1G -n lvswap1 VolGroup00
,,,
Logical volumes can be reduced or extended later, so they only need to be of sufficient size for the installation. When you're done, press Alt+F1 to go back to Disk Druid. Continue the configuration of the mount points and then the rest of the installation. Back to the list ****** Favicon problems in Firefox Q:: I use Mandrake 9.2 and Firefox 1.0.6. In my bookmarks there is a small icon next to each entry - some of the time it appears as a sensible picture with a logo or initial letters appropriate to the name of the website, but some of the time it appears as a broad backslash. Try as I might, I cannot change this icon or discover where the detail or description of it is. Now here is the silliness; next to the LXF entry in my bookmarks is the entry for Linux Magazine - and the icon is the same!
How can this be? How do I change it? A:: These are favicons, which were introduced by Internet Explorer. If a site contains a file called favicon.ico, most browsers will use it to identify the site in the location bar and bookmarks. Your copy of Firefox seems a little confused as to which Linux magazine is which, but there are a couple of ways to fix this. The brute force approach is to type about:config into the location bar and search for the entries 'browser.chrome.site_icons' and 'browser.chrome.favicons' - right-click each one to toggle it to false. Exit and restart Firefox, and clear the caches with Tools > Clear Private Data. Then re-enable favicons in about:config, restart again and visit the sites to reload the correct icons. A more targeted solution is to install the Favicon Picker extension. This enables you to change or delete favicons for individual sites. You can delete the incorrect icon, then revisit the site to have it load the correct one. Your version of Firefox is rather old, and has some security flaws. Updating would be advisable. If you do, you will need a newer version of Favicon Picker too, which you can get from http://forums.mozillazine.org/viewtopic.php?t=321562. Download and save the file, then press Ctrl+O and select the file you just saved to install the extension. Back to the list ****** Default permissions for sg0 won't let me access it Q:: I have a SCSI scanner, the node for which is /dev/sg0. I use KDE for my desktop and Sane and XSane to control the scanner. The problem is that the default permissions for sg0 do not permit me to access it. Whenever I click on the desktop icon for XSane, I get an error message that there is no device present. The device is owned by root and belongs to the group 'disk'. My user also belongs to that group. The default permissions are read+write for owner, but only read for group. Group requires read+write in order for XSane and Sane to see the scanner.
So whenever I need to scan, I have to open a console window as root and run the chmod command to alter the permissions. Unfortunately, the alteration only lasts for the session, and it reverts to the default on the next boot-up. This has only occurred since upgrading to SUSE 10 and never manifested itself on any of the earlier versions of SUSE. I was advised to write a new udev rule. Based on the information revealed by udevinfo I have made several attempts at writing a rule to set the mode - but have not (yet) had any success. A:: You are correct in thinking that udev is causing this, which is why it did not happen on earlier versions of SUSE - they did not use udev. You already have a udev rule in place, in /etc/udev/rules.d/50-udev.rules: --- KERNEL=="sg*", NAME="%k", GROUP="disk", MODE="640" ,,, All you need to get your scanner working is to change the MODE setting to 660, giving write permission to the disk group. However, changing this file is not recommended, because any subsequent update to udev will overwrite it and you'll be right back where you started. Instead, you should copy this line to /etc/udev/rules.d/10-udev.rules and modify it there. The lower-numbered file is processed first. To prevent its changes being overridden by a later rule, make the rule read: --- KERNEL=="sg*", NAME="%k", GROUP:="disk", MODE:="660" ,,, The := assignments ensure that the settings will not be changed by later rules. Depending on how many people use your computer, and who you want to use the scanner, you may prefer to create a separate scanner group, either in Yast or with groupadd scanner, and change the udev rule accordingly. Then you can add to that group only those users you want to be able to use the scanner. Back to the list ****** Restrict SSH access based on time, and print a usage message Q:: I have recently been given the task of running our internal Linux systems. We are planning to allow our developers to have remote SSH access.
One of the requirements is that all users connecting from outside be presented with a message stating the terms and conditions of usage. Could you give me some hints on how I could get this configured on a RHEL4 system? Also, do you know if it is possible to prevent logins between 2 and 4 am? I have some Cron jobs running at this time that are quite resource-intensive, and don't want people logging in and consuming more resources. A:: Restricting access to services is a common task that most system administrators need to do in the course of their work. There is more than one way to do this with Linux (see man motd and man issue), but it just so happens that PAM (Pluggable Authentication Modules) will let you do both of the tasks you are trying to accomplish. PAM is a powerful and versatile system that allows any program compiled with it to use its modules for authentication, accounting and so on. Each program has its own configuration file in /etc/pam.d. This is what /etc/pam.d/sshd looks like by default:
---
#%PAM-1.0
auth required pam_stack.so service=system-auth
auth required pam_nologin.so
account required pam_stack.so service=system-auth
password required pam_stack.so service=system-auth
session required pam_stack.so service=system-auth
session required pam_loginuid.so
,,,
For consistency, Red Hat configures PAM so that all modules that provide system authentication use stacked authentication rules (/etc/pam.d/system-auth). Since we do not want the message to appear for any other service, we need to change /etc/pam.d/sshd only. We will also add the pam_time lines to prevent SSH logins from 2 to 4 am.
This is what it would look like:
---
#%PAM-1.0
account required pam_time.so
auth required pam_stack.so service=system-auth
auth required pam_nologin.so
account required pam_stack.so service=system-auth
password required pam_stack.so service=system-auth
session required pam_stack.so service=system-auth
session required pam_loginuid.so
session required pam_motd.so motd=/etc/sshmotd
,,,
Now all you need to do is put the message of the day in /etc/sshmotd and add the following to /etc/security/time.conf: --- sshd;*;*;!Al0200-0400 ,,, You should be very careful with PAM, as it is a very powerful authentication mechanism that can lock even root out of the system. I recommend that you first try any changes in a testing environment. Back to the list ****** How to edit QuickTime videos in Linux Q:: I want to get into a little video editing on Linux. My digital camera (Kodak DX6490) is not really a video camera, but it does take videos in QuickTime (*.mov) format; these are not understood by Kino or Avidemux. I downloaded Cinelerra, but have seen other forums where this program has been called overkill for a video newbie like myself to be learning on. I tried rendering an AVI file from Cinelerra to port to other editing software and noted that the colours had changed (at least they appeared to have done when I played it in Xine). I don't know where to go from here. Is there a beginner's guide to simple editing to help me learn the basics of Cinelerra? Just enough to be able to cut sections from the videos, tie them all together and then save in whatever format? Also, is there a way that I can update my Fedora system so that Kino recognises the *.mov files (I think it is meant to, but there is something different about the ones made by my particular camera)? A:: You need the latest version of Kino - 0.8.0 - which supports import of various files by means of FFmpeg or MEncoder. Fedora includes Kino 0.7.6.
You can find a suitable RPM for FC4 in the Dries repository at http://dries.studentenweb.org/rpm/packages/kino/info.html, which should let you import any file that FFmpeg or MPlayer can handle. You are likely to hit a couple of gotchas when you first try to import files. You may get an error that reads: --- The playlist is empty and the default preferences for video creation have not been specified aborting. ,,, To fix this you need to go to the Defaults tab of the preferences window and set Normalisation to either PAL or NTSC (the default is None). If you have MPlayer installed, you might still get the error message: --- Failed to load media file. ,,, It appears that Kino doesn't always play nicely with MEncoder, which is Kino's default choice for loading most file formats. If this happens to you, open /usr/share/kino/scripts/import/media.sh in a text editor, as root, and change line 13 to --- which mencoderREMOVE > /dev/null ,,, This kludge makes Kino fail to find MEncoder, and use FFmpeg instead. You can change the MEncoder name to anything that the which command won't find; adding REMOVE makes it clear what you have done. Back to the list ****** Best Linux distributions to replace Windows in schools Q:: I am a network manager in a large secondary school. I am interested in trying to reduce ICT operating costs by using Linux and open source software. I currently use an FC4 server for our intranet and web server, but I would like to extend the use of Linux and open source software with additional servers for file sharing/printing and also eventually to classroom workstations. What distributions would you recommend for server and workstation? Do you know of any good sources of information for using Linux in education? A:: The best advice I can give is to download and test various different distributions until you find one that suits your needs and that you feel comfortable with. 
The website www.linuxiso.org has many Linux distros available for download and burning to CD/DVD, including FreeBSD and NetBSD (which are not strictly speaking Linux, but are free to download and try). Mix and match distributions in your environment until you find one you like. I'd recommend a distro that is compliant with the LSB (Linux Standard Base, www.linuxbase.org), which should mean that you get some compatibility between distributions and enables software applications to run on any LSB-conforming system. As to the question of which distribution is better to use in education, all the popular distributions that I have come across give opportunities to learn and develop the mind, so whichever one you pick for that purpose should be a useful asset to have in the classroom. And don't forget that you can legally copy and distribute all Linux distributions that are released under the GNU General Public License to your pupils, so they can continue to learn about the system at home. (For more on using Linux in schools, see our feature on p50.) Back to the list ****** SUSE screen resolution problems with Belkin KVM switch Q:: I have two computers with a Belkin KVM switch connecting them to a KDS Visual Sensation 190 monitor. Both machines are running SUSE 9.3 with no problems whatsoever. However, I have tried to install SUSE 10.0, and I keep running into the same problem. Once I have finished installing it, my monitor doesn't report its size and resolution to the OS. This isn't a problem on SUSE 9.3, because I just enter the size and resolution information from the manual and everything works fine. However, when I try this with SUSE 10.0, the system prompts me to test my settings (via Sax2), but doing so just seems to disable my monitor (the LED even alternates between green and yellow as it would during powersave mode) and I can't wake it up. Since it happens on both machines, I think that it is either the KVM switch or the monitor.
A:: The most likely culprit is the KVM switch, which is easy enough to test. Remove it and connect the monitor directly to one of the computers. As the problem seems to be the KVM preventing the software from interrogating the monitor, running it like this once should sort it out. Then connect the monitor directly to the other computer and repeat the process. Once X has been configured with your monitor settings, SUSE won't ask about it again, and you can reconnect the KVM. Incidentally, the same thing happens to me when installing SUSE on a VMware virtual machine, because it cannot identify the monitor. Choosing a suitable monitor from the list in Yast > Hardware > Graphics Card and Monitor sorts it out. Back to the list ****** OpenOffice.org won't recognise Canon MP 760 printer Q:: I am using SUSE 10.0, which incorporates OpenOffice.org 2.0 build 1.9.125.1.2. I cannot get it to recognise my Canon MP 760 printer as the default printer no matter how I try. If I set it up and test print, it is then recognised in OpenOffice.org. However, when I next restart my computer OpenOffice.org defaults to 'generic printer'. A:: If you are running the CUPS printing system (which is the default on SUSE), the generic printer should work. This is because OpenOffice.org is just passing the data to lpr, which uses the system default printer (which you have already configured in Yast). If you want to change how OpenOffice.org presents this, you need to run spadmin as root. Select System > Terminal > Terminal program - Super User Mode from the SUSE menu, then type: --- /usr/lib/ooo-2.0/program/spadmin ,,, You may now rename your printer, or add a new one. If the 'generic printer' works, just rename it to something more meaningful. You can't delete the default printer, so add a new one and make it the default before you try to remove the generic option.
Note that if you are running CUPS, spadmin will not allow you to add a new printer, because it expects CUPS to provide the information it needs. Press the New Printer button then Cancel in the Add Printer window to make OOo scan for printers and add them to its list. Back to the list ****** Mepis not recognising NFS network Q:: I'm enjoying Mepis, especially as it is the first Linux version I've tried in which Skype works well. However, I was disappointed that there was no sign of NFS, meaning I could not use my Linux network. The Mepis website was no help as others have the same problem. So I returned to Kubuntu, which does not seem too different and does most things well (though I regret that Midnight Commander is not available for either system). Can you give me instructions for getting NFS working in Mepis? A:: While Mepis comes with NFS enabled in its kernel, it does not have Portmap installed, which is needed to mount NFS partitions. Run Synaptic, go to Settings > Repositories and enable the first entry, the Debian one. Click on Reload to get the latest package lists, then use the Search button to find and install Portmap. You also need to check that the Portmap service is started when you boot - the installation should take care of this. You should now be able to mount the NFS export with the standard --- mount -t nfs hostname:/exported/dir /mnt/somewhere ,,, If you have already tried, unsuccessfully, to mount this export in the same session, you may need to reboot before it works. NFS can be a little quirky about things like that. Midnight Commander is available for Mepis, once you activate the Debian source that you needed to install Portmap. Again, search in Synaptic and you will find it. The package is called mc. Back to the list ****** Configure Apache to ignore WebDAV requests Q:: Our intranet runs on Debian Sarge, with around 1,500 Windows 2000 PCs that can access the web server (we're running Apache 2.0.54).
A few of these PCs seem to have a WebDAV service running that is trying to connect to the intranet web server, and it's filling up my logs. Is there any easy way of configuring Apache to simply ignore all requests made from this WebDAV service? The browser user agent is 'Microsoft-WebDAV-MiniRedir/5.1.2600'. A:: You can block (or allow) requests based on the browser's user agent with a combination of the SetEnvIf and Deny (or Allow) directives. These can be included in a <Directory> section of your httpd.conf or in a .htaccess file. As you want to block all requests for this user agent, I would recommend the <Directory> section corresponding to your document root setting. The directives to use to block this particular user agent are
---
SetEnvIf User-Agent ^Microsoft-WebDAV-MiniRedir BegoneWebDAV
Order Allow,Deny
Deny from env=BegoneWebDAV
,,,
The first line sets the environment variable BegoneWebDAV if the user agent begins with 'Microsoft-WebDAV-MiniRedir', which means it will still work when the version number changes. The next part denies access if this variable is set. The combination of SetEnvIf with Allow and Deny gives you a great deal of control over who or what can access any part of your site. For more information, see
---
httpd.apache.org/docs/2.0/mod/mod_setenvif.html#setenvif
httpd.apache.org/docs/2.0/mod/mod_access.html#deny
httpd.apache.org/docs/2.0/mod/mod_access.html#allow
,,,
Back to the list ****** Permission denied with Samba file shares Q:: My problem is with mounting a network share from a Windows file server that I connect to from my Red Hat Enterprise Linux 3 system. I mount the share as root.
--- [root@office root]# mount -t smbfs -o username=user,password=password \\\\fileserver.domain.com\\public /mnt/fileserver/ [root@office mnt]# pwd /mnt [root@office mnt]# ll total 16 drwxr-xr-x 2 root root 4096 Aug 15 19:31 cdrom drwxr-xr-x 2 root root 4096 Aug 15 19:31 floppy drwxr-xr-x 1 root root 4096 Nov 29 08:01 fileserver ,,, I can view, edit and delete anything as root, but as an user on the system I can't do those options as I just get this message: --- [dennis@office office]$ cd fileserver/ bash: cd: fileserver/: Permission denied ,,, I've changed the group and the permissions of the directory, with no luck. If you have any suggestions, they would be much appreciated! A:: All the credentials required to log onto the fileserver come from the command line you're using to do the mount. The permissions you have in place should be sufficient to allow the user to at least get a directory listing. One thing I can pick up from the information you've given me is that you're trying to cd into the fileserver directory from the office directory and not from the /mnt directory: --- [dennis@office office]$ cd fileserver/ ,,, Try the command again after running cd /mnt. If you're still having trouble, try getting a newer version of Samba - it's updated quite regularly on www.samba.org. There are binaries for Red Hat 9 that are fully compatible with EL3. You'll need to remove Samba and samab-common from the RPM database and install the single Samba RPM from ther site. Back to the list ****** Adding more information to the Bash login prompt Q:: How do I customise a login prompt? I want the penguin, CPU info, memory info, bogomips rating and an actual prompt. Can you help? A:: The text displayed on the console immediately before the login prompt is taken from the file /etc/issue. Put the text you want displayed, including any ANSI graphics, into this file. 
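As a small sketch of the sort of thing you can put there: agetty (and most getty variants) expand escape sequences in /etc/issue - for example \n for the hostname, \r for the kernel release and \l for the terminal line - so even a hand-written banner can carry some system information:

```text
Welcome to \n (kernel \r) on \l

Authorised users only - have a lot of fun!
```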
You can even use a Cron task to modify this file with time reminders, such as 'Mother's Day next Sunday' or 'Buy new Linux Format tomorrow'. If your artistic abilities fall short of creating ANSI penguins, you probably need the Linux_logo package, available from www.deater.net/weave/vmwprod/linux_logo and possibly in your distro's package repository. The Linux_logo man page lists various options to control the output. The example that you found looks as though it was created with --- linux_logo -c -y -k >/etc/issue ,,, Good luck! The surrounding graphic is a framebuffer splash screen, by the way, which is completely separate from the prompt. Back to the list ****** Back up profile information in Firefox and Thunderbird Q:: I'd like to back up my bookmarks, account settings and address book from Firefox and Thunderbird. I used them for a while in Windows, where I had a dedicated backup program - very useful when Windows threw a wobbly. But in Fedora I can't find either profile. The paths suggested on the Mozilla website don't seem to point at anything resembling either my personal bookmarks (although I did find the Red Hat bookmarks) or my mail files. Searching the filesystem doesn't turn up anything either. Evidently I'm not asking the system the right questions... A:: Firefox stores its settings in ~/.mozilla/firefox/default.???, where ??? is an apparently random string. Thunderbird uses ~/.thunderbird/default.???. For example, my bookmarks, settings and address book are stored, respectively, in
---
~/.mozilla/firefox/default.yyh/bookmarks.html
~/.thunderbird/default.piz/prefs.js
~/.thunderbird/default.piz/abook.mab
,,,
It is probably easiest to back up the entire directories with something like --- tar czf FFandTBsettings.tar.gz ~/.mozilla/firefox ~/.thunderbird ,,, You could use Cron to make automatic daily backups by saving this script in /etc/cron.daily/mozbackup:
---
#!/bin/sh
tar czf /somewhere/safe/FFandTBsettings-$(date -I).tar.gz /home/john/.mozilla/firefox /home/john/.thunderbird
,,,
Don't forget to set the executable bit or it will not run: --- chmod +x /etc/cron.daily/mozbackup ,,, Back to the list ****** Fedora not installing on partition created in Partition Magic Q:: I bought a copy of your Get Started With Fedora special and followed the instructions for installing the software, but I have run into a problem. I have Windows XP installed, and partitioned my hard drive with Partition Magic to give 10GB of hard disk space to Linux. The CD runs OK until I get to step six of the installation guide in the mag, and then it keeps coming back telling me it cannot install. I told Partition Magic that I wanted to install Linux - do I have to chop that partition into pieces to install the three directories that I need? A:: Linux partitions created with Partition Magic can sometimes be problematic. Many distros' installers have an option to resize your Windows partition and create the Linux partitions. When this is an option, use it. Fedora does not give you this choice, so the best approach is to use Partition Magic to resize your Windows partition, as you have already done, but leave the space for Linux empty. To go ahead, simply delete the Linux partition(s) you have already created. Then run the Fedora installer and select Use Free Space at the partitioning stage. The installer will take care of creating the partitions it needs in the empty space you have left. The important point to remember is that the free space it refers to is unpartitioned space on the disk, not free space within an existing partition.
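If you want to confirm what partitions actually exist before re-running the installer, the kernel's view can be listed from any Linux shell (the Fedora installer normally offers one on a spare virtual console):

```shell
# List every disk and partition the kernel knows about; comparing the
# partitions' block counts with the whole disk's shows how much space
# is left unpartitioned.
cat /proc/partitions
```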
Back to the list ****** How to install software from tarballs Q:: I installed SUSE 10.0 three weeks ago, and I've not looked back since. I hope to eventually defect fully to Linux. One problem I cannot figure out comes with installing certain applications, particularly tarballs. I understand about untarring them and changing into that particular directory, but when I enter ./configure I get this: --- bash: ./configure: No such file or directory ,,, I'm a little confused by this, as I've followed lots of people's advice, tried carrying out this task using root and still can't get my head around it. I feel like I'm missing something really simple, but have to admit defeat. A:: It is often said that running ./configure is the first step after unpacking the tarball; in fact it is the third. The first two are: 1 Look for any files containing installation instructions. 2 Read those instructions thoroughly. Most source code tarballs contain files called README or INSTALL, which you should read to see how to install them. The standard installation method for source code of
---
./configure
make
make install
,,,
applies to more than 90% of Linux open source programs, but there are exceptions. In some cases there is no configuration to do, particularly with very simple programs, and you only need make followed by make install. In other cases, the program uses a different installation method. In either case, you must read the instructions before you proceed. While it is unnecessary - and some say undesirable - to run the first two steps as root, make install generally needs root access as it copies files into system directories. As such, it is potentially dangerous, so you should not run it without looking at the instructions first. Another step worth taking when using ./configure is to run it with the --help option first.
This gives you plenty of options for controlling where the program is installed (not something you should change lightly) and which features of the program should be compiled and installed. Back to the list ****** Restricting services to a single network interface Q:: I have a system that is going to be sitting on my internet connection and file sharing on my local LAN. It has two Ethernet cards for this purpose. I am going to configure Iptables, but also wondered if there were a way of restricting services (Samba, NFS etc) to only a single interface - in my case, the internal LAN connection. Is this done in each service or can it be set on a global scale? I shall be using either Fedora or SUSE. A:: There are three ways to handle this. The first is to set up each service to listen only on the LAN interface. If you only run a small number of services, this may be the easiest solution and certainly offers the most control. Check the man pages for each service and add the appropriate lines to the configuration files. Assuming your LAN interface has an IP address of 192.168.0.1, and your other interface has an address on a different subnet, usually supplied by the ISP, you could do the following:
---
Add 'Listen 192.168.0.1:631' to /etc/cups/cupsd.conf.
Add 'socket address = 192.168.0.1' to /etc/samba/smb.conf.
Add 'Listen 192.168.0.1:80' to /etc/apache2/httpd.conf (the location of this file may vary).
,,,
NFS is slightly different in that you specify the client addresses allowed to connect, so for each export you would have a line in /etc/exports like --- /path/to/export 192.168.0.0/24(rw,sync) ,,, The second method is to use Iptables to block all access from the internet to the ports of the various services on the WAN interface. You can do this on a per-port basis, but if you are doing that you may as well configure the individual services as above. Alternatively, you could block all incoming access, which is the default setting for most Linux firewalls.
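As an illustrative sketch only (eth0 as the WAN card and eth1 as the LAN card are assumptions you would adjust for your system), a default-deny policy of this kind looks roughly like the following in iptables-save format:

```text
# Hypothetical rules in iptables-save format: drop everything arriving on
# the WAN side, trust the LAN side, and allow replies to outgoing traffic.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -i eth1 -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
COMMIT
```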
If you take this route, you can then open up specific ports for any services you may wish to let through, such as SSH. While configuring Iptables by hand is possible, it is also possible to inadvertently leave a security hole if you are not totally familiar with it. The safest approach for anyone but Iptables experts is generally to use one of the GUI or script-based configuration tools, such as Guarddog or Shorewall. Fedora and SUSE both have tools for easily setting up Iptables to do this. The third option is to block access at your modem or router. This is in some ways the safest method, because you are stopping the traffic before it even reaches the computer, but it is not always as configurable, depending on your modem or router. These three methods are not mutually exclusive - you could implement two, or even all, of them to provide belt-and-braces security. Back to the list ****** Share with a Windows machine using Samba and CIFS Q:: I have a Linux machine at 192.168.1.1 connected to my wireless router, which contains a backup of my Windows laptop that I FTP up every now and then. Is there a better way to do this and perhaps store all my documents on my Linux machine and connect to them over the network? A:: An easy solution is to use Samba and set up a CIFS (Common Internet File System) server on your Linux machine. First install a recent copy of Samba (www.samba.org) and find the Samba configuration file: usually /etc/samba/smb.conf. The configuration file is split in two: global settings and share definitions. The global settings control how the CIFS server works and can be used to control anything - from what interface the server listens on, to Windows Active Directory domain controller settings. Here, the out-of-the-box global settings will be sufficient. Now set up a share for your files. Let's say that the files exist on the filesystem as /export/share.
You will need a CIFS share name and description, which we will call 'myshare' and 'all my files' respectively. Now, as we have multiple wireless users on the network, we want to lock down this share so that only Fred and Mary can access it, giving them read and write access. Add the following to smb.conf:
---
[myshare]
comment = all my files
path = /export/share
valid users = mary fred
public = no
writable = yes
printable = no
create mask = 0765
,,,
The main step is complete, but we still need to add authentication credentials for Fred and Mary. To do this, use smbpasswd as root:
---
# smbpasswd -a fred
New SMB password:
# smbpasswd -a mary
New SMB password:
,,,
Finally, make sure Samba is running - if it isn't, start it. On your Windows laptop, you will now be able to map your CIFS share at \\192.168.1.1\myshare using the credentials for either Fred or Mary. Back to the list ****** Mount LVM partitions from an external hard drive Q:: Is it possible, and if so how, to mount LVM partitions from an external hard drive? I'm thinking of my old Fedora system drive from which I would like to retrieve a single file without having to boot from it. A:: As long as you have the LVM tools installed on the distro you are booting, you can mount LVM partitions from any disk (I even did it from a USB key once). Run
---
vgscan
vgchange -a y
,,,
as root and all the partitions should have devices created in the form /dev/volumegroup/logicalvolume, which you can then mount in the usual way: --- mount /dev/volumegroup/logicalvolume /mnt/somewhere ,,, Back to the list ****** External hard drive light constantly flashing in Mandriva Q:: I have Mandriva 2006 on a box with two hard drives. The second is a backup partition. After loading the KDE desktop all appears OK; however, the indicator LED for the hard drive keeps flashing on and off faster than I can count (several times a second). It also does it on an older box that I use just to try out any software.
I was advised to remove Kat, but that did not stop it. I tried SUSE on the old box and there was no problem, but I would like to stick with Mandriva if I can. I don't know why Mandriva is trying to wear out my hard drive, but I want it to stop doing it! Any suggestions to this end would be most welcome. A:: Removing Kat makes sense as a first suggestion. Kat indexes all files in your home directory, which means that during its first run it has a good stab at making a fast 64-bit computer emulate the speed of a Sinclair Spectrum. However, there are other programs that will scan your hard disk from time to time, the most likely being Updatedb, which builds the database for locate. This is run as a Cron task - one that Mandriva defaults to running once a week, in the early hours, so you wouldn't normally notice the disk activity. If you have the anacron package installed, this will run any Cron jobs that were missed because the computer was turned off, which could account for the disk activity after you turn it on. Run top in a terminal while the disk light is flashing and nothing else is running. This will show the tasks using the most CPU time, which would usually include whatever is hammering your hard disk too. If the culprit is Updatedb, there's nothing to worry about. It normally only takes a few minutes to update its database, once a week, and it is well worth having. Try leaving the computer switched on for a while. If the light is still flashing after, say, half an hour, something would appear to be amiss and you should use top to find what it is. It is worth noting that your 'hard drive' LED actually monitors activity on the IDE bus, so it doesn't necessarily mean that your hard drive is being used. For example, checking whether a disc is in the DVD drive can cause this light to flash. It could be something as innocuous as Partmon, which warns you if a partition is close to full. 
You can disable this in the System > Services section of the Mandriva Control Center. Back to the list ****** Can't install VMware Player at the command line Q:: I'm relatively new to Linux. I got my first distro and am very eager to install VMPlayer to test other distros, but the installation instructions do not work for me. I've unpacked the file VMware-player-1.0.1-19317.tar.gz. The unpacking worked fine. But the next instruction (./configure) seems to fail. I believe I am in the correct directory (vmware-player-distrib), but the message received when I key in ./configure is: --- bash: ./configure: No such file or directory ,,, I think it has something to do with my SUSE 9.1 install. My guess is that I do not have a command called configure. A:: Well, configure is a script included with many source code tarballs - the './' refers to the current directory, so you are running (or trying to run in this case) a program from the unpacked tarball, not a command installed on your system. The majority of Linux open source programs use this to check your system for any dependencies the program needs and to set things up ready for compilation and installation. The reason why it doesn't work in your case is that VMware Player is a precompiled binary program, with a different installation method. The commands you need to run after unpacking the tarball are --- cd vmware-player-distrib ./vmware-install.pl ,,, You may need a C compiler and your kernel source installed for VMware Player to configure itself after installation, because it needs to install a module to match your kernel, and it will compile one for you if there is not a pre-built module that suits your system. You can install these from Yast. The packages you need are gcc and kernel-source. Back to the list ****** Best Linux distro for an IBM ThinkPad T22 Q:: Can you make any suggestions for a distro on a laptop, an IBM T22 with 256MB of RAM?
I have tried the Live CD Damn Small Linux and it works great with my Winmodem. I would like to install it to the hard drive. Any suggestions? A:: IBM laptops are well supported by most Linux distros, because IBM provides the information and driver code that they need. Most Live CD distros have a hard disk install option, including Damn Small Linux, although this is probably not the best choice for you. DSL is really intended to be used as a Live CD, and keeping a hard disk version up to date would need work. I would recommend you try a number of Live CD distros and pick the one that detects all your hardware and you like best. Alternatives include: Knoppix (www.knoppix.com), which gives you a modified Debian system when installed; Ubuntu (www.ubuntu.com) and Kubuntu, both based on Debian - the former uses the Gnome desktop while the latter has KDE; PCLinuxOS (www.pclinuxos.com), a Live CD that installs to hard disk and provides easy updates; and Kanotix (http://kanotix.com), another Debian-based Live CD with one of the easiest installations I've seen. Back to the list ****** Safe for users to upload PHP scripts? Q:: My server hosts about 50 websites for a number of my customers. Most of them have some form of dynamic content, usually PHP based, while some use phpnuke and phpbb. I'm quite an experienced system administrator, if I say so myself, but I'm not a programmer. What level of risk is my server at by enabling my customers to upload their own PHP pages? Is there anything I can do to get better security from this? A:: Because Apache doesn't run as root, your system shouldn't be wide open. However, if you have some bad code on your system, an attacker could still get shell access and run commands, albeit without any privilege. Usually when someone has some exploitable PHP/CGI code, it enables you to import your own snippet of code by using remote URL execution (PHP's remote 'fopen' support). Typically, this is fairly obvious when you manage to find the hack because there will be a backdoor process running as Apache.
This will be listening on a high port and the binary will often still be left in /tmp or /var/tmp. Running through the access logs, you'll see where the hits were made and what commands they ran, usually wget'ing some C file and compiling it, then running it, thus spawning a backdoor. You could really bolt down PHP to not allow much command execution at all, but this may be counter-productive. Many PHP-based applications, such as phpnuke, phpbb and so on, will require some loosening of restrictions to work. Ideally, sysadmins are supposed to keep an eye out for outdated software being loaded onto their servers, such as exploitable phpnuke or phpbb. This doesn't scale very well though, and as you get more users, this can become more difficult. An alternative option is to set up a custom partitioning scheme where /tmp is a 'noexec' mounted partition, thereby preventing scripts from being executed when downloaded to /tmp. This can be implemented using a /tmp loopback file too (with /var/tmp symlinked) and it works really well. The only potential issues here are that tmp can fill up more easily since it doesn't have the full space allocation of the whole drive (this may be a feature though!), but if you start it at around 1GB, this should be large enough. Also, if /tmp is done as a mounted loopback file, the file size (partition) could be expanded to whatever size is necessary and then remounted. Back to the list ****** Getting an external CDR to work Q:: I have started to migrate from Win2K to SUSE Linux 9.0, but can't start to install personal data in my partition until I'm able to back it up. Though I have tried to test the CD software K3B, I have an external HP 8200 series (8220e/8230e) USB CD writer, for which I need a device driver. I've searched the web and checked HP's website, but without success. Where can I get a suitable driver? 
A:: There are two prerequisites for burning CDs under Linux: the device itself must be accessible, and the burning software must understand the protocols used by the drive so it can format the data to be written. In the case of an external CD drive, the first is taken care of by the USB mass storage driver. This will just work, in general, and Linux certainly shouldn't have any trouble recognising this device, which will be set up as an emulated SCSI drive. The second requirement is usually fulfilled with Cdrecord. It seems that your drive is supported with Cdrecord from version 1.10, so assuming you have that, it shouldn't be a problem. Tools like Xcdroast and K3b are merely front-ends to various tools, and will use the version of Cdrecord you have for actually writing discs. To check that your drive can be seen, open a terminal and type: --- cdrecord -scanbus ,,, You will hopefully see, among everything else, a line that begins with three numbers and includes some text identifying the drive. If K3b isn't recognising the drive, try running usbview to check that it is being picked up by Linux. Back to the list ****** Block Apache access from malicious user agents and spiders Q:: I run my own small site and it seems most of my traffic is from web crawlers. How can I control access to my Apache web server from potential malicious user agents, crawlers, spiders et al? A:: Web crawlers and spiders can be used to pirate content and gain information about the structure of your website that you may want to keep hidden, and have been known to bring sites down due to the load they can put on a server. These agents are also commonly used by search engines to catalogue the content of websites. This is all well and good but if you do not want your site to be searched in this manner it is a good idea to block the associated agents that do the searching, and take some load off your server at the same time. 
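A polite first line of defence, before any server-side blocking, is to publish a robots.txt asking crawlers to stay away; well-behaved ones will honour it. A sketch, written to a scratch path for illustration (on a live server the file belongs at the web root, for example /var/www/html/robots.txt; the agent name and the /private/ path are just examples):

```shell
# Write an example robots.txt. The first stanza bans one named agent
# from the whole site; the second asks all other agents to avoid a
# hypothetical /private/ area.
cat > /tmp/robots.txt <<'EOF'
User-agent: WebCopier
Disallow: /

User-agent: *
Disallow: /private/
EOF
cat /tmp/robots.txt
```

Badly behaved crawlers simply ignore this file, which is where blocking in the Apache configuration comes in.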
Most of the time well-behaved web crawlers will read the robots.txt file at the root of the website. But if they don't, we have to adopt stronger tactics. One way to achieve this is to block using the HTTP header information. Though there are ways around this type of filtering, this is a good first step and in most cases is all you require to block this type of access. You need to change 'webcopier' to a string that is being sent from the spider. Try --- setenvif User-Agent ^webcopier block <Limit GET POST> Order Allow,Deny Allow from all Deny from env=block </Limit> ,,, or --- RewriteEngine On RewriteCond %{HTTP_USER_AGENT} ^WebCopier [NC] RewriteRule ^.* - [F,L] ,,, Back to the list ****** Fedora software installation problem: 'unable to retrieve software information' Q:: I have installed Fedora, and all seems fine - with one exception. When I try to run Add/Remove Programs or Package Updater, I am prompted for the root password. Then I get the error messages: 'Unable to retrieve software information' or 'Unable to retrieve update information'. The only deviation from the default installation is that I do not use the Logical Volume Manager; instead I manually partitioned the hard disk into /boot (100MB), / (38GB) and swap (1024MB). Also, this PC is not connected to the internet. I have tried installation both from CD and from your supplied DVD. But I still get the same errors. A:: The lack of an internet connection is the reason for these messages. Both programs try to read information from online software repositories to do their jobs. In the case of the Software Updater, this is inevitable and unavoidable: by their very nature, updates are newer than the packages on the installation media, so it is not possible to use this feature without an internet connection. To prevent this error with Add/Remove Programs, you need to edit the repository files to disable all online sources and add one for the DVD. You need to be root to do this.
Load /etc/yum.repos.d/fedora-core.repo into your favourite text editor, find the section starting [core] and comment out the baseurl and mirrorlist lines by placing a # at the start of each line. Then add a new line reading --- baseurl=file:///media/disk ,,, This points the repository at /media/disk, where the DVD is mounted. You then have to edit the other .repo files and change any occurrences of enabled=1 to enabled=0. Now the only repository that is enabled is the one for the DVD, and running Add/Remove Software should allow you to install software from the disc. Back to the list ****** Qmail delays in sending mail Q:: I have been experiencing delays in sending mail through my Qmail-enabled mail server. I have tried to make things go faster, but to no avail. Could you give me a list of things to check that might be causing the delays? A:: The most common reason behind delays is DNS lookups. First and foremost, please make sure that the server's hostname is resolvable. Also, a PTR record should be created for the IP address that the server uses to send mail out on. You could even speed up lookups by running your own local caching name server. Optionally, you might want to disable DNS lookups altogether. If you run qmail-smtpd through the tcpserver wrapper script you should add the -H flag to its options, so that it doesn't look up the remote host name in DNS; and remove the environment variable $TCPREMOTEHOST. To avoid loops, you must use this option for servers on TCP port 53. If you're not using tcpserver, you're probably running qmail-smtpd through inetd/xinetd; in that case add the -Rt0 flags to your configuration, under server_args in your inetd/xinetd configuration file. That will prevent Qmail from performing ident requests when an SMTP connection is established. When ident requests do occur, they manifest as a delay between the TCP connection being established and the banner being displayed.
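The forward and reverse lookups mentioned above can be sanity-checked with getent, which queries the same resolver libraries the mail server will use. A sketch using localhost so it runs anywhere; substitute your server's real hostname and public IP address:

```shell
# Forward lookup: does the hostname resolve to an address?
getent hosts localhost
# Reverse lookup: does the address map back to a name (the PTR record)?
getent hosts 127.0.0.1
```

If the reverse lookup of your public address comes back empty, ask whoever controls that address space (usually your ISP) to add a PTR record.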
On a related issue, if you have a queue that is constantly filled to the brim, you might want to add the file /var/qmail/control/queuelifetime and set it lower than the default of seven days, which means that emails that soft bounce will be retried for a week. A value of one to two days is more reasonable. Together, these steps should noticeably reduce Qmail's delays. Back to the list ****** Linux solution to automatically back up and restore hard drives Q:: A friend of mine has a small law firm (six users using Windows XP Pro, one Linux proxy/mail server). They often do 'bad things' to their machines, and he calls me to fix them. Usually this means a Windows re-installation after backing up every document they saved here and there. Thus, I am trying to find and set up a totally automated Linux-based disaster-recovery solution that will back up the whole disk once I install everything (such as Ghost or G4L) and every night automatically back up every workstation - so, in case the 'bad thing' happens all they should do is boot from another computer on the network or boot from a CD and have their system recovered by getting the image files from a local backup server. A:: There are two separate issues here. The first is a complete backup that can be restored from a CD or over the network for a complete reinstall in the case of a total disaster. The second is regular backups of data. For the first task, you can't really go wrong with Partition Image - www.partimage.org. This is a Linux program that has a client-server option. You could run the server on your Linux box and use a Live CD to create images of each of the Windows computers' hard disks for recovery. You would need a Live CD distro, which could be used to restore the disk from an image file on the server. RIP (Recovery Is Possible) is good for this (www.tux.org/pub/people/kent-robotti/looplinux/rip).
The documentation gives detailed instructions on modifying the CD image to suit your needs, so you could add a short shell script and call it from /etc/rc.d/rc.local to automate a full system restore when booting from the CD. For the nightly incremental backups, BackupPC (http://backuppc.sourceforge.net) is a good option. This will run on the Linux server, and requires no special software installed on the Windows PCs, as it accesses them via Samba. All you need to do on the PCs is set up shares, so BackupPC can get at the files. All the work is done from the Linux box, so a simple Cron task will run the nightly backups. BackupPC has a web interface, so users don't need to learn any arcane commands to recover files from the backups. This program is particularly good when backing up a number of similar PCs, because it stores single copies of files that exist on multiple computers. Combined with compression, this significantly reduces the space needed to back up a network. Back to the list ****** Finding desktop capture (screencasting) software Q:: I am trying to find an application that will record whatever I do on my machine so that I can make a small movie of what I am doing. Can you recommend any software for me? A:: There are a number of solutions to this, depending on what you want to do with the movie. If you want to publish your video on the web, Vnc2swf may be the best choice. This records a VNC session as a flash animation. You'll need VNC installed (or Tightvnc, from www.tightvnc.com). VNC is designed for running a remote desktop, but you can also run it on just one computer. Start a VNC session with --- vncserver -depth 16 -geometry 800x600 ,,, and you will see a line like: --- New 'X' desktop is yourhostname:N ,,, The last part is the hostname and display number. If your computer is not networked, you can use localhost. 
Now start recording the session with --- vnc2swf -startrecording -geometry 800x600 -depth 16 -framerate 5 demo.swf yourhostname:N.0 ,,, Make sure the geometry, depth, hostname and display match the VNC server you just started. The .0 at the end is compulsory. A new window will open containing the VNC desktop session and anything you do in here will be recorded to demo.swf. End the recording by closing the window. The program will output some suitable HTML for viewing the Flash animation in a web browser, which you can redirect to a file if you wish. This size and frame rate are suitable for web use, but to display a local demo directly on a monitor or projector you may wish to increase both. To generate a movie file, you can use Vncrec. This works in a similar way to Vnc2swf, but creates a file in its own format, which you can convert to AVI or MPEG with transcode. --- vncrec -record demo.vnc transcode -x vnc --use_rgb -y xvid -k --dvd_access_delay 5 -f 10 -i demo.vnc -o demo.avi ,,, As before, the geometry used here must match that with which the server was started. The -f option sets the frame rate of the video. The resultant file can be played with any video player, such as MPlayer or Xine. Whichever recording software you choose, if you want a program to be running at the start of the recording, start it from ~/.vnc/xstartup: --- ooimpress sample.pps ,,, An alternative approach is Istanbul, from http://live.gnome.org/Istanbul. This is a Gnome program, but works with other desktops. It puts an icon in the panel: click it to start recording and click it again to stop. The result is saved as ~/desktop-recording.ogg, as a Theora video. This can be limiting compared with the alternatives, but it is quick and easy to set up. Back to the list ****** File permissions changed to root after distro switch Q:: I recently switched distros from Xandros to Fedora. 
I transferred 3GB of data only to find that all the files in my home directory have root as both file owner and file group. Is there a script I can use to change all the permissions to my user name? A:: If you've copied your home directory (which will look something like /home/dave) from one machine to another, the easiest way to restore ownership on that directory is to run a recursive chown as root on /home/dave, changing the user and group in one pass. It should be safe to perform this on your home directory, as it usually only contains files and directories owned by the user and group of the user whose home directory it is. --- chown -R macdaddy:macdaddy /home/macdaddy ,,, If you have multiple files and directories owned by different users and groups, you will need to do a search and replace to change the ownership. So if user 'dave' has numerous files and directories throughout /var/www/html and you want to change the ownership of those files to user and group 'bigmac', you could run chown -R on those directories to change ownership. The problem with this is that it may change ownership of files that you didn't want it to. In this case, use the find command to perform the search and replace on the ownership, ensuring that those directories not owned by Dave are left as they are: --- find /var/www/html -user dave -group dave -exec chown bigmac:bigmac {} \; ,,, This will find any files and directories within /var/www/html belonging to user and group dave, then change the ownership to bigmac. The {} gets replaced with the files and directories found matching the -user and -group criteria, and the \; is necessary to escape the ; to the shell and to tell find that the argument list has finished. So, assuming you have a standard home directory, the easiest way to change ownership in one go would be to use the chown -R command. Keep in mind that this method is not applicable for all locations on the filesystem though!
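One way to stay safe with the find recipe above is to preview what it would do before letting it loose: substitute echo for chown and nothing is actually changed. A sketch on a scratch directory (the bigmac user is as hypothetical as in the example above):

```shell
# Build a small tree owned by the current user, then print the chown
# commands that would run, without executing any of them.
mkdir -p /tmp/chown-demo/sub
touch /tmp/chown-demo/a /tmp/chown-demo/sub/b
find /tmp/chown-demo -user "$(id -un)" -exec echo chown bigmac:bigmac {} \;
```

Once the printed list looks right, drop the echo and run the real command as root.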
Back to the list ****** Make KDEinit look in the right place for Thunderbird Q:: If I click on an email address in KDE, I get this error: 'KDEinit can not start /usr/share/application/thunderbird/thunderbird'. Thunderbird is installed at /opt/thunderbird. I used to run SUSE, but now I run Gentoo, and when I transferred the /home directories I must have transferred something over that KDEinit uses but I can't work out what. Could you please tell me how I change this so that KDEinit looks in the right place? A:: It looks like KDE is looking in the wrong place for Thunderbird. As with most KDE options, you change this in the KDE Control Centre - though finding the right place there can be tricky: there are so many options, and they are not always where you expect to find them. The Control Centre does have a search option, which usually helps - but not in this case (at least not with KDE 3.5.3). The option you want is in KDE Components > Component Chooser > Email Client. Select the Use A Different Email Client radio button, then click on the small icon to the right of the text box to open the applications selector. By choosing the program from here, you ensure that you have the correct path. This will open Thunderbird, but without the recipient address or any other data. To fix that, add the following to the string used to start Thunderbird: --- -compose "mailto:%t?subject=%s&body=%B" ,,, Hover the mouse over the text box to see the available options. Back to the list ****** Fedora: which network interface card is being used? Q:: I've just installed Fedora and am a bit confused as to which of my network interface cards is the one in use; I have two, and on the previous install eth0 was used as the default.
Here is the output from ifconfig: --- eth0 Link encap:Ethernet HWaddr 00:30:18:58:4A:A3 inet addr:192.168.1.101 Bcast:192.168.1.255 Mask:255.255.255.0 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 eth1 Link encap:Ethernet HWaddr 00:50:BA:B3:B1:A5 inet addr:192.168.1.152 Bcast:192.168.1.255 Mask:255.255.255.0 inet6 addr: fe80::250:baff:feb3:b1a5/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:258479 errors:0 dropped:0 overruns:0 frame:0 TX packets:264885 errors:0 dropped:0 overruns:13 carrier:0 ,,, While the network is working, it looks to me as if all traffic is going through eth1. Could you shed some light on this? A:: Are you using DHCP on both network interface cards? If so, here's what's probably happening: each interface's DHCP client sets the default gateway when its lease is obtained, so the last interface to come up (eth1 in your case) resets the routing table's default gateway, overriding the route set when eth0 was configured. You can check this by running --- route -n ,,, The line that shows a destination of 0.0.0.0 will end with whichever interface is used as default. Is there a reason why you use two NICs? If so, you probably need to set up the default route manually. Otherwise, disable the 'activate device when computer starts' option for one of the NICs in the network configuration program. Back to the list ****** Apache connecting to other ports on the same server Q:: I run a hosting service with over 100 domains configured on it. Our server seems to have been sluggish for the last few days. I did some preliminary tests (using netstat) and noticed that there were a lot of connections from my server on TCP port 80 to my server on the ephemeral ports. From the output that I got I understand that I have connections originating from Apache on port 80 to the various other ports on my server. But why? How can my server be browsing my own websites? I run Apache 2 on Red Hat Enterprise Linux 3.
A:: This certainly sounds like one of your more recent websites is responsible for this new behaviour. From the netstat feedback that you describe, I would say that some of that site's code is triggering connections back to your own web server. By analysing what your Apache server is doing, we can compare the number of netstat connections with what Apache is serving out. Enable this by placing the following in the Apache configuration file (/etc/httpd/conf/httpd.conf): --- ExtendedStatus on <Location /server-status> SetHandler server-status </Location> ,,, If you browse to www.domain.com/server-status?refresh=5, you will get a view of your server's status that refreshes every five seconds. Pay particular attention to CPU usage (CPU) and the number of seconds since the beginning of the most recent request (SS). Also, by correlating the number of connections from the netstat output with the number of connections on a particular virtual host, you will quickly find the culprit! Back to the list ****** Sendmail: nothing gets sent to the outside world Q:: I am in the process of setting up a server to host our laboratory management software. I've gone for a Kubuntu server install with Sendmail - this is where the problem starts. I've run through the basic Sendmail configuration, leaving things alone as I understand almost nothing about Sendmail's config! It will send mail to people on the local network (eg john@localnet.co.uk) but nothing gets to the outside world (eg john@hotmail.com). I'd love some advice - or maybe a tutorial on setting up a mail server? A:: Sendmail is not the ideal choice for you. While it is undoubtedly a powerful mail server, it is also difficult to configure. Postfix or Exim would be a better choice; both of these are available as packages through the Ubuntu repositories (Postfix is the default mail server and is on the Ubuntu installation CDs).
These servers have heavily commented, plain-text configuration files that make learning to configure them much simpler than battling your way through Sendmail's dense configuration options. Whichever server you choose, you should consider using Webmin to configure it. As well as presenting you with the options through a friendly web front-end, it makes it harder to misconfigure the server in ways that could lose mail or compromise security. You can still read or fine-tune the config files by hand if you wish, so Webmin helps you learn the configuration options rather than hiding them. Whichever server you end up running, the logfiles should provide a reason for the failure. Run --- tail -f /path/to/logfile ,,, and try to send a mail to the outside world. You should see an error message relating to the failure. This could be anything from a DNS failure (although that would be unlikely if other internet activities work) to your ISP blocking outgoing SMTP traffic. Many ISPs do this as an anti-spam measure, either redirecting all SMTP traffic to their own mail server or blocking it entirely. If this is the case, you need to set up your mail server to use your ISP's server as a 'smarthost'. This means that all mail not for your local network is sent via that server. To do this with Sendmail, put the following in sendmail.cf: --- DSmail.isp.com ,,, replacing mail.isp.com with your ISP's SMTP server. In Postfix, the line is --- relayhost = mail.isp.com ,,, If you use Webmin, this is the first option in the Sendmail module and the fourth in the Postfix module. Back to the list ****** SSH, mod_rewrite and Apache Q:: At my workplace we have a server running the usual Linux, Apache and MySQL combination, acting as a development and testing server for around 100 sites we're building or have built. The server is only open to access from the internal network, apart from SSH access to the outside world.
I now have to do some work from home but this needs to be done over a secure connection and SSH tunnelling seems like a very sensible method. The problem is that the Apache server uses mod_rewrite to route http requests to the relevant site directory, but as I'd be connecting to the server through an SSH tunnel, I can't access the server through different hostnames. Is anyone aware of a method I could use to see any of the sites without changing the server setup too drastically? A:: Using SSH, you would port forward tcp/80 from the web server onto a port on the local system, such as 127.0.0.1:8080. Hostnames can be maintained by modifying /etc/hosts, adding the appropriate site names and pointing them at 127.0.0.1; you would then browse to those names on the forwarded port. An alternative to SSH would be to use IPSec, which would allow for the same DNS configuration. However, the firewall would have to allow IPSec tunnels to be established and the appropriate rules constructed. Applications such as Vtun and OpenVPN provide a similar capability using a user-space tool, although access to a system on the border of the network would be required. Back to the list ****** XOSL problem: installing Damn Small Linux Q:: I have a PC box with multiple partitions and a few Linux distros installed to have a play with before I settle down to make one of them my favourite. I'm using XOSL as boot manager, and it works happily with a number of distros - and even things from Redmond! But it is seriously flummoxed by the Damn Small Linux distro. The DSL hard disk installation script offers no choice over where the Lilo or Grub boot manager writes its stuff to - it always goes straight into the master boot record of the hard disk (the very same spot occupied by XOSL!). So when I restore XOSL, it finds all the other OSes again, but not DSL. Or the PC boots only to DSL. They don't play nicely together.
For the benefit of a beginner, could you please give a suitable guide to setting up the bootable bits (either Lilo or Grub will do) on to the partition that DSL is installed on, so that XOSL can find it and start it? A:: Installing Grub to a partition instead of the MBR is easy, so it's a shame that DSL does not offer this option. For the sake of this example, we will assume that DSL is installed on /dev/hda5. Boot into DSL, open a root terminal and run grub. This will put you in the Grub shell, where you type --- root (hd0,4) setup (hd0,4) quit ,,, Grub counts from zero, so the first disk, fifth partition (hda5 in Linux terms) is hd0,4. Now you have a bootloader for DSL installed to the partition and you can tell XOSL to boot from this partition. When XOSL boots DSL, you will get the Grub menu - which may be a little pointless as you have already chosen which OS to boot. You can get rid of it by editing /boot/grub/menu.lst and changing the timeout line from 15 to 0. If you want to be able to choose from the options DSL offers in its Grub menu, set the timeout to a low, but non-zero value. Back to the list ****** Specialised UK ISPs for Linux users Q:: I am struggling to find a reasonably priced broadband provider that deals with Linux. I looked for a high-speed dial-up for Linux, which was not successful. So I examined my download statistics under various distros. Fedora comes bottom, with a peak of 1.8kB/s and an average of about 0.7. Fedora 4 and SUSE achieve about 3kB/s max, with an average download speed of about 1.5kB/s. Knoppix 4 is a little better. Xandros 3 gets about 4kB/s, averaging about 2kB/s. The best is Mandriva 10.1 (using Mozilla and Epiphany), which peaks at about 13kB/s and averages about 6kB/s. These have been tried with a variety of connections and at a variety of times, but the results were pretty consistent - they all do badly about 7:00 pm and 10 am, and all seem to do best about Sunday morning. I am using a 56k external serial modem.
Any ideas on getting my average speed up to double figures? PS Any ideas on networking two Linux boxes using different distros? A:: There are two UK ISPs that specialise in Linux users: UKLinux.net and UK Free Software Network (www.ukfsn.org). Both of these provide ADSL as well as dialup. Your speed problems do seem a little strange, but they're difficult to get to grips with as you have provided so little information - not even the make of your modem. It would be interesting to compare the modem configurations set by each of the distros. Using a browser to measure download speeds is not the most reliable test, as there are too many variables affecting it, including your ISP's proxy server. A better test would be to try downloading a file with wget. Try this command with each of the distros: --- wget ftp://ftp.mirrorservice.org/sites/ftp.kde.org/pub/kde/stable/3.5.2/src/kdeaddons-3.5.2.tar.bz2 ,,, Any file on a good UK-based FTP server should give a reasonable test. You are not going to see double figures with a 56k modem unless you are downloading compressible data, such as usenet postings or web pages (but not the images). The best you can hope for with compressed data, such as the above file or images, is around 7kBs. Compressed files like this give the truest indication of your connection quality. The times you mention are interesting; 7 pm is a peak time for internet usage in the UK (the web or Emmerdale: you decide) while Sunday morning sees quite low usage. It would also be worth asking BT to test your line. Even if it reports nothing wrong, the act of testing it makes a difference in many cases. When choosing an ISP based on Linux support, you would expect to get such support. I would suggest that you open a dialup account with UKFSN (it's pay as you go) and ask both ISPs for help with your connection speeds. The one that is most helpful should be the one to get your broadband business. 
As for your question about networking two computers with different distros, it's just the same as two computers running the same Linux variant. While the configuration tools may vary, most distros are very similar at heart. NFS, HTTP, Samba or whatever you want to network with all work the same on all distros. Back to the list ****** Fix authentication issue on mail server with Telnet Q:: I am trying to fix an authentication issue with my mail server, and the only way I have been able to test it is by setting it up in Evolution. Is there any way that I could try without having to set up an account in Evolution? A:: One of the best ways to test a range of different services, including SMTP AUTH, is to use Telnet. Now, I would never recommend normal Telnet to log in to a machine, but for testing some services it is invaluable. To troubleshoot your problem, what we want to do is to connect to the mail server on port 25 and authenticate using strings encoded in Base64 (read about it at http://en.wikipedia.org/wiki/Base64). First, a few helpful strings, encoded with www.dillfrog.com/tools/base-64_encode: --- 'VXNlcm5hbWU6' decodes to 'Username:' 'UGFzc3dvcmQ6' decodes to 'Password:' 'dGVzdF9seGZAcmV6ZC5jby51aw==' decodes to 'test_lxf@rezd.co.uk' 'Zm9vYmFy' decodes to 'foobar' ,,, The following lines are the dialogue to test that the server is authenticating. We use Base64 encoding for some of the strings, as detailed above. First, Telnet to the mail server domain/IP address (ie mail.rezd.co.uk or 10.0.0.1) on port 25: --- telnet 10.0.0.1 25 ,,, The server will answer with an SMTP banner: --- Trying 10.0.0.1... Connected to mail.rezd.co.uk (10.0.0.1). Escape character is '^]'. 220 mail.rezd.co.uk ESMTP ,,, Issue the EHLO command: --- EHLO other.domain.rezd.org.uk ,,, Next, the server tells us what it supports. This can vary from mail server to mail server. --- 250-mail.rezd.co.uk Hello other.domain.rezd.org.uk [192.168.0.1], pleased to meet you 250-ENHANCEDSTATUSCODES 250-PIPELINING 250-8BITMIME 250-AUTH DIGEST-MD5 CRAM-MD5 LOGIN PLAIN 250 HELP ,,, Authenticate to the mail server with --- AUTH LOGIN ,,, It sends out the username prompt: --- 334 VXNlcm5hbWU6 ,,, Now we send the name of the user that we are going to authenticate as, eg test_lxf@rezd.co.uk: --- dGVzdF9seGZAcmV6ZC5jby51aw== ,,, Next it asks for the password: --- 334 UGFzc3dvcmQ6 ,,, And we supply it: --- Zm9vYmFy ,,, And finally it says yes, so we know that authentication is working: --- 235 2.0.0 OK Authenticated ,,, If we get the following, we know that there is an issue with the authentication in some way: --- 535 5.7.0 authentication failed ,,, This is sufficient to test authentication; if we wanted to test sending mail we could continue the SMTP dialogue. Back to the list ****** Setting up a Pipepanic program launcher in KDE Q:: I have installed the Pipepanic game into my home directory and can run it by typing ./pipepanic in a console with --- cd pipepanic-0.1.3-source ./pipepanic ,,, However, I can't work out how to add this as an item to the K menu using the Edit K Menu section of KDE Control Centre. I don't know what command to put in the Command box. If I put /home/marrea/pipepanic-0.1.3-source/pipepanic in the box, and then go back and click on the Pipepanic entry I've added to the K menu, all that happens is that the little hourglass with Pipepanic on it goes round and round in the Kicker bar and a gear wheel icon bounces up and down for 30 seconds or so and then they both disappear. Is it because I have installed Pipepanic in my home directory? A:: It is failing because you are not in the pipepanic directory when running it from KDE. The program needs to be run from its own directory in order to find files it needs. You can fix this by setting the Work path to /home/marrea/pipepanic-0.1.3-source/ in KDE's menu editor. This effectively adds a cd, as you did in the shell.
You may also need to specify the full path in the command box. The safest way to make sure both of these are correct is to use the file selector icons to the right of the boxes. If you tick the Run In Terminal box, you will see any output from the program and, hopefully, get a clue as to where it goes wrong. That's how I saw it was failing to find a file. You will need to add something like ; sleep 5 to the end of the command, to keep the terminal window open for a few seconds after it exits. For example, --- /home/marrea/pipepanic-0.1.3-source/pipepanic; sleep 5 ,,, Back to the list ****** Rsync backups: excluding directories Q:: I've been trying to set up an rsync script to back up the important contents of my home directory to a USB drive, and I'm having great difficulty whipping it into shape. Particularly confusing is how to use --exclude-from and (even more confusing) --include-from. I'm on an Ubuntu 6.06 system, with rsync 2.6.6. Here's an outline of what I want to happen. First, all of the non-hidden files, directories and their subdirectories etc in my home directory /home/dcoldric are to be backed up, except that for the directory /home/dcoldric/MyDownloads, I don't want any subdirectories to be included, just non-directory files. Another exception is that there is a very limited number of non-hidden subdirectories - such as /home/dcoldric/cxoffice/ - that I do not want to back up. All of the hidden files and directories are to be ignored, except for a few. For example, I do want to back up /home/dcoldric/.netbeans and subdirectories, as well as .bashrc and .bash_aliases. Finally, I'd like the directory structure of the backup to mimic that of the original (except for the ignored directories). I have tried just about everything I can think of, to no avail. 
My latest variant looks like: --- rsync -a --delete --safe-links --exclude-from=/home/dcoldric/bin/backupExcludes /home/dcoldric/ /media/USB/backup/dcoldric ,,, where the backupExcludes file currently looks like this: --- - /* + /dcoldric/ + /dcoldric/.Creator/ + /dcoldric/.java/ + /dcoldric/.mozilla/ + /dcoldric/.mozilla-thunderbird/ + /dcoldric/.netbeans/ + /dcoldric/.bashrc + /dcoldric/.bash_aliases + /dcoldric/MyDownloads/ - /dcoldric/MyDownloads/*/ - /dcoldric/.* - /dcoldric/cxoffice - /dcoldric/jdk* - /dcoldric/sun - /dcoldric/SUNW* ,,, However, it appears to do nothing. A:: The rsync command copies everything by default, so the --exclude option tells it what to skip. It may be clearer to think of --include as --do-not-exclude. The exclude-from file you have given is actually a filter file. Filtering provides more control, but it does not have a --filter-from variant. The correct way to use a filter file is with the option --- --filter="merge myfilterfile" ,,, Your current filter file does not work because it starts with - /*, which excludes everything. So when you say it does nothing, you and the program are quite correct - because that is just what you told it to do. The first match counts, so move - /* to the end. When a filter path starts with a /, it is matched relative to the source directory, which here is ~dcoldric. So you need to remove /dcoldric from the start of each path, otherwise you are trying to match /home/dcoldric/dcoldric/.mozilla and so on. Although it doesn't affect your current filters, you should be aware that --- + /foo/bar/ - /* ,,, will match nothing. Because /* excludes everything in the base directory, the contents of foo are never checked, so foo/bar is not found.
You need to force rsync to scan foo with --- + /foo/ + /foo/bar/ - /foo/* - /* ,,, A working filter file would be --- + /.netbeans/ + /.bashrc + /.bash_aliases - /MyDownloads/*/ - /.* - /cxoffice - /jdk* - /sun - /SUNW* ,,, Call this with --- rsync -a --delete --safe-links --filter="merge ~dcoldric/bin/backupFilter" ~dcoldric/ /media/USB/backup/dcoldric/ ,,, Note the trailing / on the destination: this can affect the result. Back to the list ****** VMware: choosing the right Linux version for Fedora Q:: I bought Fedora and now I want to install it on VMware (in a Windows XP host). In VMware there are a few alternatives for Red Hat, such as 'Red Hat Linux' and 'Red Hat Enterprise Linux 2, 3 & 4'. I guess I can rule out the plain 'Red Hat Linux' alternative, but for this version of Fedora, which one of the others should I choose? It could be important, since VMware's own VMware Tools greatly enhances the flexibility of guest operating systems' screen, mouse and pointer. Still, this facility hasn't yet worked for me on any other Linux distro that I've tried. A:: Almost every variant of Linux that I have tried to install on VMware - and I have tried a lot - has installed successfully, even if the specific distribution is not listed. Most of the time I use 'Other Linux 2.6.x kernel', but for Fedora I choose the plain 'Red Hat Linux' option. This causes no problem with installing VMware Tools as described on page 142 of the VMware Workstation manual (which you can download from www.vmware.com/support/pubs/ws_pubs.html). The steps are: Remove any mounted CD/DVD discs. Select VM > Install VMware Tools from the VMware menu. Open the CD-ROM drive in the guest operating system. Double-click on the VMware Tools RPM file. Give the root password when prompted. Run vmware-config-tools.pl from a root terminal. You may need GCC installed for the final stage, if it needs to compile a module for your kernel.
This is necessary if the installer does not have a pre-built module for your kernel, as is the case with Fedora. Back to the list ****** Best security tips for Apache Q:: I've just built an Apache web server to host some websites externally. Can you give me some general security tips? A:: Aside from securing the pages via HTTP authentication or SSL where applicable, there are a number of things you can do in the httpd.conf file, as the default configuration can provide a potential attacker with some specific information to help them target their attack. Firstly, make absolutely sure the ServerTokens directive is set to Prod. At its default value it will reveal the version of Apache you are using, the other modules you have loaded and potentially your operating system. While security through obscurity isn't something to rely on, if you do fall behind with your updates you don't want to give away too much information. To see what your server is currently giving away, try executing --- curl -I http://yourwebserver ,,, Also make sure the ServerSignature directive is set to EMail or Off - this will keep your version details off Apache's error pages. Do you want your users to have their own web-accessible folders? No? Then disable the userdir module. Similarly, are you using CGI? If not, remove the cgi-bin alias from the config. One other thing to be wary of is the Apache manual, which is sometimes aliased by default. Make sure directory indexes are forbidden, by removing Indexes from (or prefixing it with - in) the Options line of your <Directory> directives. If you are running PHP, ensure the expose_php directive in your php.ini file is set to Off. If other people are publishing content to your web server you may also want to make sure that they do not override certain settings with a .htaccess file. Within the root <Directory> directive, set the AllowOverride directive to None, AuthConfig or another limited value; do not set it to All.
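Pulled together, the httpd.conf changes suggested above might look like the following fragment. This is only a sketch: the /var/www/html path is a placeholder for your own DocumentRoot, and Apache config does not allow trailing comments, so each comment sits on its own line.

```apache
# Reveal only 'Apache' in the Server header
ServerTokens Prod
# Keep version details off server-generated error pages
ServerSignature EMail
# No directory listings, and no .htaccess overrides
<Directory "/var/www/html">
    Options -Indexes
    AllowOverride None
</Directory>
```

After restarting Apache, rerun the curl -I check above to confirm the Server header has shrunk to just 'Apache'.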
Back to the list ****** Using serial terminals in Linux Q:: I maintain some ancient industrial hardware, and have some simple test software I wrote many years ago in Quick Basic, and monitor the test results using HyperTerminal, set up to emulate a DEC VT100 using COM1 (9,600 baud). I also use a Thurlby LA160 logic analyser and a Velleman PC oscilloscope, all running under Windows. Can you tell me how I obtain a similar VT100 terminal display on Linux? Do I need to master Wine to run the Thurlby and Velleman software under Linux - and what about Quick Basic (compiled) programs? My current system is a dual-boot Windows ME and SUSE 10.0 machine. A:: The Linux serial ports are numbered from /dev/ttyS0, which is equivalent to COM1. You may also have a link from /dev/modem to /dev/ttyS0. The usual replacement for HyperTerminal is Minicom, which is available with most distros, including SUSE 10.0. Minicom has a VT100 emulation mode, so it should do exactly what you want. The SUSE package does not set up global defaults, so you'll have to run --- minicom -s ,,, as root first. You also need to be a member of the uucp group in order to write to the serial device. You can set this in Yast > Security And Users > User Management, but you have to log out of KDE and back in for the change to take effect. It's likely that you'll need to use Wine to run any proprietary software, but this will use /dev/ttyS0 as COM1, so it will still be able to access your hardware. Your Quick Basic software will also require Wine to run as is, but it may be easier in the long run to port it to something like Gambas, a Linux equivalent of Visual Basic, or a language that runs on both platforms, such as Python. Back to the list ****** Add more swap space in Linux without any unpartitioned space Q:: I need to add more swap space to my Linux machine but I don't have any unpartitioned disk space. Is there anything I can do?
A:: GNU/Linux is a lot more flexible than other operating systems in a lot of respects, including swap space. First off, work out how much additional swap space you need. For argument's sake, let's say you want another 1GB of swap. Next, identify a partition on your system that has at least that amount of space free and won't be needing it any time soon. When I built my system, for example, I gave it a 4GB /opt partition which had only 1.5GB utilised. Then it's time to create the file you are going to use for swap. To do this you need to use the dd command, which takes various arguments including a block size argument and a count argument. To create a file 1GB in size, use the following command: --- dd if=/dev/zero of=/opt/swapfile bs=1G count=1 ,,, This command will write a 1GB file at /opt/swapfile. The if switch specifies the input source while the of switch specifies the output file. Next up you need to format it as a swap file: --- mkswap /opt/swapfile ,,, Once this has been set up as a swap file you need to activate it by executing --- swapon /opt/swapfile ,,, You should be able to see it active on the system by executing cat /proc/swaps or simply free at the command line. To enable the swap during the boot process, add it to your /etc/fstab file: --- /opt/swapfile swap swap defaults 0 0 ,,, Back to the list ****** ADSL network connection with Xircom card not working in Mepis Q:: I recently installed Mepis. Unfortunately, the network connection with the internet just does not work. Do you have any clue to how I can get my ADSL connection working with Linux? I have a Xircom Creditcard Ethernet 10/100 + Modem 56. In the connection settings I found that the address type was assigned by DHCP. I tried copying the other settings, including the IP address, subnet mask, standard gateway and the DNS, but it did not seem to do much good. A:: The network side of this card is handled by the xirc2ps-cs module. This is included with Mepis.
First, check whether the card has been detected. Open a terminal and type --- su - #give root password when prompted lsmod | grep xirc ,,, If you get no output, the module is not loaded, so type --- modprobe xirc2ps-cs ,,, No output from this command means everything is as it should be. Now run --- ifconfig ,,, to see a list of your network interfaces. There should be two: lo and your network interface, which I expect will be eth0. Now start the Mepis OS Centre from the KDE menu, go to the Network section and pick your network interface. Select Use DHCP For IP and also select DHCP under the DNS tab. Now your network should be configured automatically. If the card does not start automatically when you boot, you should type --- echo "xirc2ps-cs" >>/etc/modules ,,, This adds the name of the module to the list that the system automatically loads when it boots. Back to the list ****** eth0 error messages Q:: After installing Mandrake 10.1, eth0 is running well. However after reboot, I get this message: "Bringing up eth0: FAILED". Help! --- %cat /etc/resolv.conf search nsw.optushome.com.au nameserver 203.2.75.132 nameserver 198.142.0.51 %lspci | grep Ethernet 00:0b.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8129/8139C/8139C+ (rev 10) 00:0c.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8129/8139C/8139C+ (rev 10) %ifconfig eth0 192.168.0.11 %ping 192.168.0.5 PING 192.168.0.5 (192.168.0.5) 56(84) bytes of data. From 192.168.0.11 icmp_seq=1 Destination Host Unreachable From 192.168.0.11 icmp_seq=2 Destination Host Unreachable From 192.168.0.11 icmp_seq=3 Destination Host Unreachable --- 192.168.0.5 ping statistics --- 5 packets transmitted, 0 received, +3 errors, 100% packet loss, time 3998ms, pipe 3 %ping 192.168.0.11 PING 192.168.0.11 (192.168.0.11) 56(84) bytes of data.
64 bytes from 192.168.0.11: icmp_seq=1 ttl=64 time=0.065 ms --- 192.168.0.11 ping statistics --- 1 packets transmitted, 1 received, 0% packet loss, time 0ms rtt min/avg/max/mdev = 0.065/0.065/0.065/0.000 ms %ifconfig eth0 Link encap:Ethernet HWaddr 00:02:44:11:DD:24 inet6 addr: fe80::202:44ff:fe11:dd24/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:195 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:33386 (32.6 Kb) Interrupt:9 Base address:0x9f00 eth0:9 Link encap:Ethernet HWaddr 00:02:44:11:DD:24 inet addr:127.255.255.255 Bcast:127.255.255.255 Mask:255.0.0.0 UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 Interrupt:9 Base address:0x9f00 eth1 Link encap:Ethernet HWaddr 00:50:22:E9:8E:A4 inet6 addr: fe80::250:22ff:fee9:8ea4/64 Scope:Link UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:23 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 b) TX bytes:2538 (2.4 Kb) Interrupt:1 Base address:0xae00 lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:243 errors:0 dropped:0 overruns:0 frame:0 TX packets:243 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:20570 (20.0 Kb) TX bytes:20570 (20.0 Kb) %ifup eth0 Determining IP information for eth0... done.
/sbin/ifup: line 433: 7771 Hangup /etc/init.d/tmdns reload >/dev/null 2>&1 % /etc/init.d/network status Configured devices: lo eth0 Currently active devices: lo eth1 %time /etc/init.d/network restart Shutting down interface eth0: [ OK ] Shutting down loopback interface: [ OK ] Setting network parameters: [ OK ] Bringing up loopback interface: [ OK ] Bringing up interface eth0: [ OK ] 1.90user 0.66system 1:38.44elapsed 2%CPU (0avgtext+0avgdata 0maxresident)k 0inputs+0outputs (0major+64810minor)pagefaults 0swaps %ping 192.168.0.11 connect: Network is unreachable %cat /etc/sysconfig/network-scripts/ifcfg-eth0 DEVICE=eth0 BOOTPROTO=dhcp ONBOOT=yes MII_NOT_SUPPORTED=yes NEEDHOSTNAME=yes ,,, A:: For both eth0 and eth1, there are packets being transmitted but nothing being received. This suggests that DHCP requests are sent out but not responded to. Running dhclient from the command line to manually request an IP address for eth0 will output useful information, such as link failures or errors. Why there is a 127.255.255.255 address on eth0:9 is anyone's guess. Check /etc/sysconfig/network-scripts and remove ifcfg-eth0:9; that address will interfere with traffic on the loopback address 127.0.0.1, since the 255.0.0.0 netmask includes that IP. While the kernel does detect the Ethernet card, that doesn't mean it's working correctly. Running dmesg will show any kernel messages indicating a timeout or other driver failure causing issues with DHCP. Back to the list ****** Setting up a trash can on the Linux desktop Q:: I have two questions, both concerning forgotten program names. The first follows on from a major deleting 'oops!' I had recently. (Computers don't do what you want them to do, they do what you tell them to do, and I really need better protection from my own fallibility.) I once read, possibly in your own pages, about an undelete daemon.
Any file delete command was intercepted - it must redefine the system unlink call or something - and converted to move the file to a trash folder. Then instead of immediately emptying the trash, as most people do, you find you can't. It persists until the trash totals a predefined size, or free space starts to fall below another threshold. The other issue is that I'm a web developer, and need to test on a wide variety of browsers. I have heard of a GTK and KHTML browser project, which would be very useful if only I could remember its name. I don't want to install Konqueror because it depends on just about all of KDE's bloat. I don't need a full-featured browser. Just something lightweight will be fine. A:: The trash can program you are thinking of may be Delsafe, from http://delsafe.cjb.net. This works much as you describe, replacing library calls to move deleted and overwritten files to a trash can instead of deleting them. Multiple deletions or overwrites of the same filename are timestamped, and an undel program is provided to recover the files. Another possibility is libtrash, from http://pages.stern.nyu.edu/~marriaga/software/libtrash, which offers similar features. I suspect the KHTML project you are thinking of is Gtk+ WebCore (http://gtk-webcore.sourceforge.net). This is at an early stage and may not be representative enough for your needs. I would suggest that to properly test pages in Konqueror, you need Konqueror itself, especially if your pages use JavaScript. This isn't so bad, because you do not need to install most of KDE to use Konqueror. All you need is the kdelibs package and Konqueror itself. Most distros now split the KDE packages, so you can install just Konqueror instead of the whole of kdebase (as used to be the case). Back to the list ****** Update DansGuardian blacklist files automatically Q:: My question to you is about DansGuardian blacklist files. 
As an unknown number of websites are registered on a weekly basis, the need arises for a sysadmin to keep their blacklist files up to date. Not many of us have the budget to do this on a regular basis. Is it possible to have my blacklist files automatically updated by using spiders and crawlers? If this is possible, how can I achieve this, and what is the potential harm or gain to my setup? Perhaps you could tell me what is the minimum recommended spec for running DansGuardian comfortably. PS: What would it take to set up a Linux user group for Nigeria? A:: The first point to bear in mind is that DansGuardian does not work purely on blacklists. It is a content filter, so its main work is done by checking the content of pages. However, it helps to keep your phraselists up to date as well, as site creators try to work around existing filtering restrictions. You can get updated phraselists from http://contentfilter.futuragts.com/phraselists. Using a spider to generate your own URL lists would be hugely expensive in terms of bandwidth, as you would be checking sites you would never visit, and it would still only use your phraselists. It is possible to download updated URL blacklists, and although some of these are commercial, others are free. The commercial lists are often amalgamations of free lists - you're just paying someone to do the work for you. There are a number of scripts on the DansGuardian website (in the Extras & Add-Ons section) that will download and install updated blacklists for you, and you can also get them from the Squidguard site at www.squidguard.org/blacklist. The required specs depend on your usage. For a home network, the requirements are minimal. The main burden on the system seems to be loading up the rules when it starts up, so a decent amount of memory is more important than a fast processor. This also depends on what else is running on the computer. 
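To automate the download-and-install approach described above, a cron job is the usual route. The following /etc/crontab fragment is only a sketch: the tarball URL is a placeholder you should replace with a real link from www.squidguard.org/blacklist or the DansGuardian Extras & Add-Ons scripts, and /etc/dansguardian/lists is an assumption about where your lists live - adjust both for your system.

```
# Placeholder - substitute a real blacklist tarball link from the sites above
BLACKLIST_URL=http://example.org/blacklists.tar.gz
# Fetch and unpack the lists every Sunday at 4am, then restart
# DansGuardian so it reloads them
0 4 * * 0  root  wget -q -O /tmp/blacklists.tar.gz "$BLACKLIST_URL" && tar xzf /tmp/blacklists.tar.gz -C /etc/dansguardian/lists && /etc/init.d/dansguardian restart
```

Sunday morning is a sensible slot, as reloading the lists is exactly the memory-hungry startup work mentioned above, so it is best done when the proxy is quiet.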
As for starting a user group, all you need is a few people to meet with and a place to meet, or a website and mailing list if your group will only exist in cyberspace. There are no formal requirements, just a number of people with a shared interest in Linux. Some groups have more formal meetings, with demonstrations by members; others just get together in a pub to chat about Linux and other matters of interest. You might find the articles at http://en.tldp.org/HOWTO/User-Group-HOWTO.html and http://linuxmafia.com/faq/Linux_PR/newlug.html useful. Back to the list ****** Create a load balancer in Linux Q:: My company has a number of web servers that we use for intranet/internet hosting. We want to load balance the traffic but don't want to either buy a load balancer or use round robin DNS. Can I do it with Linux? A:: Yes! For a while GNU/Linux has benefited from the Linux Virtual Server project (www.linuxvirtualserver.org), the code for which, ipvs, has been included in recent kernel releases. If you are using a kernel older than 2.4.28 you may need to patch and recompile your kernel source, though. You can tell if ipvs is enabled with --- cat /proc/net/ip_vs ,,, If that file does not exist, try to load the module by executing --- modprobe ip_vs ,,, Assuming the module loads or has been compiled into the kernel, you are ready to go! There are three choices when it comes to the implementation of LVS within your network: direct routing, tunnelling or NAT (Network Address Translation). NAT is by far the easiest to configure but may require an extra layer of networking. Direct routing is the fastest and will work in a flat network, but can cause configuration issues with the receiving web server. Assuming you are going to use NAT, your new load balancer will need two network cards, one within the network in which your web servers are located, the other in a DMZ (demilitarized zone)/external network - in short, the network your HTTP requests are sent to.
Let's assume your external network is 10.1.0.0 and your web server network is 192.168.1.0. Assign the machine unused addresses in each, such as 10.1.0.1 and 192.168.1.1, then configure the routing table on each web server to use it as its default gateway: --- route add -net 0.0.0.0 netmask 0.0.0.0 gw 192.168.1.1 ,,, At this point you need to configure how the LVS will forward traffic to each machine. There are a number of load-balancing algorithms, including round robin, least-connection scheduling and destination hashing scheduling. To find out how each works, check out the LVS website. For now, we are going to set up round-robin load balancing. This simply sends traffic to each web server in turn, but the configuration of the other algorithms is much the same. In order to manipulate the ipvs/LVS table you need to use the ipvsadm binary. This is already installed on most modern Linux distributions (it was released in July 2003) but you may need to compile it if you are using something older. The first step is to set up the VIP, or virtual IP address - the IP address your requests will be received on. For now we will assume it is the address you allocated to the server earlier in the 10.1.0.0 network: --- /sbin/ipvsadm -A -t 10.1.0.1:http -s rr ,,, Now add your web servers to the VIP (insert your own IP addresses): --- /sbin/ipvsadm -a -t 10.1.0.1:http -r 192.168.1.10:http -m -w 1 /sbin/ipvsadm -a -t 10.1.0.1:http -r 192.168.1.11:http -m -w 1 /sbin/ipvsadm -a -t 10.1.0.1:http -r 192.168.1.12:http -m -w 1 ,,, This adds all three web servers to the VIP with a weight of 1 (see the -w switch). If you have a server you want to get more traffic, simply increase the weight on a per-server basis. If you want it to not take any traffic at all, set its weight to 0.
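The commands above can be gathered into one small script. This sketch only prints the ipvsadm commands rather than running them, so you can review the output before piping it to a root shell; the addresses are the example values used in this answer.

```shell
#!/bin/sh
# Emit the ipvsadm commands for the round-robin NAT setup described
# above. Review the output, then apply it with:  ./lvs-setup.sh | sh
VIP=10.1.0.1                                    # external virtual IP
REALSERVERS="192.168.1.10 192.168.1.11 192.168.1.12"

lvs_commands() {
    # Clear any existing table, create the virtual service, then add
    # each real server with -m (NAT/masquerade) and an equal weight
    echo "/sbin/ipvsadm -C"
    echo "/sbin/ipvsadm -A -t ${VIP}:http -s rr"
    for rs in $REALSERVERS; do
        echo "/sbin/ipvsadm -a -t ${VIP}:http -r ${rs}:http -m -w 1"
    done
}

lvs_commands
```

Keeping the server list in one variable means adjusting a weight, or adding and removing a real server, is a one-line change; /sbin/ipvsadm -L -n will then show the resulting table.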
Back to the list ****** Mount failing: bad superblock error message Q:: I tried to mount one of my extra disks the other day and got the following error message: --- mount: wrong fs type, bad option, bad superblock on /dev/hda1, or too many mounted file systems ,,, When I tried to scan the disk with fsck I got this message: --- fsck.ext3: No such file or directory while trying to open /dev/hda1 ,,, The superblock could not be read or does not describe a correct ext2 filesystem. I'm going to replace the disk but would like to recover the data. Is there any way to do it? A:: Luckily, yes! The ext2 and ext3 filesystems have backup superblocks stored at regular intervals throughout the disk; you simply need to find out where they are and specify one of them to fsck when you repair the filesystem. Their position depends on the size of the partition created. The easiest way to locate them is to rerun mke2fs specifying the -n switch. This will cause mke2fs to do nothing but tell you what it would do. --- mke2fs -n /dev/hda1 ,,, The info that this gives you will include a list of locations that superblocks are stored at throughout the filesystem. Using that info you can instruct fsck to repair the filesystem using one of the backup superblocks. --- fsck -b 8193 /dev/hda1 ,,, where 8193 is the backup superblock you observed from the previous command. Once repaired, you should be able to mount the filesystem as usual. Back to the list ****** Installing drivers for two separate video cards Q:: I use a dual-boot XP and Ubuntu machine at work, which has two monitors. I have two monitors at home and have set up a replica of my workstation. Everything works wonderfully, mostly thanks to the great article you guys did about dual head some months back - except that on my home machine it is desperately slow. At work I have a dual-head graphics card thanks to it being PCIe, so I have applied the Nvidia drivers and it all works great with hardware acceleration.
The problem I have at home is that I have an AGP card and a PCI card providing the two video sources. They have different chipsets, and one uses the legacy Nvidia driver set and the other the new Nvidia driver set. I originally thought this would be quite straightforward to resolve, thinking I would just install both sets of drivers and then specify which one to use in the X config file. Unfortunately both sets are referred to as 'nvidia', which means I have to use a combination of the official drivers for one adapter and the standard open one for the other. Needless to say my desktop is now slow and cumbersome. I need a way to install both drivers and then refer to them within my xorg.conf file so that I can use the right driver for the right adapter and my desktop speeds up. My graphics cards are a GeForce FX 5200 (AGP) and a Riva TNT2 Model 64 Pro (PCI). A:: These are not different drivers but versions of the same one, and it is not possible to have two different versions of the same module loaded into the kernel at the same time. This leaves you with a number of alternative choices. You could do as you have already tried to do and use the nv driver for one card, but this is very slow. You could install an older version of the Nvidia driver; one that is compatible with the TNT2. Either the 1.0.6629 or the 1.0.7167 should be suitable here - they are the latest versions that work with legacy cards yet still support the FX5200. This should work for now, but the older Nvidia drivers have a problem with the latest kernels, so a kernel update could break things later. Or you could look for a cheap non-Nvidia card for the second display, or a newer Nvidia card that uses the latest drivers. The simplest solution, though, would seem to be the one you have already used at work. The FX5200 is a dual-head card. All you need is a DVI-to-VGA adapter (mine came with one) unless you are using a monitor with DVI input.
This would enable you to set things up exactly as you did on your work computer. In that case, you could change your xorg.conf file to contain ---
Section "Device"
    Identifier "NVIDIA Corporation NV34 [GeForce FX 5200] (rev a1)-0"
    VendorName "NVIDIA"
    Driver "nvidia"
    BusID "PCI:1:00:0"
    Screen 0
EndSection
,,, Back to the list ****** Installing Mozilla plugins on SUSE Q:: I have just installed SUSE 10.1 and have the following queries. First, where do I install Mozilla plugins? I can't find a mozilla/plugins directory. Second, I created a user during installation but this doesn't have root privileges. How can I achieve this? A:: Mozilla and Firefox plugins can be installed in one of two places, depending on whether you install as root or as a user. System-wide plugins and extensions are stored in /usr/lib/firefox/plugins and /usr/lib/firefox/extensions, respectively. Those installed by a user, which happens when you install directly from a website such as http://plugindoc.mozdev.org or http://addons.mozilla.org, go into the appropriate directory under the user's home directory. This is .mozilla/firefox/xxx.default, where xxx is some random string of characters. You shouldn't normally need to manipulate these files directly; installing, updating and removing extensions can be done from within Firefox. Your normal user does not, and should not, have root privileges - otherwise what's the point of a separate root user? When you need to run something that requires root privileges, Yast (or whatever program you are using) will usually ask for the root password, which you set during installation. The program will switch to root for as long as is needed and then switch immediately back to your normal user. If you need to run a terminal command as root, type ---
su -c "command you want to execute"
,,, to run a single command or ---
su -
somecommand
someothercommand
...
logout
,,, In both cases, you will need the root password. Back to the list ****** Will CentOS work on an AMD CPU?
Q:: I installed CentOS 4.3 on my Pentium III, Windows 98 computer and it worked great. It says I need a Pentium CPU. I also have an AMD FX-53 with Windows XP and Fedora 5. I would like to replace my Fedora 5 with CentOS 4.3 on this computer also, but am afraid to try. Would it be OK to try to install CentOS on my AMD computer? If not, is there a way to do it with some third-party software or will there be an AMD version in the future? A:: That should really say "Pentium-class CPU" - that is, anything compatible with an i586 processor. Pentium is Intel's trademark, but AMD CPUs are compatible. You can run CentOS on your FX-53, but you wouldn't be getting the most out of the chip. The FX-53 is a 64-bit processor, but would switch to 32-bit mode to run the supplied version of CentOS. That will still be faster than most 32-bit CPUs, and it will be fine for trying out CentOS to see whether you like it, but if you want the best performance, you would be better off with the 64-bit version of CentOS, available for download from www.centos.org. Back to the list ****** NFS export keeps hanging without log messages Q:: I'm trying to mount an NFS export from my server to my workstation, and it keeps hanging. There is nothing in any logs to indicate what's going wrong - are there any common causes? A:: NFS relies on a number of RPCs, or remote procedure calls. The key to all of this is the portmap service. This processes RPC requests and sets up connections to the correct RPC. Check that it's running by looking at the process list: ---
[root@test gnump3d-2.9.8]# ps -ef | grep portmap
rpc       2584     1  0 Jul23 ?      00:00:00 portmap
root     30843 30474  0 07:45 pts/4  00:00:00 grep portmap
,,, Assuming your NFS server service is started, you should see the following RPCs running: mountd, nfsd, lockd, statd, rquotad and idmapd. Depending on your distribution these may be started by the startup script for NFS.
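These daemons can be checked in one go with a short shell loop - a diagnostic sketch only, since the exact daemon names vary between distributions and NFS versions:

```shell
# Report whether each RPC service an NFS server relies on shows up
# in the process table. Daemon names differ slightly between distros,
# so treat this as a starting point rather than a definitive check.
check_nfs_rpcs() {
    for svc in portmap mountd nfsd statd rquotad idmapd; do
        if ps -e | grep -q "$svc"; then
            echo "$svc: running"
        else
            echo "$svc: not running"
        fi
    done
}
check_nfs_rpcs
```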
If they aren't, it's possible to start them manually: ---
[root@test]# rpc.mountd
[root@test]# ps -ef | grep mountd
root     30906     1  0 07:54 ?      00:00:00 rpc.mountd
,,, You can repeat this for the other services, but you will need to add them to the appropriate startup script to ensure that they are restarted at boot time. Back to the list ****** Updating RPMs on SUSE from DVDs Q:: I am quite a new user of Linux and have just installed SUSE 10.1. When I tried to use Amarok on this installation I got no sound, even though the soundcard appeared to be working (it sounds at startup, for instance). I noticed a new version on a newer DVD so I have tried to install it through Yast. Can you tell me how I specify the new program to Yast? I know this must be a very basic question but at the moment I do not know how to do it. A:: There are two separate questions here: one about Amarok and one about software installation. Look at the status bar at the bottom left of the Amarok window when you try to play a song - this will give you some feedback. If the song appears to be playing but you hear nothing, open the mixer (usually a speaker icon in the taskbar) and make sure that the volume controls are set high enough. If Amarok refuses to play the song, it is probably your sound engine configuration at fault. Look in the Engine section of the Settings window. If this is set to 'aRts' and you are running a Gnome desktop, you are unlikely to hear anything, because aRts is the KDE sound engine. The best setting for this, in terms of both quality and compatibility with all desktops, is Xine. You may also need to set the output plugin - Autodetect normally works; if not, set it to ALSA. Yast is more suited to installing software from the repositories that it knows about. These include the SUSE directory of the install disc and any online update servers that may have been added automatically during installation, or manually by you later.
You can tell Yast to install from individual RPM files from the command line, as root, with ---
su
yast2 --install /media/LXFDVD82/Sound/AmaroK/SUSE/*.rpm
,,, to ask it to install the packages from the DVD. However, Yast doesn't handle dependencies when run this way and may well fail without telling you why. It is better to use the rpm command directly: ---
su
rpm -Uhv /media/LXFDVD82/Sound/AmaroK/SUSE/*.rpm
,,, It may still fail, but at least it will tell you what is missing. A more satisfactory solution is to add a repository to Yast containing the newer software. You can find a list of such repositories, along with instructions for adding them to Yast, at http://en.opensuse.org/Additional_YaST_Package_Repositories. Back to the list ****** Reduce the number of identical log messages Q:: Can you help me reduce the number of identical log messages? When I first started using Linux there were lines of 'message repeated x times', but these have become rare. The problem is not really the size of the files but the difficulty of finding important single messages. I have appended below some of the common sequences that occur with Mepis 3.4. The first group of messages comes from my Zip drive breaking up the log through booting and beyond. The larger figure is the total size of the disc. Only the smaller is supplied by the partition table. The second group looks as if something is sending pings at one-minute intervals. So 10.10.10.134 is the local IP address and 10.10.10.91 is remote. The third group produces hundreds of these messages within a few seconds but this occurs only occasionally. You can see the signs of a race condition. There seems to be little effect on the functioning of my machine but I would like to be able to find more serious errors without having to trawl through so much guff.
Here are examples of the messages: ---
Jul 18 19:07:40 localhost kernel: hdd: The disk reports a capacity of 752896000 bytes, but the drive only handles 752877568
Jul 18 19:07:40 localhost kernel: hdd: hdd4
Jul 18 19:13:20 localhost kernel: martian source 10.10.10.255 from 10.10.10.134, on dev eth1
Jul 18 19:13:20 localhost kernel: ll header: ff:ff:ff:ff:ff:ff:00:0a:5e:1d:53:c2:08:00
Jul 18 19:14:00 localhost kernel: [unmap_page_range+217/232] unmap_page_range+0xd9/0xe8
Jul 18 19:14:00 localhost kernel: [unmap_vmas+172/376] unmap_vmas+0xac/0x178
Jul 18 19:14:00 localhost kernel: [unmap_region+125/242] unmap_region+0x7d/0xf2
,,, A:: I can think of three approaches to this. The first is to investigate the cause of the messages and deal with it, preventing them ever appearing. The system.txt file you sent was extremely helpful, as it helps pinpoint the cause of the third set of messages, which occur because you are using a 2.6.15 kernel with an Nvidia graphics card. The solution is to either upgrade to a newer kernel, or install SimplyMepis 6.0. The 'martian' network entries refer to unroutable packets - in this case involving the local broadcast address, 10.10.10.255. You can stop their being logged by running, as root, ---
echo "0" > /proc/sys/net/ipv4/conf/all/log_martians
,,, but it would be a good idea to find the cause first. These could be caused by faulty or misconfigured network equipment, or they could be a sign of someone trying to exploit your computer. If they still occur while your network is disconnected from the internet, the cause is local, otherwise check your firewall. The Zip error may be unavoidable, which brings us to the next approach: filter out everything you don't want to see.
Run the logfile through grep to remove the 'noise' before viewing it, for example ---
grep -v -f /var/log/filter /var/log/messages | less
,,, where /var/log/filter is a file containing the patterns you wish to filter out, one per line, such as ---
localhost kernel: *hdd:
,,, The third approach to try is the most comprehensive, but also the most complex. You can configure the system logger to filter messages into different files (or even /dev/null). Mepis uses sysklogd, which has fairly limited filtering. You could replace sysklogd with syslog-ng and put this in /etc/syslog-ng/syslog-ng.conf to have all messages relating to hdd sent to a separate file. ---
destination messages { file("/var/log/messages"); };
destination d_zip { file("/var/log/zip"); };
filter f_zip { match("hdd"); };
filter f_nozip { not match("hdd"); };
,,, Then replace the line that reads log { source(src); destination(messages); }; with ---
log { source(src); filter(f_nozip); destination(messages); };
log { source(src); filter(f_zip); destination(d_zip); };
,,, The first filter matches all messages about hdd, which are sent to a separate file. The second matches those that don't contain hdd, which go to the standard log. You may need to tweak the search string, but keep it the same for both filters or you could lose messages. Back to the list ****** Configuring a display manager Q:: I'm a command line junkie and I'm yet to decide which GUI I dislike least! What I'd really like to do is set up a system where I can log in as either CLI, KDE or Gnome and automatically get the relevant user interface. Incidentally, that doesn't mean a terminal window for the CLI. Can this be done? Of course it can - this is Linux! So how do I do it? A:: Most display managers, including GDM and KDM, enable you to select the desktop environment you wish to use when you log in. Once you've logged in as each user with the desired environment, it will use the same one each time you log in.
Of course, you can also have a single user and select the desktop environment you want manually when you log in. For the CLI, if you don't want to use xterm, rxvt or Eterm, you can simply hit Ctrl+Alt+F1 and switch to a virtual console to log in. A nice, simple window manager, such as twm or fvwm, would be adequate if you wanted to run multiple terminal windows and cut and paste between them. When learning how to use Linux, a pure CLI is often a little complex and lacks the familiar look and feel for those moving from a Windows environment. Back to the list ****** OpenSUSE not printing on HP 1200 Q:: I recently installed a version of SUSE (SLICK). It all works fine but for the fact that it will not print from any application. My printer (an HP 1200) is recognised correctly but when I try to print, the jobs are processed and wait in the printer queue indefinitely - any ideas? Also, I am trying to find a flat-bed scanner for home use (not too expensive) that will work with, say, Xandros (or SUSE). Linux scanner compatibility seems a problem. A final question: why is the /dev directory such an apparent mess? Why not have the software interrogate the hardware and create the device files in /dev as required? Extra device files could be manually added if needed. A:: It is difficult to say exactly what is wrong with your printer setup without more information. Did a test print work when you first set up the printer? The best source of information is the CUPS error log. Run this command in a terminal: ---
tail -f /var/log/cups/error_log
,,, If you get an error message about inability to read the file, use su to log in as root then run it again. Now try to print something and you will see messages from the CUPS print system in the terminal. The error messages should help you find the cause. It is possible that the printer is simply disabled (this happens after an error). To fix this, you would clear the print queue and try again.
You can do this from the Gnome or KDE print manager, or from the command line with ---
/usr/bin/enable PrinterName
,,, This should be done as root, and you must give the full path to the command. Scanner support in Linux is good these days, using the SANE (Scanner Access Now Easy) system. The website (www.sane-project.org) has a comprehensive list of supported scanners. If you want a personal recommendation, I bought a Canon LiDE 60 a few months ago. It gives good scan quality and works well with SANE. There is no support for the buttons on the front of the scanner (yet) but scanning from applications gives excellent quality. Many of the device nodes in /dev are created on demand. Plug in a scanner, printer or USB stick and its device node appears; remove it and it disappears. The /dev directory looks busy because there are so many device nodes used by the system, even though users may remain blissfully unaware of them. A static /dev directory used to be the norm, but modern Linux systems use udev to create device nodes in response to hardware detection. Back to the list ****** What exactly is the FHS (Filesystem Hierarchy Standard)? Q:: I've heard the term FHS bandied about. What exactly is it and what is it for? A:: The FHS or Filesystem Hierarchy Standard is a set of requirements or guidelines for where files and directories are located under Unix systems, and what some system files should contain. For instance, it advises that "applications must never create or require special files or subdirectories in the root directory", so that root partitions can be kept simple and secure to administer. Most Linux distributions adhere to the FHS loosely, which is why the filesystem layout is fairly similar from one distro to another. Each of the folders in the FHS has a defined purpose.
For example, /dev contains entries referencing devices attached to the system, /lib houses libraries required to run binaries in /bin and /sbin, while /usr holds most of the binaries and libraries which are used by you, the user, and as such is one of the key folders in any Linux system. In a nutshell, the FHS is essential to the organised chaos within Linux. It means that users like you can come to expect certain directories in certain locations, and it also means that programs can 'predict' where files are located. The first filesystem hierarchy for Linux was released in 1994. In 1995, this was broadened to cover other Unix-like systems and take in BSD knowhow, and was renamed FHS. It is overseen by the Free Standards Group, which also runs the Linux Standards Base project. While all distros stick to the principles of FHS, some use the layouts in slightly different ways, or omit some of the usual directories, which is one of the reasons why different Linux systems are sometimes incompatible. Back to the list ****** Using convert instead of mogrify to resize images Q:: When using mogrify to resize and change the format of a collection of images, how do I set a target directory for the output, and also make the name of the file contain a numeric string as a timestamp? I work with groups of young kids and things can get very busy. Often I am stalled by opening a digital photograph in Gimp, resizing it and saving it to the $HOME/.tuxpaint/saved/ directory as a PNG file so they can use it with Tux Paint. But the delay means that the other kids are left waiting. So far, my command would look something like this: ---
mogrify -antialias -geometry 448x376 -format png digicampic.jpg
,,, but this does not place the finished file into $HOME/.tuxpaint/saved/, and I would also like the command to rename the file with a timestamp such as 20060719162549.png. A:: Firstly, well done for getting kids working with Linux at such an early age.
The more that grow up realising that Windows is one choice of several and not compulsory, the better. ImageMagick's mogrify command is for modifying images in place, so saving to another directory is out. For this, you need the convert command from the same package. This should do what you need: ---
for PIC in *.jpg
do
  convert -antialias -resize 448x376 ${PIC} $HOME/.tuxpaint/saved/$(date +%Y%m%d%H%M%S).png
done
,,, The main problem with this is that you could end up overwriting one picture with the next if they are processed within a second of each other. You could get around this by testing if a file of the same name already exists and adding an extra digit to the name if it does. But as you are using the time of conversion, not the time at which the picture was taken, you could simply pause for a second if there is a clash. ---
for PIC in *.jpg
do
  while true
  do
    DEST=$HOME/.tuxpaint/saved/$(date +%Y%m%d%H%M%S).png
    [ -f ${DEST} ] || break
    sleep 1
  done
  convert -antialias -resize 448x376 ${PIC} ${DEST} && mv ${PIC} done/
done
,,, This version also moves the picture to another directory (done/ in this example) if the conversion is successful, so you can run the command again to process newly added images. If you want to use the time the photo was taken in the filename, replace the $(date... part of the command with ---
$(date -r ${PIC} +%Y%m%d%H%M%S).png
,,, This will use the last modified time of the file for the timestamp. The date man page details the various options. A more sophisticated approach would involve reading the EXIF information from the picture. There are a number of programs for this - I prefer Exiftool (www.sno.phy.queensu.ca/~phil/exiftool). Back to the list ****** Install Gnome and KDE on the same distro Q:: I have installed Fedora, but now I'm in a quandary. I don't know which is better: Gnome or KDE. Can I have them both installed on the same computer? Also, I tried to download K3b but I could not install it. Do you know why?
A:: Yes, it is possible to have more than one desktop environment installed on your computer. Below the username box on the Gnome login screen, there is a menu named Session. This lets you choose which of the installed desktop environments you load. If your system is set up to boot straight into Gnome, select Log Out from the System menu and you'll see the Session menu in the login screen. Of course, you have to have KDE installed for this to work, but that's as easy as selecting the KDE group from the package manager. The most likely reason why K3b failed to install is that you don't have the KDE libraries installed. You don't have to be running KDE to use K3b, but you do need the basic KDE libraries available. Similarly, when you are running KDE, you will still be able to use Gnome programs, because you have the Gnome framework installed. Back to the list ****** What does the /proc directory do? Q:: When I issue a mount command, I see a filesystem called /proc that is not on my hard drive. Can you tell me what it is, and why it is there? A:: On a typical Linux system, when you issue the mount command you will see at least two filesystems that don't appear to be accessible in the normal way. The first of these is the /proc filesystem, and the second will show as something like 'none on /dev/shm'. As you may know, /dev/shm is a filesystem that is used to manage virtual memory on your system, and doesn't create anything on your local disk. The /proc filesystem contains virtual files that are like a window into the current state of the running kernel. It does not occupy any space on the hard drive, and is therefore referred to as a virtual filesystem, but it acts and looks like a disk-based filesystem. Viewing some of the files in /proc can give a great deal of information about your system.
If you look at /proc/meminfo, you'll get a nice stack of information about the memory on your system: ---
# cat /proc/meminfo
MemTotal:       515484 kB
MemFree:         74656 kB
Buffers:          5912 kB
Cached:         352464 kB
SwapCached:         12 kB
Active:         126788 kB
Inactive:       289772 kB
,,, Looking at this information, you will see that not only does it tell you how much memory you have, including swap and real memory, but it tells you exactly what the current state of the memory is in terms of free space, and how it's allocated. Chances are that if you run this command again, some of the information will have changed, and this is generally the whole point behind /proc. It's like a snapshot of the current system state. The more advanced user can actually change the functionality of the kernel temporarily by editing the files in the /proc filesystem. For example, to turn on IP forwarding (to allow your system to act as a router passing network traffic arriving on one network interface out on another interface), you can issue the following command: ---
echo 1 > /proc/sys/net/ipv4/ip_forward
,,, Do be aware, however, that this status is not permanent and will be lost at the next reboot. To make it permanent, you need to edit the /etc/sysctl.conf file to include the following: ---
net.ipv4.ip_forward = 1
,,, To learn more about your system, have a root around in /proc. You can't break anything just by looking, and even if you do make a mistake and edit one of the /proc files accidentally, a quick reboot will wipe any changes you've made. Back to the list ****** Using NTL broadband with Linux Q:: I would like to sign up for NTL's cable broadband package, but they tell me they do not support Linux. Does this mean the system will not work with Linux, or just that they cannot provide advice? I do not doubt that if I plug the modem into the Ethernet card it will be picked up, but how do I connect it to the broadband system? Do I use KPPP with different settings or what?
I'm wondering if I need a specialist provider or if I can use any provider as long as I have 'the knowledge'. A:: I can assure you that you can use NTL broadband internet with Linux - I had it myself. All you do is connect the Ethernet port of the modem to the Ethernet port of your computer (you should not need a cross-over cable) and set your network interface to use DHCP. You do not need KPPP, KDE's internet dialler, as cable broadband does not use PPP. You will need to switch on the modem and wait for the RDY and SYNC lights to become stable. This means that the modem is connected to NTL. Now you can bring up your Ethernet interface and it will get its IP address, routing and DNS information from the modem. However, you will find that with NTL, as with most ADSL broadband providers, Linux will work with their service but they don't provide support. The notable exceptions (in the UK at least) are UK Linux and The UK Free Software Network, at www.uklinux.net and www.ukfsn.org respectively. Whichever provider you choose, the most important decision is that you use an Ethernet-based modem. NTL provides one, whereas most ADSL ISPs, especially the cheaper ones, offer a USB modem as standard. Accept the 'free' modem by all means, but budget £20 or so for an Ethernet ADSL modem. Back to the list ****** Mandriva slowing down due to kded Q:: I have just installed Free Mandriva Linux 2006. Every time I boot the machine, within a couple of minutes everything slows right down and it becomes difficult to use. I have GKrellM installed and this shows the CPU working flat out. Checking with top shows the culprit is kded. If I kill this, the problem is solved. What does this daemon do, and can I permanently disable it? A:: Do you have the search tool Kat installed? If so, this is probably the real CPU hog. 
Kat calls kded when working, but although kded shows up in top, the problem is caused by Kat, which is notorious for its ability to bring the most powerful of machines down to ZX81 levels of performance, although later versions are reported to be rather less demanding. Kded is a generic service daemon run by KDE. It handles updates to KDE's Sycoca database, which holds application information. This is probably the part that is sucking up all your CPU cycles. The most extreme solution is to remove Kat, but you can kill the program with ---
killall kat
killall katdaemon
killall kded
,,, The first two lines kill all Kat processes; the third kills kded, which is still trying to process all the requests from Kat. If you check the process list, you'll see that KDE restarts kded, but that it is no longer bogging your system down. You can prevent Kat restarting next time you boot with ---
touch ~/.mdv-no_kat
,,, If you want to re-enable Kat so it starts automatically in future, delete ~/.mdv-no_kat. Back to the list ****** Installing Epiphany in Debian Q:: I am attempting to update the Epiphany browser in Debian 3.1 Sarge, using su && apt-get install epiphany-browser. The following is what I get: ---
Reading Package Lists... Done
Building Dependency Tree... Done
epiphany-browser is already the newest version.
0 upgraded, 0 newly installed, 0 to remove and 2 not upgraded.
2 not fully installed or removed.
Need to get 0B of archives.
After unpacking 0B of additional disk space will be used.
Setting up kernel-image-2.4.27-3-686 (2.4.27-10sarge3) ...
cp: writing '/tmp/mkinitrd.MMwVww/initrd//lib/libc.so.6': No space left on device
cp: writing '/tmp/mkinitrd.MMwVww/initrd//lib/libe2p.so.2': No space left on device
run-parts: /usr/share/initrd-tools/scripts/e2fsprogs exited with return code 1
Failed to create initrd image.
,,, Hmm, 'epiphany-browser is already the newest version'? I've seen Epiphany 1.7.6 available for download!
Unless this is an example of Debian's infamously glacial update cycle, and Epiphany 1.4.8 is the latest version that's available for Debian distros... And what's this 'no space left on device' message on about? Barely 14% of my hard disk is currently occupied! A:: The latest release version of Epiphany is 2.14.3; the latest in the 1.x series is 1.8.5. While Debian has only v1.4.8 in its stable distribution, Debian Testing has v1.8.3 and v2.14.3. See what is available in the various releases at http://packages.debian.org. You need to add testing repositories to your sources list, either by editing /etc/apt/sources.list by hand or by running Synaptic. To do this easily, duplicate the stable entry and change it to testing. As for your 'no space...' message, this refers to a file in the /tmp directory. Do you have /tmp on a separate partition? This is a common setup, and a good idea. It prevents a runaway process from filling up your hard disk as it writes to a temporary file. I suspect this is what has happened, resulting in a full /tmp. Run ---
df -h
,,, in a terminal. If /tmp shows up as a separate filesystem at 100% full, this is what has happened. You can safely delete any file in /tmp that is older than your last reboot. Back to the list ****** How do permission bits work with chmod? Q:: I'm new to Linux and have been a bit confused as to how the permission 'bits' work with chmod. Can you help? A:: Chmod access permissions can be expressed by either three single octal digits or three lots of letters. This trio represents permissions for the file owner, the group and 'world' respectively. Take chmod 755. Each digit is a sum, added up to express the various permissions. Here is each value with the permissions it grants: ---
0 = no permissions (---)
1 = execute only (--x)
2 = write only (-w-)
3 = write and execute (-wx)
4 = read only (r--)
5 = read and execute (r-x)
6 = read and write (rw-)
7 = read, write and execute (rwx)
,,, So to set read and execute permission you'd use 5; this is 1 for execute added to 4 for read access.
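As a quick sanity check, the octal arithmetic can be tried on a throwaway file; this sketch assumes GNU stat is available for its %a (octal mode) output format:

```shell
# Create a scratch file, apply chmod 755 and read the mode back:
# owner 7 = 4+2+1 (rwx), group and world 5 = 4+1 (r-x).
f=$(mktemp)
chmod 755 "$f"
mode=$(stat -c %a "$f")
echo "$mode"    # prints 755
rm -f "$f"
```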
For full access, you'd add 4 for read, 2 for write and 1 for execute: 4+2+1=7. If you set the permissions on a file to be 755, that means the owner has full access (7) and the group for the file has read and execute access (5), as does 'world', ie everyone else. There are other bits you can set for special functions but these are the main ones. Back to the list ****** Simple way to restart Linux networking Q:: I have a PC running SUSE 10.0 that is used by everyone in my household for browsing, email, TV and playing movies and music. None of my family has any Linux user knowledge and this has caused some hassles when I am not around. The PC is connected to broadband via Wi-Fi, which works well almost all of the time. The only problem I have is that when my ISP has problems and drops me off (usually in the early hours of a Saturday) the wireless router needs to be restarted. This requires a restart of the networking on the PC - it's a simple process for me, but I cannot get my non-techie family members to understand firstly why a terminal session is needed and secondly what all that root rubbish is about! They just want an icon to initiate the network restart. Can you help? A:: If your family don't understand why a root password is needed, you definitely should not be giving it to them! This is exactly the sort of situation that calls for sudo. You need to create a script that contains the sequence of commands needed to bring your connection back up and save it somewhere safe, say /usr/local/bin/restartnetwork. Make sure root owns this script and only root can edit it with ---
chown root: /usr/local/bin/restartnetwork
chmod 755 /usr/local/bin/restartnetwork
,,, Add this to your /etc/sudoers file so that anyone in your users group can execute it without a password, like so: ---
%users ALL = NOPASSWD: /usr/local/bin/restartnetwork
,,, This allows all members of the users group to run your script without having a password.
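As for the contents of the script itself, that depends entirely on your setup. The following is only a hypothetical sketch: the interface name wlan0 and the ifdown/ifup pair are assumptions, so substitute whatever sequence brings your connection back. It writes to a temporary path so it can be tried without root; on the real system you would create the file as /usr/local/bin/restartnetwork:

```shell
# Hypothetical restartnetwork script: wlan0 and ifdown/ifup are
# placeholders for whatever actually restarts your wireless link.
script=$(mktemp)
cat > "$script" <<'EOF'
#!/bin/sh
ifdown wlan0
sleep 2
ifup wlan0
EOF
chmod 755 "$script"
```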
If you change NOPASSWD to PASSWD, the user will have to provide their own password. You could specify individual users instead of a group by using a comma-separated list, such as ---
ma,pa,johnboy ALL = NOPASSWD: /usr/local/bin/restartnetwork
,,, Now any authorised user can run the script with ---
sudo restartnetwork
,,, The full path is not needed here if /usr/local/bin is in $PATH, but it must be given in /etc/sudoers. Once you have your script, you can drop it on to the desktop or the panel to create an icon or button and any of your users can reset the network with a mouse click. Because sudo is executing the script as root, all of the commands you put in it will be run as root when called from the script, without giving your family permission to run those commands (or any others) directly. I use this method to add a button to my laptop's panel to start my wireless network - not because I don't know the root password, but because I am lazy and one click is less effort than typing my password. Back to the list ****** Building a simple mail server Q:: I'm a member of a small church that has six computers networked in a peer-to-peer configuration, running Windows XP and 2000. The Internet connection to the local ISP is broadband (384K). We don't have a registered domain name, although we have five POP3 mailboxes on the ISP's server. I'd like to install a Linux server so that the user accounts, passwords and authentication could be managed by the server. I also want to implement file storage on the server. I don't have a problem getting print and file-serving working, though - my question has to do with email. We have the five accounts that the paid and unpaid staff use, but we also have many volunteers who I'd like to set up so they could email each other on the local network. They wouldn't have to have Internet access. Microsoft Exchange could do this but we would need a registered domain name and the ISP would have to point the MX record at the domain.
In addition, Exchange is expensive and overkill for us. Can Sendmail or Postfix be configured to obtain and send mail for the five ISP accounts from our local ISP, as well as handling internal mail without a registered domain name? Reading the manual strongly implies that this can be done, but how? A:: Building a mail server is something that can be done very easily with Linux, and there is quite a range of different mail systems that can be implemented. Sendmail is particularly complex and unless you're willing to learn the configuration file structure, using Postfix or Exim will make your life much easier. Both Postfix and Exim can be configured to accept mail for any domain, such as example.tld, which wouldn't be accessible via the Internet. Fetchmail can be implemented to download mail from the ISP and distribute it to the appropriate local users. You'll also need to implement a POP3 service to enable clients to download messages from the mail server, and most distributions ship with pop3d, which is a basic POP3 server. For such a small system, a simple POP3 server is more than enough. However, if you want to expand and be able to handle users through a database, the Courier mail system has a courier-pop3 mail service that can function with MySQL. There are also many cost-effective hosted options available on the Internet that would give you real mailboxes that people can send email to from anywhere. However, providing email addresses that are only accessible between a small number of hosts will quickly lose its appeal and people will start to ask why they can't send email to the addresses from the outside world. Domains can be purchased extremely cheaply, and many domain providers offer unlimited email addresses. Back to the list ****** Adding new space to the / (root) partition via another partition Q:: My / partition is nearly full. I need more space, and have a free partition where I could put, for example, /usr/lib. But how is that done?
A:: Linux allows you to mount a new filesystem anywhere under your / directory, making it quite easy to use a separate partition for part of the overall filesystem to increase the space available. The trickiest part of the process is moving the data from the original filesystem to the new one. If you do not already use a separate partition for /home, I would strongly suggest moving that rather than /usr/lib, because separating /home carries several advantages. Whatever you do, back up first. If you accidentally delete the wrong data, you'll be glad you made a backup. Copying data, particularly system files, while a filesystem is in use is a risky business, so you should boot from a Live CD, such as Knoppix. This assumes that your current partition is on /dev/hda1 and you are moving home from there to /dev/hda2. Make the relevant adjustments if your system is different. The first step is to run QtParted to prepare and format the new partition. Now open a terminal and type the following: --- su mount /dev/hda1 /mnt/hda1 mount /dev/hda2 /mnt/hda2 rsync -avx /mnt/hda1/home/ /mnt/hda2/ ,,, The first line gives you root access, the next two mount your old and new partitions, and the last replicates everything from the old home directory to the new partition. You could use tar or cp to copy the files, but I find rsync to be the most reliable method of producing an exact copy, including all permissions and timestamps. Now you need to add a line to /etc/fstab so that the new partition will be used. Knoppix comes with the Nano text editor, among others, so do --- nano /mnt/hda1/etc/fstab ,,, and add a line like --- /dev/hda2 /home ext3 defaults 0 0 ,,, This assumes you formatted the partition with the ext3 filesystem, the default in QtParted. If you used ReiserFS instead, change ext3 to reiserfs. If you reboot into your distro and type --- df -h ,,, in a terminal, you will see that /home (or whichever directory you decided to move) is on its own partition.
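If you want to convince yourself that an archive copy really does preserve permissions and timestamps before trying it on a real partition, here is a miniature sketch using throwaway directories (the names are invented; it uses cp -a, mentioned above as an alternative to rsync, so it will run anywhere):

```shell
#!/bin/sh
set -e
# A miniature of the copy above: 'src' stands in for the old /home,
# 'dst' for the freshly formatted partition.
src=$(mktemp -d)
dst=$(mktemp -d)
mkdir "$src/fred"
echo "important data" > "$src/fred/notes.txt"
chmod 700 "$src/fred"        # a mode we expect to survive the copy
cp -a "$src/." "$dst/"       # archive copy: preserves modes and timestamps
stat -c '%a' "$dst/fred"     # prints: 700
[ "$(stat -c %Y "$src/fred/notes.txt")" = "$(stat -c %Y "$dst/fred/notes.txt")" ] \
  && echo "timestamps match"
rm -rf "$src" "$dst"
```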
"But," you are shouting at the monitor, "my / partition is still full!" That is because you copied the data to the new partition, so it is still in the old location too. This was deliberate, so you could go back if something went wrong. The data is there, but invisible because the new partition is mounted on /home, obscuring the original files. You could reboot Knoppix to remove these files, once you are sure you want to, but here is a little trick to save you having to reboot: --- mkdir /mnt/tmp mount --bind / /mnt/tmp rm -fr /mnt/tmp/home/* ,,, This lets you see and delete the files in the old home directory. Make sure you only delete the contents, not the directory itself. That is needed to mount to the new partition. You could do this with /usr/lib as you suggest, but /home is a better choice if not already mounted elsewhere (otherwise look at moving /usr/local). A lot depends on how much space you want to free up, so it helps to know how much space each directory is using. My favourite tool for this job is Filelight, available from www.methylblue.com/filelight and included in some distros' package repositories. Back to the list ****** Cannot import Microsoft Money files into KMyMoney Q:: I have been using Linux for about three years; two years running Mandrake and one running Ubuntu. I am trying to convert my other half to use it as well - she already uses OpenOffice.org on her XP machine. I have loaded a disc for her with Ubuntu Dapper Drake but there are two snags. One is that I cannot successfully export MS Money files to KMyMoney: the data files will export to QIF files, but KMyMoney will not import them no matter what I do. It says it is an unrecognised format, probably the Microsoft version of a QIF file. The other problem is that she also uses MS AutoRoute and I cannot find an equivalent for Linux. I was thinking of using Wine as an alternative but I know nothing about running Wine, or how one installs a Microsoft program using it. 
Any help that you could give to me would be most gratefully received. A:: The answer to your first question is to use a different program to convert the files. GnuCash will import Microsoft QIF files, which you can then save out in GnuCash's own format. GnuCash has options to handle several variations on the QIF format (I've successfully imported files from MS Money in the past). KMyMoney has an option to import GnuCash files - I use this because I keep my accounts in GnuCash but like KMyMoney's reporting options. The reason for using GnuCash's own format for saving is that this is a fixed format, whereas QIF files come in quite a variety of flavours. You could also try the latest version of KMyMoney, released recently - it mentions improved QIF support. There is a route-planning package for Linux called Navigator, from www.directions.ltd.uk. This is a commercial product that works on x86 Linux and Windows. There's no demo version, so check with the manufacturer for compatibility first. Installing Wine is easy with Ubuntu: there is a package in the Universe repository. Select Settings > Repositories in Synaptic and tick the box for 'Ubuntu 6.06 LTS (binary) Community maintained (Universe)', then close the repositories window and hit Reload. Once the reload is complete, use the Search button to find Wine. You need only select the wine package itself. Once Synaptic has installed the package, run winecfg to set things up, although the defaults are fine for most uses. Now you can run a Windows program with --- wine /path/to/someprogram.exe ,,, Try the Wine Applications Database at http://appdb.winehq.org for information on compatibility with various programs. You could consider, too, CrossOver Linux, the commercial derivative of Wine from www.codeweavers.com. Back to the list ****** WebDAV Hotmail access - moving mails to POP3 server Q:: I've got WebDAV access to my Hotmail account. Is there any way I can get it into my POP3 server?
A:: There are a few things out there that will do the job for you. I prefer to use Hotwayd. It runs as a simple inetd service and can be used in conjunction with Fetchmail. Get the source from http://hotwayd.sourceforge.net and once you've expanded the archive, simply install it with your favourite configure options. When you've done that and Hotwayd is installed, you need to activate it. To do this with xinetd, create a file in /etc/xinetd.d called hotwayd and populate it as follows: --- service hotwayd { only_from = 127.0.0.1 disable = no type = unlisted socket_type = stream protocol = tcp wait = no user = nobody groups = yes server = /usr/sbin/hotwayd port = 1100 } ,,, Restart xinetd and you're sorted! From there you can use Fetchmail. Simply create a .fetchmailrc file in your home directory containing: --- poll localhost protocol pop3 port 1100 username "username@somemail.com" password "yourpassword" ,,, Now run Fetchmail. It will poll and pull down your mail from Hotmail to your local POP3 server. Back to the list ****** Restore user to admin group in Ubuntu Q:: I have been a bit of a fool and removed admin privileges from all three users on my Ubuntu system. This cut down the System-Administration menu to only a few items. At the moment, I do not have a user in the admin group that I can log in as to manage system items. I can use Gnome Terminal as sudo but cannot work out how to add one of the users back in to the admin group. I tried to use the usermod -g command but cannot get it right. That is, of course, if I am using right command. How can I add my user 'master' to the admin group again? A:: First, don't worry about having made a mistake, we all do it. The two most important aspects of making a mistake are learning from it and not letting anyone else know you've done it. 
The command to add your user to the admin group is --- gpasswd -a master admin ,,, However, only the root user can manipulate the password and group databases, so you have a Catch-22 situation here: you need to use gpasswd to add yourself to the admin group, but you need to be a member of the admin group to do this. Do not despair, there is a simple solution. The installation disc is also a Live distro, set up so you can run root commands with sudo. Boot from the disc, open a terminal and run --- sudo bash mount /dev/hdaN /mnt nano /mnt/etc/group ,,, Replace hdaN with whichever partition contains your Ubuntu installation. Nano is an easy-to-use console text editor. Scroll down to the line beginning 'admin:x:112:' and add 'master' to the end, so it reads --- admin:x:112:master ,,, You can add more than one user if you wish by separating them with commas, for example: --- admin:x:112:master,slave ,,, Don't worry if the number is not 112; leave it as is. Press Ctrl-X, then Y, to save the file and exit, then reboot from your hard disk. You should now have your admin privileges again. Back to the list ****** Slow network card: what is half duplex mode? Q:: My box has really poor network performance. Someone recently mentioned I might be set to half duplex (whatever that is). How can I find this out and what speed I am connected at? A:: Firstly I'll explain half duplex. In a nutshell this means that your network card has negotiated a link with your network hardware on which it cannot send and receive packets at the same time; in essence, traffic can only flow in one direction at any moment. If you are using any modern piece of network hardware you should be able to achieve full duplex easily. When a NIC is connected to a network device it has to negotiate a compatible speed and duplex setting at the physical layer. On most cheaper switches this is done through a process known as autonegotiation: the switch 'advertises' what link modes it supports, the NIC chooses one and informs the switch.
This is the default behaviour for most NICs. On more expensive managed switches this setting can be fixed to ensure optimal performance. Often, if this is configured on the switch but your machine is still set to Autonegotiate you'll end up with a duplex mismatch, which causes network performance to be poor. To find out what your NIC is currently set to you need to use the ethtool command: --- [root@dan ~]# ethtool eth0 ,,, This will show you various details. Note the Duplex and Speed entries; you'll also see what advertised modes the switch supports. Assuming your duplex is the issue and your switch is hard set to, say, 100Mbps for speed and Full Duplex, you can change eth0's setting by executing --- ethtool -s eth0 speed 100 duplex full autoneg off ,,, Be aware, though, that this will revert when you reboot the system. To set it permanently you should pass these options to your NIC driver in modules.conf, so they are applied when the driver is loaded. If this doesn't solve the issue there are a number of things you can look at, but first you need to narrow down the issue. Is it a particular service that is slow? Your network connection could be fine but a service could be slow to respond for a number of reasons. Run ifconfig and see if you have any Tx/Rx errors or collisions - is it just your machine? Could it be affecting several machines due to a saturated switch? In essence, you need to track down where the issue lies to define your problem and resolve it! Back to the list ****** Create a web-based TV station in Linux Q:: Is it possible to use Linux to create a web-based TV station, broadcasting over the internet, mixing live feeds from webcams or video cameras? Secondly, if it is, is it possible to do this totally with open source? A:: Yes, it is possible, and all with open source software. You haven't given us much detail about your intended project, so it is difficult to give specific help, but the (LS)3 Open Media Streaming Project looks to be a suitable starting point.
This includes Fenice, a multimedia streaming server, and plenty of documentation to help you. It specifically mentions streaming from live video feeds. Fenice supports Video4Linux devices, so any webcam that works with Linux should be suitable. The (LS)3 website is at http://streaming.polito.it and includes discussion forums where you can exchange information with the developers and other users. Another server worth investigating is Flumotion, from www.fluendo.com. This is a commercial project, but the basic server is free under the GPL. You may also find a use for FreeJ, for mixing images and effects in real time. Its website is at http://freej.dyne.org. Finally, you should look at Dynebolic, a distro aimed at multimedia production and broadcast. It can be used as a Live CD, enabling you to try it out before installation. You can get the latest version from www.dynebolic.org. Good luck with your project and let us know when it goes public! Back to the list ****** Removing Linux and restoring Windows partitions Q:: I purchased Fedora on DVD from a publication and installed it on my Acer notebook, thinking I would always have access to my programs already installed on my Windows desktop. That was a wrong assumption on my part. As you can tell I'm new to this Linux OS. I realise there is the Wine software that allows one to incorporate Windows with Linux. Unfortunately, when I partitioned my hard drive, I lost my wireless connection to the internet and I can't seem to reconnect. Also, I have important software that I have to use on my Windows XP Pro but can no longer access. My question to you is: how do I undo the partition, thus removing Fedora until I'm ready to reinstall it? I've tried to use the Partition Magic 8.0 software that I have but it won't work since it's an .exe file. I should also mention that my notebook came pre-installed with Windows XP and therefore I don't have the Windows XP CD.
I've gone to different websites from my PC but to no avail since they all mention having the XP CD. A:: There are two possibilities here. The first is that you deleted your Windows partition when installing Fedora by choosing the option to Remove All Partitions On Selected Drives. If this is the case, you have lost Windows and will have to reinstall. You should be able to obtain an installation CD from your laptop's supplier or manufacturer. The second, and hopefully correct, possibility is that you still have Windows installed but have lost the option to boot it. Most distros' installers have the option to set up a dual-boot with Linux, where you get to choose your operating system each time you boot up. When you see the message 'booting Fedora... in n seconds' shortly after booting, press a key and you will see a menu. If Windows is on this menu, select it and you'll have Windows working again. To remove the Fedora bootloader and have your system boot straight into Windows needs a Windows rescue disc. As it does not need a full XP installation CD, you can usually fix things with one of the boot discs available from www.bootdisk.com. It will be easier if you have access to a working Windows computer to download a disc image from here and write it to a floppy disk. Boot into the rescue system and run fixmbr to restore the Windows bootloader and remove the Grub menu. Fedora will still be there, but you can now run Partition Magic to reclaim the space it uses. Back to the list ****** Faster network sharing than NFS Q:: I've been using NFS between two boxes but have noticed that it's not the fastest transport in the world. Is there anything else you can recommend? A:: Indeed, there's a great project that has developed something called the Network Block Device (http://nbd.sourceforge.net). It's something that has been compiled into the kernel for some time now and essentially presents a remote file or disk as a local block device.
The only downside is that you can only have it mounted read/write by one machine. Assuming that this wouldn't cause you any problems, I'd suggest you give Network Block Device a go - it's much faster than NFS and is really straightforward to configure. First of all, because NBD uses a file rather than a directory as its device you need to create a file of the size you require. To create a 1GB NBD you can do the following on the server: --- dd if=/dev/zero of=/mnt/nbd-drive bs=1M count=1024 ,,, This will create a 1GB file as /mnt/nbd-drive. Next up you need to tell the NBD server to start up, listen to a certain port and use the file we just created. In this example we are using port 1077: --- nbd-server 1077 /mnt/nbd-drive ,,, Once this is done, ensure the nbd module is loaded on the client machine, then connect to the server: --- modprobe nbd nbd-client 192.168.1.2 1077 /dev/nd0 ,,, Obviously you need to replace the IP given here with that of your server. You can use any filesystem you want to with NBD - because this is the first time we have accessed it, we'll format it ext2: --- mke2fs /dev/nd0 ,,, And finally we can mount it: --- mount -text2 /dev/nd0 /mnt/nbd-drive ,,, If your server has multiple network cards you can start NBD on multiple ports to provide extra capacity or resilience: --- nbd-server 1077 1078 1079 1080 /mnt/nbd-drive ,,, And then on the client you can specify multiple IPs and ports: --- nbd-client 192.168.1.2 1077 1078 192.168.2.2 1079 1080 /dev/nda ,,, Back to the list ****** DigiKam won't recognise Trust digital camera Q:: I have just been given a new digital camera (manufactured by Trust), and I have been unable to get DigiKam to recognise it. However, /var/log/messages seems to detect that a device has been added to the USB port, and the camera is powered up. I don't know anything about USB, but I do know that an entry does not exist in my /etc/fstab. Could this be the problem? Is KDE's DigiKam the best software to use?
A:: You don't need an entry in fstab for KDE's automounting to work. In fact, it generally works best without such an entry. However, not all digital cameras work as USB mass storage devices; some use camera-specific protocols. Does /var/log/messages show partitions when you connect the camera, in much the same way you would see with a USB memory stick? Something along these lines would indicate that it is a mass storage device: --- usb-storage: waiting for device to settle before scanning Vendor: NIKON Model: NIKON DSC E3200 Rev: 1.00 Type: Direct-Access ANSI SCSI revision: 02 SCSI device sda: 2012160 512-byte hdwr sectors (1030 MB) ... SCSI device sda: 2012160 512-byte hdwr sectors (1030 MB) sda: sda1 ,,, If not, this camera doesn't show itself as a storage device. But DigiKam should still recognise it if it is supported by Gphoto2, and you have Gphoto2 installed. Gphoto2 is the command-line client for libgphoto2, which is used by DigiKam. If you don't have it installed, you should find it on your distro's discs. You should run it as a normal user and as root - any difference in the output will indicate a problem with permissions. To find out if your camera is supported, run --- gphoto2 --auto-detect ,,, and note what it shows. If your camera is not recognised, check the archives of the mailing lists at www.gphoto.org and send the developers details about your camera. DigiKam is a fine program for managing digital photos, but if all you want to do is copy pictures from the camera, you can do that with Konqueror, by typing camera:/ in the location bar. This will show a list of any connected cameras that libgphoto2 recognises. Back to the list ****** Check email anywhere from SquirrelMail server Q:: I have a Linux mail server (SquirrelMail). How can I check my mail from any Windows workstation without installing any extra software? A:: You can read your mail using any standard mail software, or even Outlook Express. 
If you want to be able to read it without reconfiguring the mailer on the workstation, as would be the case if you were only using it temporarily, your best option is to install a webmail program on the server - that way you can read your mail from anywhere with nothing more than a web browser. One of the most popular webmail servers is SquirrelMail (www.squirrelmail.org). You will need an IMAP server running on the Linux box, because most webmail programs use IMAP. SquirrelMail is a PHP program, running through a web server, so you will also need Apache (or another web server) installed and running. Once installed and configured, which is well documented and a simple process, you can access your mailbox from most web browsers. While SquirrelMail is one of the more popular and longstanding webmail projects, there are several other choices; I rather like RoundCube, from www.roundcube.net. This is an Ajax project, and although it's only at release 0.1beta2, it seems stable with a reasonable feature set. Which of these you choose to use, or whether you go for another alternative, such as NeoMail (http://neocodesolutions.com/software/neomail), depends on your needs. If you need only occasional access to check your email when away from your own computers, I would recommend you try RoundCube, although any of these is fine for this task. If you anticipate a heavier use requiring more of the features of a full email client, you should try them all and see which suits your needs best. You can have more than one of these installed at a time by putting each one in a different directory on the server. That way you can jump between them before deciding which one to settle on. Back to the list ****** Have I got a trojan? Q:: I'm running a Debian unstable-based distribution, with chkrootkit for security reasons.
It recently gave me a message that reads: "lkm you have 2 process hidden for readdir you have 2 process hidden for ps command warning possible LKM trojan installed". Does anyone know a well-reputed trojan remover for Linux? Does anyone else get messages like this? How would I remove them? A:: It's not unusual for chkrootkit to throw up some false positives if it isn't compiled against the specific kernel build being used. Some recent kernels have kernel-space processes that chkrootkit will identify as possible trojans. A great way to test the system for malicious processes is with the kstat utility, which can give a list of processes that the kernel knows about as opposed to those picked up by ps. These two lists can then be compared and any malicious processes identified. There are quite a few Linux trojans that install modules and startup processes to perform a variety of malicious activities. However, they generally throw up other red flags in chkrootkit, such as changed system binaries. If you have any concerns that a system is compromised, booting from Knoppix or another rescue disk, or simply using the busybox binary to execute ps and ensure that it isn't compromised, will reassure you that your system is safe. Back to the list ****** Ubuntu installation: GRUB being installed in Dell hidden partition Q:: My father-in-law recently expressed some interest in GNU/Linux, so I told him to download the brand-new-to-Linux distro du jour, Ubuntu. I really, really, expected the install to go smoothly, but he ran into a problem that has me stumped. The answer is probably simple, but beyond me. He is using a new-ish P4 Dell that came with Windows XP pre-installed. He had about 10GB of free space to use for the installation. The normal installation procedure ran smoothly and instructed him to reboot. Upon reboot, Grub returned an Error 21.
After he did a little digging, he found out that Dell places a small, invisible partition on its disks that contains Dell tools and utilities. Apparently, the MBR [Master Boot Record] has been moved somewhere unusual on these machines. After consulting the marvellous Ubuntu forums, I discovered that, a/ yes, this is the problem, and b/ nobody seems to have a good solution. So, how do I configure Grub so that it boots properly? I don't have physical access to the machine, but I know it only has a single hard drive. After install, the partitions should lay out something like: --- hda1: Dell super-secret files hda2: Windows hda3 onwards: /boot, / and swap ,,, A:: You don't say which model of Dell this is, but the usual layout is to put the Dell utilities partition on hda1 and a bootable Windows partition on hda2. The MBR should be in the usual place, otherwise the BIOS wouldn't be able to find the partitions. Grub Error 21 is a stage 2 error, which tells us that Grub has already loaded from the MBR and found its stage 2 files in /boot to be able to get as far as this error. Error 21 means 'Selected disk does not exist' so this would appear to be an error in the Grub configuration: trying to load a kernel from the wrong place, such as a non-existent partition. This is definitely the case if Grub is able to load Windows (which would at least prove that Grub itself is working). Press Esc to get to the Grub menu, highlight the Linux entry and press 'e' to see the details. You should see something like this: --- root (hd0,0) kernel /boot/vmlinuz-2.6.15-23-386 root=... ,,, There is a good chance the root setting is wrong. Press 'c' to get the grub command prompt and type --- find /boot/vmlinuz-2.6.15-23-386 ,,, using the full filename specified in the kernel line above. This will return the location of the partition that contains the kernel, which will probably be (hd0,2) or (hd0,4), depending on whether /boot is on a primary or logical partition.
Press Esc to get back to the menu entry, highlight the 'root' line and press 'e' to change it to match the output of find. Press Enter to accept the change then 'b' to boot. Once you know it works, you can edit the configuration file to make the change permanent by running this in a terminal: --- sudo nano /boot/grub/menu.lst ,,, You'll find the menu details below the line that reads '## ## End Default Options ##'. The bad news is that you really need physical access to the computer to do this, or to be able to talk your father-in-law through it. Back to the list ****** Automating layout with Tetex or Lyx Q:: I would like to print an image, JPEG or BMP, and a text file in one report. Is there any freeware that would allow me to do this easily via the command line? I want to pre-generate reports via a script automatically. A:: Yes, there are a number of ways to do this. Which one you choose depends on the quality of output you need and how much time you are prepared to spend implementing it. The easiest way is to write the report to HTML, which can be viewed or printed in a browser. The following shell script will take the names of an image and a text file and write the HTML to standard output. It's a very basic example, but you'll get the idea. --- #!/bin/sh echo "<html><head><title>My Report</title></head><body>" echo "<img src=\"$1\" align=\"right\">" cat "$2" echo "</body></html>" ,,, At the other end of the spectrum would be one of the Tex-based packages, such as Tetex or Lyx. These are typesetting programs that give you a great deal of control over the finished document's layout. The learning curve is steep, but the results may justify it. Tex source files are plain text, so it would be easy to generate them from the command line using a template file and a short shell script. Lyx provides a somewhat simpler means of producing Tex files. It is a graphical application, but once you have created your template you could manipulate it with a shell script.
You have a choice of splitting the template into three and doing something like this: --- cat template1.lyx >report.lyx echo /path/to/my/image >>report.lyx cat template2.lyx report.txt template3.lyx >>report.lyx ,,, Or you could do something more complex by using sed to replace parts of the template with your text and image files. Once you have the report.lyx file, you can output it in a number of formats, all at the highest quality. For example, --- lyx --export pdf report.lyx ,,, will produce a PDF report. Lyx is a powerful program with detailed online help. Give it a try. An alternative is to use the scripting capabilities of a word processor or page layout program such as OpenOffice.org, AbiWord or Scribus. Back to the list ****** Canon LBP-1120 printer not working in Linux Q:: I have a dual boot system (Windows XP and SUSE 10.1) and a Canon LBP-1120 laser printer. The printer works fine on the Windows system. The problem is that try as I might, I cannot make it do anything on the Linux setup. I have downloaded and installed the CAPT drivers (numerous times) and then gone through the printer configuration and generally, nothing happens. The most that has happened is that occasionally the printer will send a piece of paper through, with nothing printed on it. Apart from that, anything that I try to print just stays in the print queue, unless I email it to myself and use Windows to do the printing. A:: This is a 'Winprinter' - one that uses the driver to do part of the work of the firmware. As with their cousins, Winmodems, getting Winprinters to work with anything but Windows can be a frustrating process that is not always successful. You have a choice of drivers for this printer: there is the official Canon driver, which is the one I guess you have tried, and one recommended on www.linuxprinting.org that is available from www.boichat.ch/nicolas/capt.
I suggest you try both of these drivers and also follow the advice given for this printer at http://linuxprinting.org/show_printer.cgi?recnum=Canon-LBP-1120. When diagnosing printer problems, your first step should be to check the CUPS log files. Type --- tail -f /var/log/cups/error_log ,,, in a terminal, then try to print a page. You should see messages written to the error log in the terminal. This often gives a clue as to the cause. By default, the logged messages are quite limited. If you need more information, edit /etc/cups/cupsd.conf (as root), find the line --- LogLevel info ,,, and change 'info' to 'debug'. Restart CUPS, either from Yast or the terminal with --- /etc/init.d/cups restart ,,, Now the error_log will contain much more detail. As a general point of advice, any Linux user should check the printer database at www.linuxprinting.org before investing in a printer. Back to the list ****** Monitor services and restart them if they die Q:: I am running a number of services on my server - is there a way that I can monitor these services, and restart them if they die? I wondered about using some sort of Cron task. A:: There are a number of programs written specifically for this task - the most popular of which is probably Mon, which you can get from www.kernel.org/software/mon. There is quite a long list of dependencies, mainly Perl modules, so it would be most convenient to install it with your distro's package manager. Mon can be installed on the computer that you wish to monitor or on any other computer that can reach it over the network. The latter is a better choice, as it will be able to let you know if the server dies altogether. Mon is controlled by a config file located in /etc/mon.
Here's an example section that monitors a web server:
---
hostgroup servers www.example.com

watch servers
    service http
        interval 5m
        monitor http.monitor
        period wd {Sun-Sat}
            alertevery 1h
            alert mail.alert webmaster@example.com
,,,
This will attempt to connect to the web server every five minutes and email an alert if it fails. The alertevery parameter means that although it will continue to check every five minutes, it will not send a mail on every consecutive failure, only nag you every hour. Mon is able to monitor more than services: it can also keep track of things like disk space and processes, which could help you prevent a rogue program or denial of service attack stopping the server completely. There are other alert options supplied with Mon, including pager alerts (after all, there's no point in an email alert if the mail server has just died). Monitors and alerts are Perl scripts, so you can customise them or build your own - the Mon website has a collection of user-contributed monitors and alerts; you can even be nagged by AIM or text message if you really want. Another program worth considering is Monit - www.tildeslash.com/monit. This works in a similar way to Mon, but is designed to run on the server itself and be able to take corrective action rather than disturb the sysadmin. Monit is able to restart a service that has died - it also has a built-in web server that enables you to log in from a remote computer to check on the status of monitored services. The safest approach is to run Mon remotely and Monit locally.
Back to the list

****** Truncated files on Red Hat server after FTP uploads
Q:: I am having some trouble with an updated Red Hat server at work. We have an applet that makes an FTP connection to the server, and users can upload files. This all works fine. The problem lies with a script that runs on the server to look for changes in the modification date of the folder that they are loaded into. When the date changes it processes the file.
This worked fine on the old server, but on the new server it appears that while the transfer is in progress the folder modification date is being changed. This means that we are getting truncated files, because the process starts before the transfer is complete. Is there any way to set how folder modification dates are created? Is it an FTP issue or an OS issue? The server is RHEL ES 4, the old server was RHEL ES 2.
A:: Is your script looking at the file modification dates manually? The problem is that the folder is modified twice: once when the new file is opened at the start of the upload and again when it is closed on completion. I had to set something like this up recently and found the best approach is to use the Fam (File Alteration Monitor) service, which is able to distinguish between these events. You need to install Fam and ensure that the famd service is run at startup. Then you need a program that will ask the server to watch for changes to your files or directories and take the appropriate action when informed of them. I found the fileschanged program ideal for this if all you need to do is run a script. You can get fileschanged from http://fileschanged.sourceforge.net, and run it like this:
---
fileschanged --show changed --exec /usr/local/bin/ourscript /var/ftp/somedir/
,,,
The --show option tells fileschanged to listen only for changes to files in the directory, skipping the initial notification when the file is created (fileschanged can also watch for files being deleted or executed). When it receives a notification, it runs the script with two arguments. The first is a single letter indicating the type of file change: 'M' for modified. The second argument is the name of the file. With this information, your script knows the name of the file that was uploaded and can do whatever you want. You may also find it useful to add a --timeout option to increase the delay in notification of changes to a file.
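The script that fileschanged runs only needs to inspect those two arguments. Here is a minimal sketch of such a handler (the processing command and filenames are hypothetical stand-ins for whatever your existing script does):

```shell
# handle_event EVENT FILE - sketch of what /usr/local/bin/ourscript might do.
# fileschanged passes the event letter ('M' for modified) in $1 and the
# changed file's name in $2; the echo stands in for your real processing step.
handle_event() {
    case "$1" in
        M) echo "processing upload: $2" ;;
        *) echo "ignoring event $1 for $2" ;;
    esac
}

handle_event "M" "/var/ftp/somedir/upload.dat"
```

Because Fam reports the close-on-completion event separately, the handler only fires once the upload has finished, which is exactly the behaviour the old server gave you.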
Back to the list

****** Log network traffic on selected interfaces
Q:: I need to keep track of how much bandwidth my servers are using. How can I log network traffic for all or selected interfaces?
A:: There are a number of programs that will monitor and display the traffic through each network interface; most of these use information culled from the /proc filesystem. The main differences between them are the way in which they display the statistics. For a simple overview, Vnstat is a good choice. Available from http://humdi.net/vnstat, and probably in your distro's package repositories too, Vnstat is normally run as an hourly Cron job, collecting statistics from /proc and adding them to its database. You can query this database at any time by running Vnstat from the command line. There are options to display the statistics by day, week or month as well as various other ways of tweaking the output. If you need more than a simple ASCII report, you should try Traffic-vis, from www.mindrot.org/traffic-vis.html. This consists of a number of tools; the one that does most of the work is Traffic-collector, which should be running all the time. Traffic-collector collates information on the traffic passing through the specified network interfaces and saves this data to a file. This file is not meant to be read directly but passed to one of the other programs in the suite that process the traffic data and produce reports in HTML, PostScript, plain text and GIF formats. The HTML option is particularly interesting if you want to monitor a web server, as you could have a CGI script run Traffic-tohtml and give you on-demand traffic reports from a web browser.
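Such a CGI script only has to print an HTTP header before the report body. A sketch, with cat standing in for the Traffic-tohtml invocation and the report path an assumption:

```shell
# report_cgi FILE - emit a CGI header, then the HTML report body.
# In real use the body would come from Traffic-tohtml reading the
# collector's data; here cat is a stand-in so the sketch is runnable.
report_cgi() {
    printf 'Content-Type: text/html\n\n'
    cat "$1"    # stand-in for: traffic-tohtml < collector-data-file
}

# Example: serve a pre-generated report file
# report_cgi /var/lib/traffic-vis/report.html
```

Dropped into your server's cgi-bin, a script like this gives you the on-demand browser view described above.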
There are other utility programs included that can process the data in other ways; for example, Traffic-exclude is a useful option if you have bandwidth limits or charges and want to know only how much traffic the interface passed over your more expensive connection while ignoring any traffic between, say, the web server and database server on the same network.
Back to the list

****** Make KFind skip directories and mount points
Q:: I am using Kubuntu 6.06 with the Ichthux packages but my question is common to any distribution using KDE. The 'Find Files' utility searches every directory on the root (/), including the directories in the /mnt directory. This means KFind is searching my files in other distributions, and I usually have several, so KFind loses a lot of time here. Is there some way to configure find to skip /mnt? Otherwise the only solution I can think of would be to unmount /mnt every time before using KFind, but that might create problems.
A:: KFind is a front-end to two standard shell commands: find and locate. Unfortunately it doesn't give access to all of the options of find, such as specifying which directories or filesystems to search or skip. All you can do is give a starting point. This isn't an issue when searching your home directory, the default, but it can cause the problems you describe when trying to search the whole filesystem. Happily, with the 'Use files index' checkbox you can elect to use locate instead of find. Locate uses a database built with the updatedb command for much faster searching, although it only finds files that were present when the database was last built. Updatedb is usually run as a daily or weekly cron task. The search path of locate is configurable, so you should add /mnt to the PRUNEPATHS list in /etc/updatedb.conf.
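The change to updatedb.conf can be made in any editor, or with a sed one-liner. This sketch works on a copy of the file, and the existing PRUNEPATHS value shown is only an example - the exact list varies between distros:

```shell
# Append /mnt to the quoted PRUNEPATHS value in a copy of updatedb.conf
# (edit the real /etc/updatedb.conf as root once you are happy with it).
conf=$(mktemp)
echo 'PRUNEPATHS="/tmp /var/spool /media"' > "$conf"
sed -i 's|^PRUNEPATHS="\(.*\)"|PRUNEPATHS="\1 /mnt"|' "$conf"
cat "$conf"    # prints: PRUNEPATHS="/tmp /var/spool /media /mnt"
```

The next time updatedb runs, everything under /mnt will be left out of the index, and KFind's locate-based searches will skip your other distros.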
For maximum flexibility, it is worth learning the find and locate commands themselves, eg
---
find / /home -xdev -iname '*.pdf'
,,,
will look for all files ending in .pdf or .PDF in / or /home, but ignore other filesystems (thanks to the use of -xdev) such as /proc, /dev and those mounted under /mnt or /media. The find and locate man pages will give you a lot more information, but the main thing to remember is that locate is for a fast, name-based search while find allows far more control over the search parameters, including filename, file type and file age as well as the directories and filesystems searched. Unmounting filesystems mounted under /mnt may work, but is just as likely to fail if you have an open file or directory in any of them. Either way, it shouldn't be necessary.
Back to the list

****** How to Enlarge GTK fonts
Q:: I am looking for information on how to enlarge the fonts in Xara Xtreme, Gimp and similar programs that do not change when System Config > Appearance and Themes > Fonts are manipulated. I am using SimplyMepis - and like it!
A:: Xara Xtreme and Gimp use the GTK2 toolkit, whereas the System Config settings only affect KDE programs. I would normally recommend that you install the gtk2-engines-gtk-qt package, which makes Gnome and other GTK programs use your KDE settings and adds options to control the appearance of GTK programs to the Settings menu and the KDE Control Centre. However... although the package is installed by default with Mepis 6.0 it doesn't work. The programs are there but it is impossible to load them (others have reported the same on the SimplyMepis forums). Worry not: there is another way to do what you want, by installing gnome-control-center and using gnome-font-properties. Run the program by typing gnome-font-properties in a terminal or the Run command dialog that appears when you press Alt+F2. This allows you to set Gnome fonts in a similar manner to the way you set KDE fonts.
This technique will modify the fonts for the current session, but they will revert to the defaults on the next restart. To make the changes permanent, type this in a terminal as your normal user (not root):
---
ln -s /usr/lib/control-center/gnome-settings-daemon ~/.kde/Autostart/
,,,
This ensures that gnome-settings-daemon is started whenever you load your KDE desktop. The daemon causes your Gnome settings to be applied to all GTK programs.
Back to the list

****** Installing TightVNC: ./configure command not working
Q:: I have my usual problem in trying to install a VNC program to my laptop running Xubuntu. I downloaded all the VNC programs on to a recent disc and copied them to my laptop. I then used the correct tar instruction to unpack the archive both of TightVNC and VNC, ran cd into the resulting directory, typed ./configure... and was told the instruction did not exist. I'm afraid this is the usual state when I try to load your programs either on Ubuntu or SUSE. Am I doing something wrong?
A:: In my ever so humble opinion, it is the distro makers that are doing something wrong. They assume that you will find every program you need in their repositories and that only developers will need compiler tools. In reality, most Linux users will need a compiler at some time, even if it is only to install a driver for their network card or the latest Nvidia graphics card drivers. Even installing VMware, a closed source binary package, requires a compiler for the kernel modules. Many of these require the kernel source too, another package considered non-essential. That's enough ranting: on Ubuntu, you need to install the build-essential package, which includes everything you need to compile from source. With SUSE, you need the gcc package. If you want to install software from source tarballs, which will happen sooner or later, these packages will be essential, so install them now and avoid any grief later. However, in this case compiling from source is not necessary.
Both distros include recent versions of TightVNC in their standard repositories or on the install discs, with Ubuntu also including the standard VNC. Unless you need the latest, most bleeding-edge versions, it makes sense to use the packages that came with your distro, as they have been tested, and you will be informed of any updates.
Back to the list

****** Modem not working in SLED 10
Q:: Try as I might I cannot get my modem working in SLED 10. It keeps asking me for an Ethernet card, which I don't have, and saying that it is not connected. I know the card is not damn well connected because I don't have one! How do I configure the modem to work? I am on dial-up, I do not have broadband.
A:: When you installed SLED, you should have seen a screen listing your networking hardware (Ethernet/DSL/modem) with options to configure each. If you have an Ethernet connection in your computer - and most motherboards have one built in nowadays - this will be used as the default if you didn't select anything else. (Remember that SLED is an 'Enterprise Desktop' distro, so its native habitat is a PC connected to a LAN.) I suspect this is what has happened here. The default setting for a network card is to use DHCP to ask the network for an address and other connection details. Because your network card is not connected, this request fails, giving the error message you see. You need to disable the Ethernet connection and enable your modem. Both these tasks are done in the Control Centre. Select Network Cards, highlight your network card and press Delete. The card will still show up, but it is now listed as Not Configured. Now go back to the Control Centre and select Modem. From here, you can choose an ISP and input your connection details.
Back to the list

****** Configuring Apache and NAT
Q:: I have Mandrake 10.1 with Apache running my own website in Apache's default document root. A friend asked if I could host his site with his DNS (www.somename.com).
I said yes and thought it would take about ten minutes to set up, but as I read the Server School Apache articles in the Complete Linux Handbook parts 1 and 2, it didn't seem like it was going to be quite so simple a process. So far, I've simply gone to www.no-ip.com and added his chosen domain to my account, as well as my own. I thought that the next step was to add him to a user account on my system and so I created /home/somename/html and put his website in the html dir. When I tried http://localhost, it came up with my site, so I thought I'd try his www.somename.com site, but it came up with my router's login page and not his. What am I doing wrong here? Here's a copy of my Vhosts.conf, if that helps:
---
NameVirtualHost 192.168.0.5
<VirtualHost 192.168.0.5>
ServerName www.somename.com
#ServerPath /domain
DocumentRoot /home/somename/html
</VirtualHost>
,,,
A:: If you're seeing your router configuration page when you connect, it sounds more like a network issue than an Apache configuration problem. You'll need to permit port 80 through your router and NAT it onto the 192.168.0.5 internal address. If you're hosting your own site, there should already be a rule. However, as you were accessing it via 'localhost' rather than its real outside address, there could be a DNS misconfiguration at some point. Your Apache VirtualHosts configuration appears to be correct, and you should be able to see successful requests in the access_log file to verify that the appropriate DocumentRoot entry is being hit. You'll also need to add a VirtualHost entry for your own site, as well as localhost, because once NameVirtualHost is used, the default DocumentRoot configuration options are ignored.
Back to the list

****** Add copy protection to CDs in Linux
Q:: With Windows, there are commands in Nero and other software that enable you to put copy protection on to the CD-ROMs that you make. Is there such a command with Linux?
A:: If by "copy protection" you mean the sort of thing that commercial CDs have, the answer appears to be no. The idea of restricting copying is anathema to free software. However, if you want to encrypt your data to protect it from prying eyes, such as when backing up personal files, the answer is yes. You might have come across an application called Cdrecord, which is the CD-writing back-end used by most CD programs. There is a patch for this available to add encryption of the data as it is written to disc. Most distros do not include the patched version of Cdrecord (which is contained in a package called cdrtools), but you can tell if your copy includes encryption with
---
cdrecord --version
,,,
If this does not state that encryption is included you will have to patch and build it yourself. This is a fairly simple process: download the Cdrtools source from ftp://ftp.berlios.de/pub/cdrecord and the matching patch from http://burbon04.gmxhome.de/linux/CDREncryption.html, then execute the following commands as root:
---
tar xjf cdrtools-VERSION.tar.bz2
zcat cdrtools-VERSION-encrypt-1.0.diff.gz | patch -p0
cd cdrtools-VERSION
make
make install
,,,
You will need the GCC compiler and associated tools to do this; installing the gcc package should pull in everything you need. Now you can create an encrypted CD by adding -encrypt -encpass=ahardtoguesspassword to the cdrecord command. If you are using a GUI CD-burning program, such as K3b, you can add arguments to Cdrecord in the program's preferences. Store the password in a file and use -encpassfile instead of -encpass if you prefer. Keeping the password file on a USB key would improve security. Reading the encrypted CD requires that you have dm-crypt support in your kernel (you almost certainly will have) and the cryptsetup package installed.
Mounting the disc is a two-stage process:
---
cryptsetup -r -c aes -s 256 -h sha256 create ecdrom /dev/cdrom
mount /dev/mapper/ecdrom /mnt/cdrom
,,,
You could put these commands in a script to save typing them every time. If you have your password in a file, add --key-file /path/to/key to the cryptsetup command to save typing in the password. Unmounting follows a similar process:
---
umount /mnt/cdrom
cryptsetup remove ecdrom
,,,
Back to the list

****** Create a boot CD containing Grub to dual-boot Windows and Linux
Q:: I belong to a computer club that is 98% Windows-oriented and I'd like to install Mepis 6.0 on the club's laptop to demonstrate Linux and perhaps persuade some members to try it. Installing Grub on the MBR [master boot record] is not a good idea as the laptop is also used by our members to take home and they wouldn't like the idea of choosing Linux or Windows XP at boot (some are not interested in Linux). How do you create a boot CD to boot Grub and then choose Windows or Linux? Creating a boot floppy is not an option as the laptop has no floppy drive, and using a USB floppy is a problem.
A:: I commend you on your mission to show your fellow club members the joys of Linux through SimplyMepis. Now to your problem. There are two possible solutions to this. The first is to use Smart Boot Manager. This is a bootloader disk that also works from a CD. You'll find an ISO image in the Essentials/SBM directory of the cover DVD. To use this, you must install the bootloader for Mepis into the root partition rather than the MBR; this option is offered during the installation process. When you boot normally, the original Windows bootloader for the MBR will be used and the computer will boot straight into Windows. When you boot from the CD, a menu will appear, from which you can choose the partition to boot - select the Linux root partition here and it should boot.
If the Linux partitions do not appear in the menu, press Ctrl+H to rescan the hard disk - I've needed this with some hardware. The Smart Boot Manager CD is only used to run the bootloader. You can remove it as soon as the Smart Boot Manager menu appears, which means you can also use SBM to boot recalcitrant DVDs and CDs. Another option is to stick with a bootloader on the MBR but hide its menu. To do this with Grub, install Mepis as normal, with the bootloader on the MBR; then boot into it and edit /boot/grub/menu.lst as root. Change the timeout to something short, say 5 (seconds), then add these lines after the timeout:
---
hiddenmenu
default 1
,,,
Grub counts from zero so default 1 makes the second menu entry the default. Now when you boot, users will see a message like 'Press Esc to enter the menu' and a countdown from 5 before Windows boots. Unless they press the Esc key, they will not see any reference to Linux. Let us know how you get on!
Back to the list

****** Block attempts to use Apache as a proxy server
Q:: I'm getting entries like the following in my Apache server log: '"GET http://cn.yahoo.com/ HTTP/1.1" 200 291'. Note the request for a completely different domain to mine and the protocol prepended to it, which would normally be stripped off. What concerns me is that the server is returning a code of 200. Should I be concerned?
A:: Yes, you should be concerned. It appears that someone is attempting to use your server as a web proxy. If you have the mod_proxy module loaded and a ProxyRequests directive in one of your configuration files, Apache's proxy server will be activated. Even if proxying is not activated, you could see a log entry like this; if you are using virtual hosting Apache will normally return the homepage for your default virtual host. You should be able to tell from the IP addresses and frequency of these log entries whether this is a single, misconfigured computer or scripted attempts to find suitable servers to exploit.
If the size of the returned page is always the same, irrespective of the URL requested, Apache is returning a local page - probably an error message, judging by the small size. In this case, you are not acting as a proxy for nefarious activities and the only harm done is the extra load on your server and bandwidth to service these requests. You can disable proxying altogether by using the --disable-proxy option when building Apache, or by ensuring that the -D PROXY option is not used when starting Apache. If you are receiving a large number of these requests from robot scripts, you could look at blocking or dropping these addresses with iptables, which would save the server having to reply to them, even with an error.
Back to the list

****** How to dual-boot Windows XP and Ubuntu
Q:: I currently have Windows XP on my computer but am looking to change over to Linux. Can I load Ubuntu without disrupting my XP? The reason being I have broadband and my ISP doesn't support Linux. If I do load Ubuntu will the partitions it puts disrupt my XP? I have an 80GB drive with at least 40GB available for Ubuntu.
A:: What you are asking for is called dual booting - almost all Linux installers support this. This used to be regarded as a somewhat hazardous process (although I have never had a problem in many, many installations) but the current Linux installers are much better and safer. The Ubuntu installer will offer to resize your Windows partition to create space on your hard disk for Ubuntu. All you need to do is tell it how much space to give to each OS. Fragmentation of the Windows partition affects how well the installer can resize it, so you should defragment the disk from Windows before installing Ubuntu. Simply right-click on the drive in My Computer and select Properties, go to the Tools tab and hit Defragment Now. The installer will also add a new bootloader with a menu that offers you the choice of Linux or Windows each time you boot.
I should warn you that resizing a filesystem is potentially dangerous; for example, a power failure during the process could trash your data. The chances of a problem are minimal, but the consequences could be serious. If you value your data, back it up first. As far as your broadband connection is concerned, it will most likely work on Linux, depending on the type of broadband (cable or ADSL) and the hardware you use to connect. Lack of Linux support from most ISPs is just that: they don't provide support. This does not mean that you cannot use their service with Linux. Provided you have a modem with an Ethernet connection, either for ADSL or cable, you should have no problem getting online with Linux. In most cases you'll find that it is simply a case of configuring your Ethernet connection to set up its address automatically, which is generally the default anyway.
Back to the list

****** Linux equivalent to Windows FinePrint: KPrinter
Q:: I have a very useful utility on Windows called FinePrint, which buffers print requests and enables me to preview them, reorder them, delete pages from them, save them, print them 2-up, 4-up, double-sided, booklet... and so on. I would be lost without it. Is there anything remotely similar available for Linux, which batches print requests and allows them to be manipulated before they are sent to the printer?
A:: Not only is there something like this available for Linux, but you may already have it installed! KDE's print program KPrinter offers much of what you describe. When you print from a KDE program, click on the Properties button in the printer dialog and you'll see options to do things like printing two or four pages per sheet. KPrinter can be used with non-KDE applications - most programs have an option to set the print command, which usually defaults to lp or lpr. Change this to kprinter and all print requests will go through the KDE print system.
If you want some of the other features you mention, you will have to use the command line - the possibilities you mention are all there, and then some, but without a controlling GUI. The best program I have found for this is a2ps, the Any To PostScript filter. This is provided with most distros and may already be installed on your system. As the name implies, a2ps takes data in (almost) any format and outputs it as PostScript, ready for sending to your printer. The filter part of the description is the interesting part, because a2ps does more than translate one file format to another, it also lays it out according to your specification. Running
---
a2ps -4 myfile -d
,,,
will print myfile four pages to a sheet and send the results to the default printer. As a filter, a2ps is ideal for inclusion in a pipeline, taking its input from one program and sending it to another. If you use this as the print command for a program
---
a2ps -=booklet | kghostview -
,,,
it will process the program's output according to the user option booklet and send it to KGhostView. You can then preview the layout before pressing the Print button in KGhostView. User options are a powerful feature of a2ps. Set in the user's config file at ~/.a2ps/a2psrc, they enable you to group a number of settings as a single option, a sort of option macro. You will find full details of this in the a2ps info page - run info a2ps in a terminal or type info:/a2ps into Konqueror's location bar.
Back to the list

****** How to clear the Qmail queue
Q:: My server is running Qmail and I have a lot of failure notice emails in the mail queue. How do I clear the mail queue on my mail server?
A:: To solve this problem you'll need a tool called QmHandle. It can easily be downloaded from http://hurricane.hinasu.net/scripts/qmHandle. This is a modified version of the tool with some extra functionality added. Using QmHandle you can then delete messages based on sender and also on recipient.
Run ./qmHandle to get more information; here's a run-down of the parameters available (taken from the man page):
---
-a: try to send queued messages now (Qmail must be running)
-l: list message queues
-L: list local message queue
-R: list remote message queue
-s: show some statistics
-mN: display message number N
-dN: delete message number N
-Stext: delete all messages that have/contain text as Subject
-Ftext: delete all messages that have/contain text as Sender
-Ttext: delete all messages that have/contain text as Recipient
-D: delete all messages in the queue (local and remote)
-V: print program version

Additional (optional) parameters:
-c: display colored output
-N: list message numbers only (to be used either with -l, -L or -R)
,,,
You can view or delete multiple messages, eg -d123 -d456 -d567. So to answer your question, you would need to run the QmHandle command like this:
---
./qmHandle -S'failure'
,,,
Back to the list

****** Canon i865 printer won't work in Linux
Q:: I print photographs as an amateur and for that purpose purchased a Canon i865 printer. I've had it a year and until now it was more than satisfactory for my needs (as a Windows user). Three months ago I moved my home computer to Kubuntu, which is very user friendly. Now I feel I can try other distros. However, I cannot get my printer to work from Linux. There seem to be no drivers available for this (and many other Canon printers). All my printing is done through my dual-booted Windows. To make the move to Linux complete I need to be able to print. I have tried various methods to print, setting up a generic printer, various Canon drivers that are available etc. Nothing works. I did get two pieces of 'advice' from a forum I tried: changing my printer and buying a driver from a firm called TurboPrint. Are these the only solutions?
A:: Canon printers are notoriously poorly supported in Linux (unlike Canon scanners and cameras - I wouldn't part with either of mine) but there is a driver that is reported to give excellent results with this printer and the good news is that you probably have it installed already. When configuring your printer, select Canon BJC-8200 as the printer type - the BJC-8200 driver works with the Canon i865 up to the printer's maximum resolution. There are actually two drivers for the BJC-8200: one included with CUPS (the standard print system) and the other in the gimp-print package. If you have Gimp-Print (or Gutenprint, as the latest versions are called) installed you will be given a choice of the two drivers; you should try each of them to see which works best for your needs. Installing the printer can be done from your distro's configuration programs, such as Yast in SUSE, or through a web browser. Point your browser to http://localhost:631, click on the Add Printer button and answer the questions. Once you have set up the printer with one of the drivers, you can click on the Printers tab and click on 'Modify Printer' when you wish to try the other driver. TurboPrint is a commercial set of printer drivers that supports more printers than CUPS or Gutenprint (the company is Zedonet GmbH). Being commercial means it can buy developer kits from the printer manufacturers. The quality is excellent and you can download a demo version from www.turboprint.de/english.html. The demo adds a small TurboPrint logo to prints made at the highest quality, but this doesn't stop you from gauging the quality yourself to decide whether it is worth spending some money on the full version.
Back to the list

****** Monitor mailboxes for corruption
Q:: I run a mail server. Can you tell me how I can monitor mailboxes for corruption?
A:: Have a look for this error in the mail log (/var/log/maillog): 'File isn't in mbox format - Couldn't open INBOX'. If you find it, the mailbox is definitely corrupted. To avoid checking mailboxes manually, here's a script you can use:
---
#!/usr/bin/env python
import os, sys, re

mailpath = '/var/mail'
mailboxes = os.listdir(mailpath)
re_valid = re.compile('From\s+[^\s]', re.I)
mailboxes.sort()
for m in mailboxes:
    fn = mailpath + os.sep + m
    if not os.path.isdir(fn):
        f = open(fn, 'r')
        l = f.readline()
        if l:
            if re_valid.match(l):
                continue
        print "Invalid: %s" % m
,,,
Name the script verifymailboxes.bin and run it with python verifymailboxes.bin.
Back to the list

****** Add certificate-based authentication to LAMP server
Q:: I am building a website (LAMP-based) that will provide sensitive information and store sensitive customer data in the database. The site will be restricted to specific IP addresses but I would like to add certificate-based authentication so that every user that is allowed to use the site should have a personal certificate in their browser that would be used in conjunction with their username and password. That way, if someone tried to enter the site from an accepted IP address but did not have the correct username-password-browser certificate combination, they would be rejected. Can you tell if it is possible to do that?
A:: This is certainly possible. Apache can use SSL to authenticate clients with certificates, as well as to authenticate the server to the client. You will want the latter too, as it is important for your users to know they have connected to the correct server before sending sensitive information.
The first step is to put your certificate and its keyfile in Apache's configuration directory, preferably in an ssl subdirectory, and then to add these lines to httpd.conf to activate SSL and give their location:
---
SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP:+eNULL
SSLCertificateFile conf/ssl/myserver.crt
SSLCertificateKeyFile conf/ssl/myserver.key
,,,
Configure Apache to listen on port 443 (or create a virtual host for this and add the above lines to the virtual host's definition), and Apache will now authenticate the server to clients using your certificate. To authenticate each client with the server, add these lines to httpd.conf (or within a <Directory> container in your virtual host's definition):
---
SSLVerifyClient require
SSLVerifyDepth 1
SSLCACertificateFile conf/ssl/myserver.crt
,,,
This will block access to any client that does not have a certificate signed by the server, so you need to create one for each user by running these commands on the server:
---
openssl genrsa -des3 -out username.key 1024
openssl req -new -key username.key -out username.csr
openssl x509 -req -in username.csr -out username.crt -sha1 -CA myserver.crt -CAkey myserver.key -CAcreateserial -days 365
openssl pkcs12 -export -in username.crt -inkey username.key -name "$USER Cert" -out username.p12
openssl pkcs12 -in username.p12 -clcerts -nokeys -info
,,,
The export stage will prompt for an 'export password'. This is needed, along with the username.p12 file, to install the certificate in the user's browser. The last line simply displays the certificate so you can check that all is well. For maximum security, install the certificate yourself; then the user will not be able to copy it to another machine, as they will not know the password.
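Because genrsa -des3 protects the key with a passphrase and openssl req prompts for certificate details, the commands above are interactive. As a rough, non-interactive sketch of the same signing workflow (for testing only - the -nodes flag, the -subj values and the throwaway directory are our own shortcuts, and the -sha1 digest from the original commands is dropped because some modern OpenSSL builds reject SHA1 signatures):

```shell
#!/bin/sh
# Non-interactive sketch of the client-certificate workflow.
# Filenames follow the answer; -nodes and -subj are test shortcuts.
set -e
dir=$(mktemp -d)
cd "$dir"

# Self-signed server certificate, doubling as the signing CA.
openssl req -x509 -newkey rsa:2048 -nodes -keyout myserver.key \
    -out myserver.crt -days 365 -subj "/CN=myserver.example.com"

# Key and certificate signing request for one user.
openssl genrsa -out username.key 2048
openssl req -new -key username.key -out username.csr -subj "/CN=fred"

# Sign the request with the server certificate.
openssl x509 -req -in username.csr -out username.crt \
    -CA myserver.crt -CAkey myserver.key -CAcreateserial -days 365

# The freshly issued certificate should verify against the server cert
# - a useful sanity check before exporting a .p12 for the user.
openssl verify -CAfile myserver.crt username.crt
```

If all is well, the last command reports the certificate as OK; it is worth running before handing a .p12 file to a user.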
Back to the list ****** Install win32codecs on SUSE Q:: When I try to install w32codec on SUSE 10.1, I get messages like
---
'Transaction failed: Package transaction failed: Can not find resolvable w32codec-all 20060611-0.pm.0'
,,,
or
---
'2006-06-03 08:55:00 w32codec-all-20060501-0.pm.0.i586.rpm install failed rpm output: error: unpacking of archive failed on file /usr/lib/codecs: cpio: rename failed - Is a directory'
,,,
What should I do so I can see video files? A:: You don't say where you obtained the w32codec package - it could be that the file you downloaded was corrupt, or that it is not compatible with SUSE 10.1. The safest way to install the Win32 codecs, and any other software, is through Yast. The default Yast setup only includes the installation discs and perhaps an update repository, so the first step is to add extra software sources. Run Yast and select Installation Source in the Software section; click on Add and pick HTTP from the menu that pops up. Now type
---
packman.unixheads.com/suse/10.1
,,,
in the Server Name field, press OK and click on Finish. This adds the Packman repository, which contains such goodies as the Win32 codec files. You can also add the main SUSE repositories for both free and non-free packages (the latter are excluded from OpenSUSE discs) with mirrors.kernel.org/opensuse/distribution/SL-10.1/inst-source and mirrors.kernel.org/opensuse/distribution/SL-10.1/non-oss-inst-source. Now you can go into the installation section of Yast and install w32codec-all. This will enable you to play various video file formats, but you will still be unable to watch copy-protected DVDs - the Xine libraries provided with SUSE OSS (OpenSUSE) do not have support for libdvdcss, needed to decrypt protected DVDs.
As you have added the Packman repository to Yast, an update should take care of this, but you also need to install libdvdcss, so open a terminal as root and type
---
yast --install http://download.videolan.org/pub/libdvdcss/1.2.9/rpm/libdvdcss2-1.2.9-1.i386.rpm
,,,
For more information on extending SUSE OSS 10.1 by adding the missing, but useful, non-free parts, see the Jem Report at www.thejemreport.com/mambo/content/view/254. Back to the list ****** Two Linuxes fighting to boot Q:: I got a new computer with a 40GB hard drive, so I decided to take 13GB and put Linux into it. I did all the partitioning and I installed Linux in three different partitions, as follows: /dev/hda6 /boot 102 MB, /dev/hda7 swap 1977 MB (twice the RAM size I have), and /dev/hda8 / 10080 MB. So far so good. I then decided to put in the hard drive from my old computer, which has a Linux installation already. The size of that hard drive is also 13GB, and it had two partitions in it: the swap, with a size of around 700MB, and the / partition. This hard drive was the slave in the old computer, so I installed it in my new computer as a slave too. The thing is, I want to use the old Linux installation. However, I can't boot into it, because every time I boot my computer, the new installation kicks in. A:: Many BIOSes support booting from a slave disk, and this is the simplest way to switch between the two disks without actually changing anything. You can tell the BIOS to boot from the specific disk you want, rather than from the first one it finds. A fancier approach is to set up your boot loader on the first disk to jump to the second disk when you make a specific selection. With LILO, you can add a section to /etc/lilo.conf as follows, then rerun LILO on the system installed on the first disk:
---
other=/dev/hdb
label=OldLinux
,,,
Depending on which bootloader is installed on the slave disk, it will kick in and you can boot from the disk as if it were the only one on the box.
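If the first disk's bootloader is Grub rather than LILO, the equivalent trick is a chainload entry in /boot/grub/menu.lst. This is a sketch, assuming the old bootloader sits in the MBR of the slave disk, which Grub numbers (hd1):

```
title Old Linux (slave disk)
rootnoverify (hd1)
chainloader +1
```

If the old system's bootloader refuses to run from the second disk, Grub Legacy's map command can swap the drives first: add map (hd0) (hd1) and map (hd1) (hd0) lines above the chainloader line so the old installation believes it is on the first disk.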
As you had two disks in the past, you may have to install LILO on the second disk because it will have been installed on the first disk when you did the initial Linux installation. Back to the list ****** Ubuntu won't write to SD cards Q:: My Evesham Voyager (running XP Pro and Ubuntu 64) will read but not write SD cards whether the slider is locked or unlocked. The card works fine in my brother's Toshiba. I have contacted Evesham and reloaded USB drivers but to no avail. I thought it might be helpful if I told them I'm dual boot and that the same happens in Linux. They now refuse to help, saying they can't support a dual-boot PC, and that I must reformat the hard drive! I only recently reinstalled everything so don't want to do that, and I want to continue with Ubuntu. XP lists generic USB drives CFC, MMC, MSC, but there's no SDC even though the slot is supposed to be 4-in-1. In Ubuntu the Read, Write and Execute permission buttons are all ticked. I thought I'd try HardInfo, but loading fails because glibc is too old. I'm told 'you need at least the following symbols in glibc:GLIBC_2.0' yet I've installed all auto updates. It tells me that upgrading glibc is highly dangerous, that whoever built the package did not build correctly, and that I should report this to the provider and ask them to rebuild using apbuild. Can you help? A:: Right. If this happens in both Windows and Linux, your card reader is almost certainly at fault and you will need to get Evesham to fix it, something that the company should do whichever operating system is installed because this is a hardware fault. If Evesham insists on your removing Linux, you could use Partition Image (www.partimage.org) to back up your Linux partition(s). But if this error only happens in Linux, it is most likely a permissions problem. Even though the directory at which the device is mounted is writable by you, the underlying device may not be. Can you write to the card as root? 
You don't need to log into the desktop as root to do this; assuming the card is mounted at /media/sd, open a terminal and type
---
sudo touch /media/sd/tmp
,,,
If you can write as root, it would appear that the device node for the card is not writable by your normal user. Run mount to see the device name - you'll see something like
---
/dev/sda1 on /media/sd type vfat (rw,noexec,nosuid,nodev,noatime,uid=1000,utf8,shortname=lower)
,,,
at the end of mount's output, showing that the device, in this example, is /dev/sda1. Inspect the permissions on the device node with
---
ls -l /dev/sda1
,,,
You will see something like
---
brw-rw---- 1 root plugdev 8, 1 Oct 23 17:29 /dev/sda1
,,,
This shows that the device is owned by the root user and the plugdev group. The rw-rw---- shows that the user and group can read and write and that others cannot, so you need to ensure that you are a member of the plugdev group. Run id from the terminal to see which groups you belong to, and use the following commands to add yourself to plugdev:
---
sudo gpasswd -a $USER plugdev
newgrp plugdev
,,,
The first command adds you to the plugdev group; the second makes that your current group - otherwise you would have to log out and back in again for the change to take effect. The HardInfo error is odd, because Ubuntu Dapper comes with version 2.3.6 of glibc. This could be an error in the Autopackage build. An older version of HardInfo is in the Ubuntu Universe repository - the latest version, 0.4.1, is in the Ubuntu Edgy repository. Add
---
deb http://archive.ubuntu.com/ubuntu edgy main universe
,,,
to /etc/apt/sources.list and you will be able to install it from Synaptic. We have also included a Deb package of HardInfo on the DVD. Back to the list ****** Restore partition table in Linux after formatting drive Q:: In my frustration at trying to get a new SATA drive to format, I accidentally formatted the wrong drive, which had three partitions (/, /home and swap).
I must admit I was using the Windows XP install disk (last resort, honest!). I managed to press the reset button a few seconds in, after failing to stop it with Esc or Ctrl+Alt+Del. The hard drive is of course unbootable now, but when I load up Knoppix and QtParted it still seems as if the /home partition is there (the desktop icon is present), although the other partitions have bitten the dust (unformatted space). If I try to get the partition (hda3) to mount by double-clicking on its icon on the Knoppix desktop, an error code says something like 'filesystem not defined', which I suppose has something to do with the first chunk of hard drive having been formatted (is that where the 'TOC' info is held?). Can you help? A:: If you know the sizes of the partitions, you can create them again in Cfdisk. As long as you have not created any new filesystems in their place, the filesystems should still be on the disk - you have probably only deleted the partition table. It may take some trial and error to find the correct sizes for each partition, but as long as you mount each one read-only (add -o ro to the mount command) you can't make things worse. It is not surprising that you can no longer boot from the disk, as you have removed the root partition from the partition table, so Grub cannot find its files. There are a couple of utilities for automating the process: Gpart (not to be confused with GParted) and TestDisk. They are both on the Knoppix 5.0.1 CD and DVD. You should be aware that these programs are trying to guess your partition layout from leftover data; the Gpart man page sums it up nicely with, "It should be stressed that Gpart does a very heuristic job, never believe its output without any plausibility checks. It can be easily right in its guesswork but it can also be terribly wrong. You have been warned." Whichever program you try, read the man page thoroughly before you touch a byte of your disk, and be patient.
Both programs take a long time to run, as they are scanning every sector of your hard disk, so an extra few minutes spent reading won't make much difference to the overall time taken, but could have a huge effect on the result. Incidentally, a TOC (table of contents) is used on CD and DVD filesystems. Hard disks have a partition table at the start of the disk, with the directory information contained in the filesystem itself. Back to the list ****** Locked out of Ubuntu: 'session only lasted 10 seconds' Q:: When I try to log into my newly-installed Ubuntu I get a message saying: 'Session only lasted 10 seconds'. When I check the log I see:
---
'Failed to set permission 700 to .gnome2_private'
,,,
I can only log in to a terminal, so how do I fix this? A:: Are you reusing a home directory from another distro? What you describe is a classic symptom of that. Even though you may have used the same username as on your old system when installing Ubuntu, the username and group could have been allocated different numerical values (usually referred to as the UID and GID). The filesystem only stores the numerical values, so these files are no longer owned by your user, hence the error when trying to change the attributes of one of them. Fortunately, the solution is simple and fairly quick. At the terminal prompt, type
---
sudo chown -R fred: ~fred
,,,
This resets everything in Fred's home directory to be owned by Fred. The command needs to be run as root, hence the use of sudo. The trailing colon after the username, which you will obviously replace with your own, is important - it sets the GID to whichever group Fred belongs to, his primary group. Not only is this quicker than running chgrp, it even saves you from having to look up the correct group. Back to the list ****** 3D acceleration not working in SimplyMepis Q:: I am using SimplyMepis 6.0. I have sound and internet, am pleased with apt-get, Beagle and SuperKaramba are working well ...
the only thing missing is 3D acceleration. I installed the ATI drivers and activated them in the Mepis control centre. I also ran aticonfig, but the following is what I get from glxinfo:
---
root@1[philippe]# glxinfo | grep direct
Xlib: extension "XFree86-DRI" missing on display ":0.0".
direct rendering: No
OpenGL renderer string: Mesa GLX Indirect
,,,
I have attached my xorg.conf file. A:: Large configuration files, like the one you attached, can make it difficult to spot problems; a case of not being able to see the wood for the trees. Removing all the commented lines and sections made it easier to check, and showed that you have two Device entries for your graphics card: one using the ATI drivers and one using VESA. This is normal, as are the duplicated Screen entries to go with them - it makes switching between the two setups as easy as changing one line in the ServerLayout section. However, this part of your configuration is definitely broken:
---
Section "ServerLayout"
    Screen 0 "ATIScreen" 0 0
    Screen 0 "aticonfig-Screen[0]" 0 0
    InputDevice "Keyboard0" "CoreKeyboard"
    InputDevice "PS/2 Mouse" "CorePointer"
EndSection
,,,
You have included two definitions for Screen 0 in ServerLayout. It would appear that X.org is using the first one, as this uses the VESA definition, but a quick check of the log file with
---
grep Screen /var/log/Xorg.0.log
,,,
will show you which screen is in use. All you need to do is comment out the incorrect Screen entry in ServerLayout and restart X to get your 3D acceleration working. You could remove the line, but by commenting it out, you can switch back to the VESA driver at any time by moving the comment to the other Screen line. Back to the list ****** Transfer Mepis from secondary drive onto primary drive Q:: I have a two-disk box with hda holding Windows 98 SE, and SUSE 10.1 and Mepis 6.0 on hdb. I have decided to switch my main distro to Mepis.
Everything is now installed on it and I want to ditch SUSE. However, I would really like to transfer Mepis 6.0 on to the primary drive. I am currently booting through SUSE's Grub, installed in the boot section of the hda MBR, so as soon as I delete SUSE I will lose the ability to boot (but could solve this with the Live CD option). Is there an easy way, such as a total disk transfer, to copy Mepis as configured over to hda? I need to keep Win98, unfortunately. A:: Grub is installed in the MBR, so it won't be deleted when you delete SUSE. You will lose the /boot/grub directory, which contains files needed by Grub, but you can replace that with the same from Mepis. The whole process is best done from a Live CD - you don't want to be messing with filesystems that the OS could be changing while you are copying them. Boot from the Mepis Live CD and log in as root, with password 'root'. Assuming that SUSE is on /dev/hda2 and Mepis on /dev/hdb1, do the following:
---
mount /dev/hda2
mount /dev/hdb1
cp -a /mnt/hda2/boot /mnt/hdb1/boot.suse
,,,
This mounts the filesystems and creates a backup copy of the SUSE boot directory, just in case you need it later. Now you can reformat the SUSE partition. You have a choice of filesystems to use here, but if you are unsure use
---
mke2fs -j /dev/hda2
,,,
to create an ext3 filesystem. Now copy everything across with
---
rsync -ax /mnt/hdb1/ /mnt/hda2/
,,,
The trailing slashes are important! This process may take a while. When it has finished you need to edit /mnt/hda2/etc/fstab and change all the hdb references to suit the new locations on hda. You can reuse the SUSE swap partition on /dev/hda. The last step is to ensure your system boots correctly. If there was a /boot/grub directory on your Mepis disk, you need to edit the configuration file in /mnt/hda2/boot/grub/menu.lst to suit the new locations. Grub numbers disks and partitions from zero, so /dev/hda2 is (hd0,1) in Grubspeak.
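The fstab edit just described is mechanical enough to script. Here is a minimal sketch under the same assumption as the answer (everything that lived on /dev/hdb1 is now on /dev/hda2); fix_fstab is a helper name of our own, and you should still check the result by eye before rebooting:

```shell
#!/bin/sh
# Rewrite device references in the copied fstab.
# fix_fstab is a hypothetical helper; the device names are the ones
# used in the answer - adjust both to match your own layout.
fix_fstab() {
    # $1 = path to the fstab to rewrite
    sed -i -e 's|/dev/hdb1|/dev/hda2|g' "$1"
}

# Example (run from the Live CD after the rsync step):
# fix_fstab /mnt/hda2/etc/fstab
```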
You may also need to add a menu entry to boot Windows, which you can copy from /mnt/hda2/boot.suse/grub/menu.lst. If there was no grub directory on your Mepis installation and you were handling everything from SUSE's bootloader, copy the grub directory from boot.suse into boot and edit menu.lst to add a suitable entry, such as
---
title MEPIS at hda2, kernel 2.6.15-26-386
root (hd0,1)
kernel /boot/vmlinuz-2.6.15-26-386 root=/dev/hda2 nomce quiet vga=791
,,,
The name of the kernel file should match whatever is in your boot directory. Finally, run Grub to make sure you are using the correct configuration:
---
grub
root (hd0,1)
setup (hd0)
quit
,,,
Back to the list ****** Windows overwriting Linux bootloader Q:: When I installed SUSE Linux Enterprise Desktop [SLED] 10, I made these partitions on my 28GB hard drive:
---
FAT32, 10GB, /windows/C
Linux, 10GB, /
FAT32, 7GB, /windows/E
swap, 9GB, swap
,,,
Then I installed Windows on C:. But now I have a problem: Linux is not booting. I do not get the menu that asks me to choose between the OSes; Windows starts directly. I should mention that when I was installing Windows it said something like, 'there is an unknown partition, it will be inactive, if you want to activate it do...', but when I went to follow its instructions it couldn't recognise that partition, and now I have Windows with only two partitions, C: and D: (note that D: is the 7GB, not the 10GB). Is there an answer? A:: This is a common problem, caused by the Windows installer's assumption that there are no non-Microsoft operating systems. When you install Windows it overwrites the bootloader with its own, without considering that you may wish to keep it. The good news is that your Linux installation is untouched, including the original bootloader menu and other settings. All you have to do is reset the hard disk's Master Boot Record to use the Grub bootloader that SUSE set up for you.
Boot from the SLED CD/DVD and select the Rescue System option from the menu; this will boot to a login prompt. Type root at the prompt (there's no password needed) and you are in a basic rescue shell. The first step is to determine which is your Linux partition. Run fdisk -l to display a list of partitions. One of them will be marked as Linux - probably /dev/hda2, based on your list of partitions above. You can mount this partition with
---
mount /dev/hda2 /mnt
,,,
Then type the following commands to enter the Grub shell and find the correct partition for the bootloader:
---
grub
find /boot/vmlinuz
,,,
This returns the boot disk in Grub's terminology, probably (hd0,1). Now type the following commands to set up the bootloader again:
---
root (hd0,1)   # the disk label returned above
setup (hd0)
quit
,,,
That's it - you can now reboot with the cryptically-named reboot command. Eject the CD/DVD and you should get your Grub menu back with the same choices as before. Note that if you ever need to reinstall Windows, the same will happen again - with the same solution.
A:: VNC is intended to work like this, running as a separate X session, although it is possible to change that. By running X11vnc, housed at www.karlrunge.com/x11vnc, you can make your existing X display available via VNC. But because you are using Kubuntu, and therefore the KDE desktop, the answer is simpler: use the desktop sharing server and client built into KDE (Krfb and Krdc). On the headless PC, start the KDE Control Centre and go to the Internet & Network > Desktop Sharing section. Next, turn Allow Uninvited Connections on and Confirm Uninvited Connections off. You must set a password to stop unauthorised connections. Blocking port 5900 at your router is also a good idea, unless you want to be able to connect from the internet. Now you can connect with K-Menu > Internet > Krdc Remote Desktop Connection on another computer. The alternative is to connect to the box with SSH and run individual X programs on your local desktop, thus:
---
ssh -X hostname someprogram
,,,
This solution has the possible advantage of the program not appearing on the TV display, which is useful if you want to carry out some administrative task while viewing video output to the TV. Although some films may arguably be improved by the presence of a Konsole window opened on top of them, it is unlikely that family members watching the TV will agree with that. Setting X.org to put display 1 on the card won't help, as the standard VNC server will then start up on display 0.
A:: It is most likely that the password you have given contains unacceptable characters. Resetting the CMOS varies somewhat from one motherboard to the next, but there is generally a motherboard jumper marked something like CLEAR CMOS or RESET RTC. You need to turn off the computer, move this jumper to the reset position - this is the opposite of its current setting - wait a few seconds, then return it to the standard setting. Under no circumstances should you do this with the power on. In fact, you should disconnect the power lead, because the PSU will supply a small amount of power to the motherboard even when the computer is turned off. When you reconnect the power, you should find your BIOS has reverted to the default settings. The procedure can vary between manufacturers; you should only take the above as a guideline, except for the part about disconnecting the power lead - not doing so can wreck your motherboard. Check your motherboard's manual, or search the manufacturer's website for a PDF version if you have no printed manual. It is important to stress that while the basic procedure is the same for all motherboards I've used, the details vary (one I checked required you to remove the motherboard battery too), so you must read the documentation before doing anything.
Ideally, not only would this log display always be available in a window, it would continue to be captured in a file, even if the user were logged off. In addition, I would like a new file to be created every night at midnight, closing the old file. Can you help with the syntax or a workable method of doing this? A:: The first program I would try for this is Minicom. This is a terminal emulator, much like Procomm, and should be in your distribution's software repository. If it isn't, you can get it from the website http://alioth.debian.org/projects/minicom. Minicom has an option to log data to a file as well as display it - so, providing it will communicate with your switch, it should do all you need. Alternatives involve more low-level communication with the serial port. There is Logserial from www.gtlib.cc.gatech.edu/pub/Linux/system/serial, which dumps the data from the specified serial port to stdout or a file; you could use tee to do both. There is also a Perl script available from http://aplawrence.com/Unix/logger.html, although this may require some modification to fit your requirements. To keep the program running all the time, run it in screen. This keeps the program ticking over when the user is logged out, plus you can reconnect to it at any time, even over an SSH connection from elsewhere. Provided you have set the logging and other options in Minicom's configuration, you can start it with
---
screen minicom
,,,
Press Ctrl+A D to exit screen while leaving Minicom running, and type screen -r to reconnect. One thing to watch out for when running Minicom in screen is that both use Ctrl+A as a command key; to get screen to pass Ctrl+A to a program running in it, press Ctrl+A A, so you would use Ctrl+A A Z to show Minicom's help screen. I would use Logrotate to split the log files. This is probably already installed and running, as most distros use it to rotate system log files.
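A Logrotate rule for this might look like the sketch below. The path, the rotation count and the copytruncate option (used because Minicom keeps its log file open, so the file must be truncated in place rather than renamed) are all assumptions to adapt; the helper writes the rule to whatever path you give it, so you can preview it before copying it into /etc/logrotate.d/:

```shell
#!/bin/sh
# Write a sketch Logrotate rule for the serial capture log.
# write_rule is our own helper; /var/log/rop.log is an example path.
write_rule() {
    cat > "$1" <<'EOF'
/var/log/rop.log {
    daily
    rotate 30
    compress
    missingok
    copytruncate
}
EOF
}

# Example: write_rule /etc/logrotate.d/rop   (run as root)
```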
Back to the list ****** How to configure Evolution on Ubuntu Q:: I need help with post-installation configuration of Evolution mail on my Ubuntu system. My home computer runs a dual-boot system with Windows XP on one partition and Ubuntu Linux on the other. I have an Ethernet connection to a broadband hub. Mozilla works beautifully under the Ubuntu OS - indeed, I have successfully used it for upgrades, Google searches etc. So Ubuntu has excellent internet access. Evolution mail, however, needs to be configured properly. A Google search turned up a document about setting up Evolution. At first that seemed to be very helpful, but it required the use of the Tools menu on the menu bar of Evolution - which my version of Evolution does not have. How can I configure my Evolution without a Tools menu? A:: As you have not stated which versions of Evolution and Ubuntu you are using, I am going to assume you are using the latest Evolution, version 2.6. Evolution 2.0 has the Tools link in the menu bar, but Evolution 2.4 and 2.6 don't have it. The first time you start Evolution, you should see the First-Run Assistant. In your case, it seems that didn't work and Evolution has hidden the option from you. To invoke the First-Run Assistant, you should delete (or, more safely, rename) the folder .evolution (note the dot) in your home directory, then type evolution from the command line. This will make Evolution think it is running for the first time and it will show the First-Run Assistant, which you can then use to configure the software. If you don't want to take the command line option, just create a new mail account by clicking on Edit > Preferences, then Mail Accounts > Add. Back to the list ****** Analysing web server log files Q:: I'm trying to set up a server log file analyser. It runs on one machine (10.0.0.14 on my LAN) but needs an entry in its configuration file to point to the access log file. This log is held on my web server (192.168.0.2 on my DMZ).
I know it's a basic question, but what should this entry be? I don't know how to specify that the configuration file needs to look at a file held on another machine. For anybody with a similar system, I'm trying to set up Analog. A:: The simplest solution to sharing a file across a LAN is to use NFS, although you'll need to ensure that you can route between your internal network and your DMZ. You can mount your log file directory from Apache on the system running Analog, then point to the appropriate access_log file. You can add a line to /etc/exports on the web server to permit NFS mounts:
---
/var/log/apache 10.0.0.14(ro,no_root_squash)
,,,
Now run exportfs -r to refresh the exports list. Mounting this directory on your internal system is as easy as mounting a CD:
---
# mount 192.168.0.2:/var/log/apache /var/log/apache
,,,
The other option is to use rsync, which can be configured to sync files across a network without requiring any authentication. This allows it to work very effectively from a cron job script without any user input, and it will transfer all of the changes to the log file each time it's executed. Back to the list ****** New version of Kino won't capture video Q:: A few days ago I installed Kino using Synaptic on my Ubuntu 6.06; I think the Kino version was 0.8 and it worked very well in all respects, including capturing from my camcorder. Today I accepted the offer of an Ubuntu automatic update to Kino 0.9.2, but when I run Kino now it won't capture. Instead it gives the message:
---
'Warning: dv1394 kernel module not loaded or failure to read/write /dev/ieee1394/dv/host0/PAL/in'
,,,
My camera shows up correctly as the capture device and Dvgrab from Kino captures without problem (horrible to use), so surely the 1394 must be working. Any ideas on how I can fix this? A:: Hmmm. Did you update anything else at the same time?
Ubuntu uses udev to manage devices, and at the moment the raw1394 device doesn't play nicely, so it doesn't get created (there is the same problem on Fedora and other distros that now embrace udev more completely). So, that is the likely 'why'. As for what you can do about it, an inelegant hack is to merely create the device node yourself:
---
mknod /dev/raw1394 c 171 0
,,,
That should take care of it. If it doesn't, check that the relevant modules are actually being loaded! Running lsmod should tell you if raw1394 and video1394 are loaded properly. You may need to change the permissions on the device to get Kino to read it properly. As a side point, there is much consternation among the distro packagers regarding Kino's insistence on using the raw1394 device. Apart from everything else, relaxing permissions on this device does raise security issues. At some time in the future, Kino may end up using a more regular device to access DV devices. Back to the list ****** Copyright of images from the internet Q:: If I was to use images from the net in a slide show, what would the rules concerning these be? I can't find any associated copyrights to the images I want to use and nothing is marked down, but I was wondering what the rules are governing the use of images from the internet. A:: In the UK, copyright applies at the moment of creation, and does not need to be explicitly stated. So, for unknown images on the internet, you should assume they are copyrighted unless stated otherwise. However, there are exceptions granted in the UK to copyright, mainly in the case of research or fair usage (eg if you were reviewing a website, it would be considered fair usage to include an image of it). The UK patents site has some useful information on this: www.patent.gov.uk/copy/c-manage/c-useenforce/c-useenforce-use/c-useenforce-use-exception.htm. So under some circumstances you can still use them legally but, ultimately, it is better to ask permission.
If you just need general images to use for your presentation, can I suggest you search for Creative Commons-licensed work? Check out www.creativecommons.org. Back to the list ****** SSH directly into system in a DMZ Q:: I have set up a web server (SUSE 10.0) running inside a virtual machine that is hosted on my SUSE 10.0 box. I have configured it to be in the DMZ of my router (also SUSE 10.0). Web traffic is correctly routed to the box; however, I cannot seem to access it from the internal network on any port. I would like to be able to ssh directly into the box from within the internal network. The firewall on the router (192.168.0.9) was configured using Yast and maps its external port 80 to the web server (192.168.1.2). I tried mapping the internal (192.168.0) port 80 to the web server but this doesn't seem to work. Is it possible to do this with the Yast tool? If not, is there any easy way to convert the existing Yast setup into an iptables script, where this should be easy to achieve? Hope you can help... A:: I would highly recommend that you install IPCop, a specialist Linux firewall distribution, instead of using SUSE 10.0 for the router. I used IPCop for a long time before I switched to Cisco PIX firewalls, and it only takes a few minutes to install. IPCop uses colour-coded interfaces, including a protected internal 'Green' network and an 'Orange' DMZ, and this makes it relatively easy to join two networks like yours together. There is an excellent HOWTO about IPCop at http://howtoforge.net/perfect_linux_firewall_ipcop_p2, and the project homepage is located at www.ipcop.org. Back to the list ****** Ubuntu screen resolution not correct - can't see buttons Q:: I am using Ubuntu 6.06 on a Compaq Presario SR1720NX and am very new to this. When I try to add an Epson Stylus CX4800 printer using Gnome CUPS, the bottom of the screen, which should show Cancel, Back and Forward or Apply buttons, is missing. 
I can use the Enter key instead of Forward, but on screen 3 I can't find a way to activate the Apply button. A:: This would appear to be a problem with your screen resolution. If Ubuntu's installer was unable to get accurate information about your graphics card and monitor, it would have defaulted to a safe 640x480 resolution. This is too small to display the full Add Printer window. A quick fix is to hold down the Alt key, click in the middle of the window and drag it upwards to expose the buttons. Alt+clicking means you can drag from any part of the window, so you can move it upwards even if that means moving the titlebar off the screen. This will allow you to add your printer, but does not fix the cause. To change the screen resolution to something more suitable, use Preferences > Screen Resolution from the System menu. This should offer all the resolutions that are suitable for your combination of graphics hardware and display. If only 640x480 is offered, your hardware was not identified during installation. The Device Manager, from the System > Administration menu, will show you if your graphics card was identified correctly - it should be an ATI Radeon XPress 200 IGP on your computer. To change the settings for graphics card or monitor, you should run dexconf to probe the hardware and write a configuration file. It is wise to back up the existing configuration file first, so run this in a terminal: ---
cp /etc/X11/xorg.conf ~
sudo dexconf
,,, This will generate a new configuration file in /etc/X11/xorg.conf after making a copy of the original in your home directory. Once you have done this, you will need to restart the X server. This can be done from the command line, but as you're a new user, restarting the computer is probably the easiest way to do it. If 640x480 is still the only resolution available to you, you will need to edit the xorg.conf file. Without seeing your existing configuration, it is impossible to say what needs changing. 
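As a general pointer, the resolutions X is allowed to use are listed on the Modes lines in the Screen section of /etc/X11/xorg.conf. A typical fragment looks something like this (the Identifier, Device and Monitor names here are hypothetical - yours will differ):
---
Section "Screen"
    Identifier "Default Screen"
    Device "Configured Video Device"
    Monitor "Configured Monitor"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        Modes "1024x768" "800x600" "640x480"
    EndSubSection
EndSection
,,,
X tries the Modes in order, so the usual arrangement is to put your preferred resolution first and leave 640x480 as a fallback.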
If you get this far and still cannot get past 640x480, I recommend you ask on our Help forum at www.linuxformat.co.uk, including the contents of /etc/X11/xorg.conf, the output from running lspci -v and details of your monitor. Back to the list ****** Specify Cron job times in SUSE 9.3 Q:: I am having some minor trouble with Cron jobs on my SUSE 9.3 system. Placing scripts in the cron.hourly, cron.daily and cron.weekly folders works fine, but how do I control when the files in those directories are executed? Can I set whether weekly jobs are done every Sunday or Friday and whether daily jobs are done at noon or midnight? I have tried to track down the way this works but it isn't clear. As best I can tell, there is only one Cron job scheduled in the crontabs file that runs every few minutes for all of the folder-based Cron jobs. That Cron job seems to call a script that looks in all of the cron.* directories, keeps track of successes and failures, and somehow keeps track of which jobs need to be done when. A:: SUSE 9.3 does this slightly differently from some other distros. Instead of running the contents of these directories at a specific time, it runs them according to when they were last run. You've got most of the way to discovering this yourself: the single line in /etc/crontab calls the run-crons script every 15 minutes. This looks for marker files associated with each of the /etc/cron.* directories in /var/spool/cron/lastrun. If the marker file is more than an hour/day/week old, it runs the scripts in the directory and updates the timestamp on the marker. If no marker file is present, it runs the scripts and then creates one. This all means that instead of, say, running the daily scripts at 4:30 every morning when system load is low, it runs them a day after they were last run. You can force a particular time by altering the timestamp on the files in /var/spool/cron/lastrun. 
This script will change the timestamp of each of the monthly, weekly and daily files to 4:30 am while leaving the date unchanged (otherwise you'd never run the weekly or monthly scripts). ---
#!/bin/sh
cd /var/spool/cron/lastrun
for i in daily weekly monthly
do
    if [ -f cron.$i ]
    then
        touch -t $(date -r cron.$i +%Y%m%d)0430 cron.$i
    fi
done
,,, This uses the date command to extract the file's current date in YYYYMMDD format, adds the time you want (0430) and passes this to the touch command to update the file's timestamp. You can change the day for weekly scripts in a similar way. While this will switch the runtimes to a time of day more suited to you, bear in mind that any delay in running the scripts later, such as the computer being turned off at 4:30, will set the runtime to whenever the scripts were run. You could automate this by setting up a separate task in /etc/crontab to run this script at 0400 every day. Back to the list ****** Samsung ML-1210 printer not working in Gentoo Q:: I'm getting on surprisingly well with the installation I made of Gentoo, but I can't get my USB printer to print. I've gone through the Gentoo Printing Guide and other USB documentation most carefully. I've checked (and triple-checked) my kernel config options and I'm sure I've included everything I need. I've tried compiling with the USB parameters compiled into the kernel or as loadable modules and neither works. Neither does genkernel, so I don't believe it's a kernel issue. The printer is a Samsung ML-1210. It's a discontinued host-based printer, but it serves my needs adequately and has always worked fine with Linux. And it prints fine from Ubuntu Edgy from another partition on the same machine using the same USB port, so neither CUPS per se nor the hardware is the problem. If I open the Gnome Print Manager app, the printer is autodetected and the wizard offers me the same CUPS driver as other distros, but when I go to print a test page, nothing comes out the other end. 
The same happens when I use OpenOffice.org. OOo seems to think it has printed a document, but nothing appears. Doing lsusb shows: --- 'Bus 002 Device 003: ID 04e8:300c Samsung Electronics Co., Ltd ML-1210 Printer'. ,,, I checked /var/log/cups/error_log, and it showed nothing untoward that I can see. A:: The first thing to do when encountering CUPS problems is to turn up the logging level. Edit /etc/cups/cupsd.conf by changing LogLevel from 'info' to 'debug'; then restart CUPS. In this case, there is a clue in the logs you supplied. You are using GPL Ghostscript, which doesn't properly support the binary drivers needed by a GDI printer (aka WinPrinter) like your Samsung. So unmerge ghostscript-gpl and emerge ghostscript-esp, which has better printer support, like this: ---
emerge --unmerge ghostscript-gpl
emerge --oneshot ghostscript-esp
,,, It is also probable you need the openslp package, even though this is supposed to be an optional dependency of CUPS. SLP (Service Location Protocol) is useful for other programs too, so add it to your USE flags in /etc/make.conf. It is also worth adding foomaticdb, which doesn't affect CUPS directly but increases the level of printer support for some programs. Now rebuild any packages that make use of your changed flags, including CUPS, with --- emerge --newuse --deep --verbose --ask world ,,, This will display a list of packages that will be updated or installed thanks to your changed USE flags, which should include CUPS and OpenSLP. Press Enter to install them and restart CUPS when it has finished. USE flags are an important part of Gentoo and they are all described in /usr/portage/profiles/use.desc and /usr/portage/profiles/use.local.desc. Or you may find it easier to emerge profuse and search, browse or set them in a GUI. Back to the list ****** Get Belkin WiFi card working with rt2500 driver Q:: I was wondering if you could help with a Wi-Fi/NdisWrapper problem. 
I'm trying to get a Belkin card to work under NdisWrapper using the rt2500 Windows XP driver. The online instructions are great and I detected the card, worked out what driver I needed and so on. I've installed NdisWrapper, installed the XP driver and when I type ndiswrapper -l it shows the driver installed and hardware present. I then ran modprobe to load NdisWrapper into the kernel, configured the wireless LAN settings and it all worked fine. When I rebooted, of course, it forgot everything and I now can't get it to work. NdisWrapper still shows the driver installed and hardware present, but the lights are off on the card and when I try to configure it and get an IP using DHCP it says 'no link present check cable'. I've rerun modprobe ndiswrapper, and had the card out and back in again, but the card still doesn't light up. A:: This sort of problem is not uncommon with NdisWrapper, but it should not affect you. There is no need to use NdisWrapper with an rt2500 wireless card, because it should only be used when there is no Linux driver for the card (running Windows code as root is not something you should do if you can avoid it). Linux kernel drivers for the rt2500 chipset are available from http://rt2x00.serialmonkey.com and http://sourceforge.net/projects/rt2400. Don't worry about the 2400 in the name - the same project produces drivers for the rt2400 (802.11b) and rt2500 (802.11g) chipsets. These are semi-official drivers in that they are based on the original closed source drivers from Ralink, which the company was subsequently encouraged to release under the GPL. As well as the drivers themselves, the project includes a GUI for wireless scanning and configuration. Some distros, such as Debian, include the drivers in their repositories, while with others you need to build from source. Without knowing your distro it is hard to give specific installation advice, but if you want to install from source, you will need the kernel sources installed. 
These are usually in a package called something like kernel-sources, linux-sources or kernel-devel. Make sure you install the package with the same version as your running kernel. As with all external kernel modules, if you ever upgrade your kernel you will need to reinstall the module. Because you may not have internet access until you do, I'd advise you to keep a copy of the source tarball or installation package somewhere safe. If you insist on using NdisWrapper, it looks like you need to run ndiswrapper -m to set up an alias for wlan0 in the NdisWrapper configuration. This forces NdisWrapper to load the module and driver. Back to the list ****** Remove menu clutter after installing multiple distros Q:: After installing various distros I have a cluttered /home/username folder and a menu full of unusable program entries. How can I delete these dead entries and sort the files into folders by extension, for example putting all GIF, PNG and JPEG files into one directory? I would also need to deal with duplicate files. This is in preparation for reinstalling Ubuntu. A:: I would rename the /home/username folder before installation, then only copy over the files you need. This is probably easier than trying to clean out the detritus from a live home directory. The best program I have found for identifying and removing redundant files is Kleansweep, from http://linux.bydg.org/~yogin. Finding duplicate files is best done with Fdupes from http://netdial.caribe.net/~adrian2/fdupes.html. Use it like this: ---
fdupes --recurse ~
fdupes --recurse --omitfirst ~ | xargs rm
,,, The first line will show all duplicate files; the second will remove all but the first occurrence of each file - use this with care (and note that xargs will mishandle filenames containing spaces). Sorting files by name is best done with the find command. 
You can move the files you mention with ---
mkdir pics
find ~ \( -iname '*.jpg' -o -iname '*.png' -o -iname '*.gif' \) -exec mv "{}" pics ';'
,,, The escaped parentheses group the three -iname tests so that -exec applies to all of them, not just the last. Back to the list ****** USB flash drive not working on some hardware Q:: I have a 2GB USB stick on which I have installed Slax Popcorn Edition. I can easily boot my computer from the stick and save all my changes to the system. Once in a while I run into a system where I would really need to boot it from the USB stick but rebooting is not possible, and because the host system is configured 'tight' I can't use Qemu. I have been trying to find the solution to this problem. I tried VMplayer, Qemu and Moka to no avail: there is always something missing. My ultimate solution would be VMplayer installed in the USB stick with the OS image, but I haven't found a way to do this. Is there a solution in the market that would allow me to run my own OS from the USB stick regardless of the host machine? A:: There are a number of reasons why you may not be able to boot a USB Flash device on some hardware. Some computers are incapable of booting from USB devices, although these are thankfully few now. Another scenario, which you seem to be experiencing, is that the owner of the computer has configured the BIOS to not boot from USB. If this is the case, trying to circumvent such restrictions is usually wrong and often illegal, unless you have the owner's permission. If you do have the go-ahead, you can often use a bootable CD to start the boot before passing control to the USB device; the Slax website (www.slax.org) contains just such a CD image. Another obstacle to booting from USB devices is that there are at least three ways of doing this. The device can be set up to boot as if it were a floppy disk, Zip disk or hard disk; the Slax USB installer appears to use the first option. Not all BIOSes can boot all three types, so you may need more than one USB stick. 
Damn Small Linux has a USB installer capable of creating either a USB-ZIP or USB-HDD-style bootable device, so it may be worth investigating. My laptop will not boot the Slax image but will boot a DSL installation on the same USB key. Some computers will not boot a USB Flash device from a partition larger than 256MB, so you should partition your drive with a 256MB partition for the OS and the rest for your data. Your VMware solution is ingenious, as it removes the need to reboot, but VMplayer needs files to be installed on the host operating system. Moka would appear to avoid that need, but it works by temporarily installing files to the host Windows system, so needs to be run as an administrator. If the configuration of the computer is stopping you from booting a USB device, you should accept that or ask the owner to change it. If it is the way the computer boots from USB devices that is causing your problem, try a different distro. Mandriva has just announced Mandriva Flash, a complete desktop on a 2GB USB key. I haven't tried it yet - but you can find more information at www.mandriva.com/linux/2007/flash. Back to the list ****** Mepis OnTheGo disk selection dialog is blank Q:: I am trying to create an 'OnTheGo' disk from the Live distro version of SimplyMepis 6.0, but the disk selection box remains blank with no options offered. I have tried: Booting with the USB Flash drive in place, then mounting it. Inserting it after the computer has booted, then mounting it. Logging on as both 'demo' and 'root'. Both an Advent 2GB and a Huke 512MB USB2 drive. I know that the drive has been successfully mounted because I am able to save files to it - I have dragged and dropped the selection of background pictures supplied, and they are still there after a hard reboot. My computer is about six years old; it's a Pentium 3 with Windows 98SE installed and a USB2 PCI card as an upgrade. 
My only experience with Linux is with the Live distros on magazine coverdiscs over the past few months. As a Linux newbie I am at a loss as to what else to try. A:: You have to be logged in as root to set up OnTheGo, and the USB device must not be mounted. After logging in as root, plug in the device. If the KDE dialog pops up asking you what you want to do, select Do Nothing. If the disc automounts, use KwikDisk from the Kicker panel to unmount it or type umount /dev/sda1 in a terminal. Do not use the Safely Remove option from the disc's icon as this also removes the device's node in /dev, rendering it unavailable to the installer. Now run Mepis Utilities - select the option to create an OnTheGo disc and your drive should be available, most likely as sda. Once the process is complete, remove the USB disc (there's no need to unmount it) and select Log Out from the K menu, followed by End Current Session. When the login screen appears, plug in the USB disc, wait ten seconds for it to be detected and log in with a username and password of 'onthego'. If you created OnTheGo with encryption, you will be asked for the encryption password later. The OnTheGo disc only contains your personal data, which can be encrypted; you still need to boot from the Mepis CD. On the other hand, you won't run into any of the problems booting from a USB device mentioned in Linux On A Stick, and you can copy the .onthego.iso file to a different USB disc if you wish. Back to the list ****** Linux across two hard drives Q:: I can dual boot my Linux/XP PC on my 20GB disk. I just added a separate 8GB disk drive to my machine. Now I've got two separate drives: 20GB and 8GB. I want to put Linux onto the 8GB one. I was just wondering if the process is still the same, given that I've just installed an extra drive. A:: Linux can be installed on the second disk simply by partitioning it and doing an install. 
However, you'll want to install the boot loader for it onto the first disk because this is what the BIOS will try to boot from. You can then dual boot between Windows XP on the first disk and Linux on the second. Most distributions will enable you to install Grub or LILO onto the first disk during the installation process, as well as adding entries so you can boot Windows XP or your old Linux install on the first disk. Back to the list ****** Best Linux distribution for Moodle Q:: I am looking at implementing Moodle as a course management system (ultimately with a web hosting service that already has Linux, Apache and MySQL). But is there a version of Linux that is best to start with? Moodle permits a Windows install but I think it is best to go all the way and do it right. A:: Is there a version of Linux best to start with? I guess it depends on your preferences. As a Debian user, I would say try Debian, as it's very stable and easy to install. If you are a beginner I would opt for Ubuntu (which is based on Debian). The latest version is Ubuntu 6.06 LTS Server. I have experience with Debian and I can say it would take you 30 minutes maximum to install a Debian server from the moment you insert the CD and boot from it. Debian's package administration is extremely simple to use, via the dselect command. Back to the list ****** Remove entire line containing string in a file Q:: I need to grep for a particular 'string' in a file and remove the entire line where the occurrence of the string is found. I want it to work across a collection of files. Can you help? A:: It is possible to use grep for this: grep -v string file will output all lines that do not contain the string. But sed is a more suitable tool for batch editing. --- sed --in-place '/some string/d' myfile ,,, will delete all lines containing 'some string'. To process a collection of files, you need to use a for loop (or find) because sed's --in-place option only works on single files. 
One of these commands will do it: ---
for f in *.txt; do sed --in-place '/some string/d' "$f"; done
find -name '*.txt' -exec sed --in-place=.bak '/some string/d' "{}" ';'
,,, Adding =.bak in the latter example makes sed save a backup of the original file before modifying it. Back to the list ****** Start VNC server on boot in Slackware Q:: I connect to my home server using VNC (not over SSH yet!). However, it doesn't bring up my 'start' bar on KDE and I automatically log in as the person who started the VNC server (not tested with root!). I would like my system (Slackware 10.2) to start VNC on boot so I can VNC to the XDM/KDE login screen. My init is currently set to level 4. Any ideas, hints or advice on better software? My server doesn't have a monitor. A:: Here's what you need to do to configure a VNC server. Note: the VNC server must be running, and it must be configured to run your preferred window manager. You can do this by editing the file $HOME/.vnc/xstartup to call your preferred window manager. Use startkde & for KDE, gnome-session & for Gnome or fvwm2 & for Fvwm2. Also, make sure you have run vncpasswd to create the password file in $HOME/.vnc/passwd. Red Hat provides an easy way to start up the VNC desktop at boot time (on Slackware, you can call the same sort of startup script from /etc/rc.d/rc.local). Use linuxconf to set the vncserver boot script (in /etc/init.d/vncserver) to come up at boot. The default boot script, however, doesn't quite give the flexibility that I'd prefer. Edit /etc/init.d/vncserver, looking for the line that says ---
su - ${display##*:} -c \"cd && [ -f .vnc/passwd ] && vncserver :${display%%:*}\"
,,, Change it to look like this: ---
su - ${display##*:} -c \"cd && [ -f .vnc/passwd ] && vncserver ${ARGS} :${display%%:*}\"
,,, Then edit /etc/sysconfig/vncservers to this: ---
# The VNCSERVERS variable is a list of
# display:user pairs.
# Uncomment the line below to start a VNC
# server on display :1 as user 'myusername'
# (adjust this to your own). 
# You will also need to set a VNC password;
# run 'man vncpasswd' to see how to do that.
# DO NOT RUN THIS SERVICE if your local
# area network is untrusted! For a secure
# way of using VNC, see
# <URL:http://www.uk.research.att.com/vnc/sshvnc.html>.
VNCSERVERS="1:jdimpson"
ARGS="-geometry 1024x768 -alwaysshared"
,,, Change the value 1024x768 in ARGS to represent the size of your actual X desktop. Add any other VNC server arguments that you wish to this ARGS variable. Also change jdimpson in VNCSERVERS to whatever user you wish to run the VNC desktop. The value 1 in VNCSERVERS makes the VNC server run as display 1. You can have additional desktops come up like this: --- VNCSERVERS="1:jdimpson 2:phred 3:sysadmin" ,,, On a Red Hat system, make sure the VNC server is running by executing this: --- /etc/init.d/vncserver start ,,, At this point, you can connect to the VNC desktop using any VNC client. Back to the list ****** Mandriva: no 3D effects with Nvidia GeForce 6800GT card Q:: I have successfully installed Mandriva Linux 2007 and am trying to enable 3D desktop effects. When I click on the 3D icon under Configure Your Computer/Hardware, everything is greyed out, with a message at the top saying 'Your System does not support 3D desktop effects'. I have an Nvidia GeForce 6800GT, which ran perfectly with Mandriva 2006. What can I do to get the 3D desktop working? A:: The most likely cause of this is that you are using the free nv driver for your graphics card. This driver does not support any sort of 3D acceleration - you need Nvidia's own drivers for that. These can be downloaded from www.nvidia.com as a single file that you run to install them. However, you will need several other packages installed before you can do this. At the very least you will need the kernel sources to match your running kernel. Mandriva no longer includes these on its DVDs, so you will need to add Mandriva's online repository to the Mandriva Control Center before you can install it. 
You may also need a compiler installed. The Nvidia installer comes with precompiled modules for a few kernel variants, but compiles them on the fly for others. Once the drivers are installed, you will have to edit your X configuration to use the new drivers. The Nvidia installer requires that you do all of this without X running, working entirely from a virtual console. Fortunately, there is a much easier way. The Penguin Liberation Front (PLF) is the "official unofficial" repository for Mandriva, containing a number of non-free (as in speech) packages and others that cannot be included in the main distro because of legal complications, such as libdvdcss, needed to watch encrypted DVDs. The first step to easy Mandriva software installation is to add this and the official Mandriva repositories to your system. Go to http://easyurpmi.zarb.org and select suitable mirrors for the Mandriva and PLF sources; those closest to you are usually best. Click on Proceed and it will display a screed of text for you to run in a terminal, but even that is easy. Open a terminal from the Mandriva menu with System > Terminals > Konsole and type su to become root, then drag your mouse over the text in the browser so that all of the text in the box, and nothing else, is highlighted. Now place the mouse over the terminal window, press its middle button to paste in the highlighted text and press Enter. You'll need to be online to do this and it will take a few minutes as it downloads lists of available packages. Now fire up the Mandriva Control Center (System > Configuration > Configure Your Computer), go to the software section and type 'nvidia' in the search box. Select the package (it is currently nvidia-8774-4plf but the numbers may change as the 9000 drivers could be out by the time you read this), and click Apply. If any other packages are needed, they will be installed automatically - you only need to select the one package. 
Finally, go into the Hardware > Graphical Server section of the Control Center and select the Nvidia option for your graphics card. When you reboot you will be using Nvidia's drivers in all their 3D glory and you will be able to set up the 3D desktop effects. Have fun! Back to the list ****** Incorrect keyboard layout/mapping in SUSE 10.1 Q:: I have had a hopefully minor glitch in installing SUSE 10.1. When I key in the at sign I get ". Similarly, double quotes gives @. I definitely selected English and UK in the setup as this was shown in the confirmation before the installation panel. Hopefully there is a way to correct this mishmash without reinstalling. Can you help? A:: You will have to change the keyboard mapping. To do this, you need to edit the file /etc/X11/xorg.conf. Make sure you do take a backup of that file, in case you delete or edit the wrong line. This needs to be done logged in as root (su -) - just open up a terminal and navigate to the folder /etc/X11. To back up the file it's as easy as doing this: --- cp xorg.conf xorg.conf-back ,,, Edit the file using your favourite editor. Look for the line Option "XkbLayout" "whatever", change "whatever" to "gb", save the file and restart your workstation (shutdown -r now). Back to the list ****** Remove Zen-updater icon from appearing on login Q:: This may sound a little strange, but I want to stop the little Zen-updater icon from appearing when a user logs in. Can you tell me how I should go about this? The reason why I want to disable it is that when a domain user logs in, Zen crashes, giving a lovely exception message. I am assuming this is because the domain users do not have a 'local' user ID and cannot be looked up by the system. Also, it is not required for users to be able to perform system updates. A:: The fix is very simple. There is a file called zen-updater-auto.desktop in the folder /etc/xdg/autostart. 
You would need to edit that file with your favourite editor (Vi, Pico or whatever) and comment out the line 'Icon=zen-icon'. You might then need to restart the Zen-updater application. Back to the list ****** Mount a SATA disk in OpenSUSE 10.2 Q:: Please help me! I have a problem. My motherboard is an ASUS P4S800D with an SIS655FX chipset. I have two hard disks: the first is an IDE disk with OpenSUSE 10.2; the second is a SATA disk with Windows. The installer of SUSE 10.2 detects only the IDE disk. How can I set up and mount the SATA disk in OpenSUSE 10.2? On the official SIS site I found a driver, but I get a make error because it can't find scsi_request.h. Is this being caused by a problem in the kernel? A:: SATA still continues to be a problem for many people. In our experience, the easiest way to fix the problem is to switch the drives into compatibility mode using your BIOS, then complete the installation and try switching it back. Many distros struggle to get installed on normal SATA drives, but then work just fine once they're installed - particularly after you've installed all the latest patches. You should also check to make sure you're not using software RAID, because that can also cause problems. As a last resort, try adding insmod=ide-generic to the installation boot options box. Good luck! 
Convert looks promising, but I just get errors about mpeg2encode with it. I'm using Ubuntu Dapper. Thanks in advance for any pointers. A:: DVD would be better than video CD. Not only can you fit a lot more photos on one disc, but the quality is much higher. The main part of the process is much the same whether you're making DVDs or video CDs, although most of the tools are set up for DVD creation, so will need some tweaking to create video CDs. The most straightforward way to put a slide show on to a disc is to use the slide show plugin in DigiKam or KPhotoAlbum (both programs use the same plugin) to create a DVD slide show from an album or selected photos. These are quite limited, as you can only adjust the length of time that an image appears for and the length of its fade - and these have to be the same for all images. If you want more control, DVD-Slideshow (its homepage is at http://dvd-slideshow.sourceforge.net) is a better choice. This is a set of scripts to generate DVDs from images and sound. The main script, dvd-slideshow, uses a text file listing all images and effects to create a DVD VOB file. Use dir2slideshow to generate a DVD-Slideshow input file, which you can pass directly to dvd-slideshow or edit to change the timings or effects. Then use dvd-slideshow to create the slideshow and add music. You can use MPlayer to view the resulting VOB file to check it before putting it on to a disc. Finally, dvd-menu will create (no surprises here) a DVD menu for one or more slide shows, and you have the option of calling dvdauthor to write everything to an ISO image ready for writing to a DVD. 
Assuming you have a directory called pics that you want to make into a slide show, the commands are as follows:
---
mkdir slideshow
dir2slideshow -o slideshow -t 5 -c 1 -n myslideshow pics
# edit myslideshow.txt if you want to change timings or effects
dvd-slideshow -a somemusic.ogg myslideshow.txt
dvd-menu -t "My slide show" -f myslideshow.xml -iso
,,,
This creates a slide show with each image shown for five seconds with a one-second fade, and writes it to an ISO file ready for burning to a DVD. It is also possible to generate a DVD with a single slide show that plays immediately, without going through a menu. The programs default to NTSC output; for a PAL DVD you should add the -p option to each command or put --- pal=1 ,,, in ~/.dvd-slideshowrc. If you want to create a video CD-compatible MPEG, you can use FFmpeg to transcode the VOB file that you created with dvd-slideshow, like this: --- ffmpeg -target pal-vcd -i dvdslide.vob vcdslide.mpg ,,, Back to the list ****** Create symbolic link (symlink) to fix OpenOffice.org installation Q:: I need to link one directory to another so that if a program asks for directory x, it is shown directory y instead. I've tried ln with various options, but it just keeps creating the link inside the target directory. The reason for doing this is that I've just updated from OpenOffice.org 2.0 to OOo 2.1, which has created a new directory called /opt/openoffice.org2.1. When I click on a text document or spreadsheet in KDE it tries to look inside /opt/openoffice.org2.0, which no longer exists. If I cd into /opt and do --- ln -s openoffice.org2.0 openoffice.org2.1 ,,, it creates the openoffice.org2.0 symlink inside the 2.1 directory. I've tried everything but just cannot get it to work! A:: There are two problems with the way you are using ln. The first is that the syntax is 'ln -s source destination'. This one got me too: for some time I had to think twice, having first used links on an OS that used the opposite order.
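A quick way to see the ordering in action is with a couple of throwaway directories (the paths under /tmp are purely illustrative):

```shell
# 'ln -s source destination': the target comes first, the link name second.
rm -rf /tmp/lntest && mkdir -p /tmp/lntest/openoffice.org2.1
cd /tmp/lntest
ln -s openoffice.org2.1 openoffice.org2.0   # link named ...2.0 pointing at ...2.1
ls -l openoffice.org2.0                     # shows: openoffice.org2.0 -> openoffice.org2.1
```

Swap the two arguments and, because openoffice.org2.1 is an existing directory, ln would instead create the link inside it - exactly the behaviour described in the question.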
The arguments should go in the same order as they do when you're using cp and mv: I find that helps me remember. The other problem is that if the destination given is an existing directory, ln thinks that you want to create the link inside that directory. This is also consistent with cp and mv, which copy or move into a directory if it is given as the destination. Remove the destination directory and ln will create the link as you need. --- ln -s openoffice.org2.1 /opt/openoffice.org2.0 ,,, Note that with symlinks, the source is given relative to the destination, so even though this command is not executed in the /opt directory - and therefore no file or directory called openoffice.org2.1 exists in the current directory - the ln command will still work. Alternatively, you could go into the file associations section of the KDE Control Centre and fix it to call the ooffice2 programs with the correct path. Back to the list ****** Run audio encoding tasks in serial or in parallel on AMD64? Q:: I'm running Ubuntu 6.10 64-bit on AMD64 and I do a ton of audio encoding. I set up a small test to see what was more effective: encoding four directories' worth of FLAC files (four files to each directory, all the same size) to OGG in serial or in parallel. I wrote two Bash scripts to attempt to measure the performance. The first script takes around nine minutes to execute (just over two minutes per directory) while the second script also takes roughly nine minutes, even though each folder contains nine minutes' worth of encoding. I'm sure that there's a point at which running all of the tasks in parallel runs slower than running them one at a time. Watching the output from top shows four instances of flac running, each taking approximately 20% of the CPU's capacity when running in parallel. While running in serial, a single flac process uses much more CPU power. Are there any benchmarks or guidelines to follow?
Without further testing I'm left wondering whether I could be saving a lot of my time one way or the other when I need to encode tons of files. A:: There is some overhead in running tasks in parallel, because of the extra task switching and memory management involved, but this is insignificant for small numbers of tasks. Had you tried to run 20 or 30 encoding processes in parallel you would have noticed a reduction in speed, especially if you started to use swap space. Encoding files from hard disk to hard disk places a heavy load on the CPU and memory while demanding little of your disks - this is what the techies call a 'compute-bound' or 'CPU-bound' task. On the other hand, ripping data from a CD or DVD is largely dependent on the speed of the transfer while asking little of the CPU - this is called 'IO-bound'. So running two CPU-bound, or two IO-bound, processes in parallel is likely to have little benefit over running them in serial, but running one of each in parallel will give a large speed benefit. If the audio that you're encoding is coming from optical discs, or any other source that gives relatively slow transfer speeds, you will see a great improvement by running the processes in parallel, as in the following:
Rip track 1
Encode track 1 in the background
Rip track 2
There are a number of CD ripper/encoders that do just this, including my favourites: Grip (www.nostatic.org/grip) for GUI operation; and Abcde (www.hispalinux.es/~data/abcde.php) for console use. If your audio files are already on your hard disk, you may as well keep the number of encoding processes low, but be sure to use at least two - a single process will always be subject to interruption. The only really useful benchmark is one that closely mirrors your own usage, which normally means running your own tasks and timing them, as you have already done.
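The rip-one-track-while-encoding-the-last pattern that these rippers use can be sketched in a few lines of shell. The two functions here are placeholders for your real ripper and encoder (say, cdparanoia and oggenc), so the sketch runs as-is without either installed:

```shell
#!/bin/sh
# Overlap an IO-bound rip with a CPU-bound encode.
rip_track()    { echo "ripped track $1"; }    # stand-in for the ripper
encode_track() { echo "encoded track $1"; }   # stand-in for the encoder

for n in 1 2 3; do
    rip_track "$n"        # a track must be ripped before it can be encoded
    encode_track "$n" &   # encode in the background...
done                      # ...while the loop moves on to rip the next track
wait                      # let the final encodes finish before exiting
```

While each encode chews on the CPU in the background, the drive is already busy ripping the next track - the CPU-bound and IO-bound work overlap instead of queuing.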
Bear in mind that your encoding will take place in the background, so unless you do a huge amount, or each job is urgent, you could easily spend more time on benchmarking than you would save by improving your machine's performance. You have already established that there is little discernible difference for a small number of processes. Higher numbers will not improve things - unless you're running multiple multi-core processors. Back to the list ****** USB filesystem not recognised Q:: I'm trying to use a memory stick to transfer files between computers. This is a 256MB USB 2.0 device. Initially, I used it to copy files from a Mac to my Linux box. The device was recognised and the files copied, but I was then unable to delete the files from the stick. Only by going back to the Mac was I able to clear the device. I then found that the device wasn't actually recognised on my Linux box, probably because it appeared to be completely unformatted. I then wanted to transfer files from my Windows box, which also found the device to be unformatted, and so I formatted the device and copied some files to it. However, the stick is now inaccessible on the Linux box. Using dmesg, I found it was /dev/sda, but it doesn't recognise the filesystem. I tried using mount -t vfat /mnt/removable as a user and as root, but without any success (there is an fstab entry for /dev/sda to mount to /mnt/removable). Can anyone suggest how I can mount this device, or what filesystem is the most appropriate to use in this situation? Also, how do I format it for this? A:: Memory sticks can have some extremely strange partition structures, with some being accessible on sda, some on sda1 and others on sda4. You can verify the partition table on the stick using 'fdisk -l /dev/sda', and mount the appropriate filesystem on your Linux system. 
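If fdisk shows no sensible partition at all, you can put a fresh VFAT filesystem on the stick from Linux. A sketch to be run as root - /dev/sda1 is an assumption here, so check it against the fdisk output first, and note that mkfs.vfat destroys whatever is on the stick:

```
fdisk -l /dev/sda                      # confirm the device and any partitions
mkfs.vfat /dev/sda1                    # new FAT filesystem (mkfs.vfat is in dosfstools)
mount -t vfat /dev/sda1 /mnt/removable # then mount it as usual
```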
In theory, using the stick in different boxes won't change the partition structure, although it's not uncommon for certain systems to install the filesystem onto the main device, sda, rather than into the partition that exists. Using VFAT for the filesystem will make it nice and portable, as well as being accessible on Windows, Linux and Mac systems. You won't have the luxury of being able to use Unix UID/GID permissions, but for simple removable media such as a USB memory stick, it won't matter. Back to the list ****** Connecting a digital camera to Linux Q:: I've bought a new digital camera, a Pentax Optio 430 RS. My reasoning was that the old (about five years) Pentax digital of my brother's had connected to my first Linux box (an old P1 233 running RH 7.1) perfectly - all I had to do was something like "modprobe DC200", plug in the USB, switch on the camera and mount it. Foolishly I thought that my new system - Mandrake 9.2, updated almost weekly and running kernel 2.4.22-21 - would have no problems. I've looked at the Pentax website - no reference to Linux. I've Googled, and the only relevant references I can find tell me to "upgrade and recompile to kernel 2.4.20 or above". I'm using 2.4.22-21, does that not qualify? I just don't know which way to go now; even the order that one does things in eludes me. I'm not an expert Linux user - I'm a cabinet maker. I use Linux because I hate the Gates concept of continually paying for your computer, and I like 'free' things. Please could you give me a 1... 2... 3... on how to make this work. A:: The problem is likely to be that the camera is simply not recognised by the drivers. Every USB device has two IDs - a manufacturer and a product ID. This is used by the driver subsystem to match a driver to attached devices. Confusingly, sometimes the numbers stay the same across similar devices, and sometimes they change.
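You can also read those two IDs from a terminal with lsusb, from the usbutils package found in most distros; the ID pair appears after 'ID' as vendor:product. The output line below is a mocked-up example of the form it takes, not a real capture:

```
lsusb
Bus 001 Device 003: ID 0a17:0004 Pentax Corp.
```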
The most likely reason your camera doesn't work is that your particular model has no pair of magic numbers entered in the USB drivers. Curiously though, your model is listed as working with the USB drivers. Try using usbview (it is included in most distros) to check whether the camera is recognised or not - the ID numbers listed for this device are 0x0a17, 0x0004. If yours are different, it may be a variant not accounted for in the driver. You can find out lots more info on cameras and Linux at http://www.teaser.fr/~hfiguiere/linux/digicam.html. Back to the list ****** Set up GUI to manage Broadcom BCM4318 network settings Q:: I have a stable Linux system that runs my desktop and small home/office LAN. I keep a few spare partitions on my hard disk to try out new distros, and from curiosity I installed Fedora. The main challenge I always have to overcome in such experiments is getting my PCI wireless card to work. It uses the rather infamous Broadcom BCM4318 chipset and is not at all Linux-friendly. Following tips and advice I used the following three steps to activate the card. First, I installed the drivers using NdisWrapper. Second, I disabled the BCM43xx Fedora driver. Third, following instructions on SourceForge, I tweaked two network files [modprobe.conf and ifcfg-eth0]. All of that enabled my eth0 interface to work like wlan0 in other distros. The card starts from the command line like so: --- /etc/init.d/network restart ,,, To finish the job I activated Network Manager from the system menu on the KDE desktop. I brought up the network configuration box to do a few last tweaks, but it was empty. It shows no NIC interface of any kind and yet the whole system is running perfectly. I can surf in technicolour and multimedia splendour on broadband. How do I get the GUI controls to reflect what has already been done in the murky depths of the system using the command line?
A:: Although you have taken a somewhat unorthodox route to enable your wireless networking, it works - well done! Did you set up the NdisWrapper alias by using these commands as root?
---
ndiswrapper -ma
echo "alias wlan0 ndiswrapper" >> /etc/modprobe.conf
,,,
Most importantly, after doing all that, did you use the Fedora system-config-network tool to create a new network interface for the device? If you've done all that and Network Manager still isn't working, you could try starting it at boot time like this (again, as root user):
---
chkconfig NetworkManager on
chkconfig NetworkManagerDispatcher on
,,,
Network Manager is actually quite a new tool, and is under constant development. You may find your problems just disappear in Fedora 7, which should be out in April. Back to the list ****** Make font sizes bigger in Fedora 6 Q:: I've just installed Fedora. How do I make the font size larger on the desktop/system? A:: Ah! An easy question. I like easy questions. The font size in Fedora is set in the System > Administration menu, under the Fonts menu item. When the Fonts Preferences box appears, click on Details in the bottom-right corner, then look for the resolution in the top-left corner of the new window. Increasing that number makes fonts bigger, and will also make buttons, windows, menus and other things larger so that the fonts fit properly. Be sure to write down the original resolution, though, just in case you want to get back to it in the future. Back to the list ****** Get Belkin F5D7632uk wireless G modem working in Linux Q:: I have recently installed Fedora in dual-boot mode on my HP Pavilion t3065 (Intel Pentium 4 3.4GHz with 1GB of RAM). All went well until I tried to connect to my Belkin wireless G modem (802.11g - model F5D7632uk ver 1000), and it was only after much head scratching and searching on the internet that I discovered I needed a wireless driver.
Having interrogated my network controller, I found that my chipset is as follows:
---
Intersil Corporation ISL3890 [Prism GT/Prism Duette] / ISL3886 [Prism Javelin/Prism Xbow] (rev 01)
Subsystem: Accton Technology Corporation WN4201B
Flags: bus master, medium devsel, latency 64, IRQ 169
Memory at cfffc000 (32-bit, non-prefetchable) [size=8K]
Capabilities: [dc] Power management version 1
,,,
Having had a look at various sources on the web on the subject of connectivity with this chipset (including www.prism54.org), I am now confused. Do I need a FullMAC driver or an Islsm driver? The listed drivers cover one or the other ISL variant but not both together! This begs the question: does it matter which one I choose? Assuming that I can get Linux to talk to my wireless modem, does Fedora or any other distro support WPA-PSK security, or is 128-bit encryption the best that is on offer for now? How would I go about implementing WPA-PSK on my PC? A:: A few years ago, Prism released a new version of its chipset that offloaded some of the work to the host computer (in other words, it was a cheaper, half-complete design rather like a Winmodem). This became known as the SoftMAC design, and it broke compatibility with the Prism54 drivers until the Islsm drivers were developed. The Islsm driver works with both SoftMAC and the original FullMAC chips. The FullMAC driver works better with FullMAC devices, but not at all with SoftMAC ones. Unfortunately, it is difficult to tell which you have - the ISL3890 works with the FullMAC driver but the ISL3886 needs Islsm. The FullMAC driver is built into the standard kernel for Fedora; you only need to install the firmware file, which can be downloaded from http://prism54.org/fullmac.html. You can test it by opening a terminal and typing
---
su (give root password when asked)
modprobe prism54
lsmod | grep prism54
,,,
If the final command gives an output, the driver is present and loaded - so try to connect to your modem.
You should disable all encryption (WEP and WPA) while testing at first - get the connection working, then sort out the encryption (until it works you have nothing to encrypt anyway). If the Prism54 driver fails to connect, try the Islsm driver. This also needs a firmware file, but a different one, which you can get from http://prism54.org/newdrivers.html. Comprehensive installation instructions are included in the package. WPA-PSK encryption is available for Linux, in the form of wpa_supplicant (http://hostap.epitest.fi/wpa_supplicant). Fedora includes packages for this - you need to install wpa_supplicant and wpa_supplicant-gui. Only the first is essential, but the second provides a GUI for configuration, which saves reading and editing configuration files. Back to the list ****** Screen goes black with Fedora and Nvidia GeForce graphics card Q:: I have an AMD64 3000+ CPU, with 1GB of RAM, an Nvidia GeForce PCI-express graphics card and a 320GB SATA HDD. When I install Fedora, everything seems to go well until it gets to 'starting udev [OK]', then the screen goes black. After that, the hard drive seems to continue working, but I can't see anything on the screen, which then delivers this message: 'Mode not supported'. I thought at first it might be the graphics card, but I then installed Elive 0.5, and everything there works. I tried removing the card and using the VIA in-built card - nothing; used a different screen (on the off-chance) - nothing. I tried to boot in all of the options given at the Fedora screen by pressing 'e' but nothing worked. I tried to force the screen resolution (linux resolution=1024x768) and I tried using linux noprobe. The only other error messages that might have some bearing that I can see are: 'PCI: BIOS Bug: MCFG area at e0000000 is not E820-reserved' and 'PCI: Not using MMCONFIG'. I don't know if that has anything to do with it, as they don't hinder Elive from working. Can you help me get Fedora to work?
A:: It sounds to me as though Fedora is trying to use its internal Nvidia driver, and it's struggling to cope with your screen resolution. The quick fix for this problem is to switch over to the VESA driver, which ought to work on pretty much any graphics configuration. If you open up the file /etc/X11/xorg.conf as root, look for this line: --- Driver "nv" ,,, Change that to read vesa rather than nv, then reboot. That should at least give you a working Fedora system. Now, if you find that VESA isn't good enough for your day-to-day work, or if you want to try AIGLX or any 3D games, your best bet is to install the official Nvidia driver from www.nvidia.com. This is a great deal more stable than the driver that comes with Fedora, and should solve your problem. Back to the list ****** DeLi Linux installer saying 'failed to mount the source device' Q:: It was good to see DeLi Linux, as I have a 486 PC that I thought would be good to learn on. Burning the CD and the boot floppy went well, as did the install until a pane appeared asking 'Where is delibase.tgz?' Part of the pane states 'I can scan for CD-Rom drives. Should I try to do so?'. Clicking on Yes gives another pane asking me to 'Enter the device which contains the DeLi Linux Base Package delibase.tgz'. No matter what I put in there I get the message 'ERROR! Failed to mount the source device. Exiting ...' and that's it. I had copied that file to C:\, because that was one of the locations stated in the original pane 'Where is... etc' - however, when I try to enter C:\ I can only get C:#. A:: It appears that your installer was unable to detect your CD drive. This is possible if it is not a standard ATAPI IDE device. The fact that you needed to create a boot disk indicates that this may be the case. Your inability to type 'C:\' is almost certainly caused by an incorrect keymap, which maps symbols differently to your keyboard. The \ character is there, but you'd have to keep pressing keys until you found the right one.
This is similar to the problems I have finding # and @ when I boot a Live CD that insists on using a US keymap with my UK keyboard. You can usually avoid playing hunt the key by choosing the correct keyboard earlier in the installation, or you'll experience the same problem when you try to use DeLi Linux. Good luck with DeLi Linux: running anything on a 486 is going to seem like hard work. You may also like to try out Damn Small Linux - this is another distro designed to be lightweight. Back to the list ****** Restrict number of processes that a user can run in Ubuntu Q:: Is there a way to restrict the number of processes that any one user is allowed to run while using their shell? I am using Ubuntu Dapper. A:: There are two slightly different ways of doing this depending on whether your system uses PAM (Pluggable Authentication Modules). Ubuntu uses PAM by default, so you set limits in /etc/security/limits.conf. To limit user Fred to ten processes, add a line like this: --- fred hard nproc 10 ,,, For systems not using PAM, the limits are set in /etc/limits, and the same restriction needs: --- fred U10 ,,, In either case, you can use * as a username, to limit everyone but the root user. The limits set in these files are per login (rather than an overall limit for each user), but remember that a login for a graphical desktop may require several processes. A terminal window opened from this desktop is not, by default, a separate login, so set the limit to something reasonable to avoid crippling the desktop. To get an idea of the number of processes that a user runs with a standard startup, run --- sudo ps -u fred | wc -l ,,, The PAM example includes the hard option, because PAM sets two types of limit, hard and soft. Hard limits are immutable - only the superuser can change them - but a user can increase a setting above the soft limit up to the hard limit with the ulimit command (consider the soft limit a default and the hard limit an absolute maximum). 
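You can inspect both values for the current login from a shell; -u selects the process limit in bash (a bash-ism rather than strict POSIX, so this is a sketch for bash-like shells):

```shell
# Show the per-login process limits for the current shell session.
ulimit -S -u   # soft limit: the effective default, which the user may raise
ulimit -H -u   # hard limit: the absolute ceiling only root can change
```

A user can tighten the soft value, or raise it as far as the hard limit, with something like ulimit -S -u 50 in their own shell.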
You can set them both to the same value by using '-' as the second item in the /etc/security/limits.conf line. You are not limited to restricting the number of processes; you can also limit RAM or CPU usage. See the man pages for limits.conf and ulimit for (much) more information. Back to the list ****** SUSE doesn't recognise copier component of HP 3210 Q:: Although SUSE 10.2 recognises my HP 3210 all-in-one Photosmart printer and prints OK, it does not recognise the copier bit. The only distro that does so is Ubuntu, but I prefer SUSE because it is more stable on my machine. On an unrelated note, Tomboy is great, but the SUSE 10.2 version fails because of an Alt/F12 error as usual. I closed it down but now when I try to restart it I am told it is running. I deleted it via Yast but it is still there. I loaded the latest version from the Tomboy site but it needs so many other bits that I gave up. I am keen to use it but despair of getting it to work. A:: The scanning/copying side of things will be handled by SANE. One way to get it working in SUSE is to boot into Ubuntu and check which driver it is using with --- scanimage --list-devices ,,, This will show the driver before the device name. For example, my scanner uses the Genesys driver and shows: --- device 'genesys:libusb:005:003' is a Canon LiDE 60 flatbed scanner ,,, Once you have established which driver you're going to use you're halfway there, but you could cheat on the other half by copying the configuration files (which are usually found in /etc/sane.d) from Ubuntu to SUSE. I don't know what you mean by the Alt/F12 error. I use Tomboy a lot and have never had Alt+F12 do anything but pop up the menu as it is supposed to. It is possible that something else is also trying to act on this key combination, but it is simple to change the Menu hotkey to anything you like in the Tomboy preferences.
When a program fails to start or otherwise gives problems, the first step is to run it from a terminal instead of the menu. This won't immediately solve the problem, but it usually gives more information about what went wrong. It sounds as though Tomboy is still running in the background. You can test this by typing --- ps -ax | grep -i tomboy ,,, in a terminal. This will show up any processes with 'tomboy' in the name. The leftmost item in the output is the process ID (PID), which you can use to kill the process with --- kill nnnn ,,, where nnnn is the PID. Now you can be sure that the program has been terminated, and it should now start up as expected. Back to the list ****** Psion serial port programs not allowing access to Garmin connection Q:: I have only one serial port on my motherboard; I also have a Psion 5 and a Garmin Geko 201, both with serial connections. It seems that the programs associated with the Psion run in the background even if the Psion is not connected, and prevent the Garmin (and software, including GPSman) from accessing the serial port. Under Linux (Kubuntu Edgy) the top command doesn't bring up any Psion-related apps that I recognise, and neither does ps -aux. Could you tell me which processes might be hogging the port while KPsion is installed, and how to shut them down without removing the KPsion packages? If all else fails I suppose I could just buy a PCI serial card! A:: Remember the Unix mantra 'everything is a file'? Well it works here, because your serial port can be treated as a file and the lsof command lists open files. With no arguments, it lists all open files on your system, and you'll probably be surprised how many there are on a running Linux system (over 7,000 as I type this). You can narrow things down by specifying the name of the file; in your case, this is probably /dev/ttyS0, the first serial port. 
Take a look at the following code:
---
sudo lsof /dev/ttyS0
COMMAND PID  USER FD TYPE DEVICE SIZE NODE NAME
belkin  9758 nut  4u  CHR  4,64       5134 /dev/ttyS0
,,,
This example shows that my serial port is used by the monitoring software for my UPS. In your case it will show the program that has a lock on the port, preventing anything else from using it, which is probably something to do with KPsion. It is possible that the program is using a symlink to /dev/ttyS0, such as /dev/modem or /dev/psion. You can see what links to /dev/ttyS0 with --- ls -l /dev/ | grep ttyS0 ,,, Try running the links through lsof too. It is also worth checking to see just what is being run automatically when you boot. The Ubuntu variants, like most Debian derivatives, use runlevel 2 by default, so --- ls -1 /etc/rc2.d ,,, will show which programs are being started. If you want to add a serial port, the cheapest (and often easiest) way is to buy a USB-to-serial adaptor. These are available on eBay for a few pounds and generally 'just work' when you plug them in - some even advertise Linux compatibility, on eBay no less! Back to the list ****** Mepis root directory not being mounted read-only Q:: Having successfully converted my computer from running from an old Mandrake Linux system to the SimplyMepis 3.4 distribution, I found that all ran so smoothly that I soon gave up using F2 during the boot process, as there never seemed to be any warnings. However, a few weeks ago I looked at the booting details and found a message that fsck could not run because the root directory was not mounted read-only. I found no satisfactory resolution on the web.
On my machine, /boot/grub/menu.lst.example contains:
---
color cyan/blue white/blue
foreground ffffff
background 2f5178
gfxmenu /boot/grub/message

title MEPIS at hda2, kernel 2.6
kernel (hd0,2)/boot/vmlinuz-2.6.12-586tsc root=/dev/hda2 nomce psmouse.proto=imps splash=verbose vga=791
initrd (hd0,2)/boot/initrd.img-2.6.12-586tsc
,,,
However, /boot/grub/menu.lst contains:
---
color cyan/blue white/blue
foreground ffffff
background 0639a1
gfxmenu /boot/grub/message

title MEPIS at hda6, kernel 2.6.15-1-586tsc
kernel /boot/vmlinuz-2.6.15-1-586tsc root=/dev/hda6 nomce quiet vga=791
,,,
where the format of the code booting the 2.6.15 kernel doesn't correspond to the example file. Although my OS works despite the warning message, I would obviously prefer to have fsck working where it is designed to, and would be grateful for any suggestions you may have as to the likely cause of this problem and the best way of fixing it. A:: The ext2/3 filesystem runs fsck after a set time or number of mounts, which can be changed with tune2fs. It is likely that this problem has been there from day one but that you didn't hit the time limit until after you stopped looking at the boot messages. There are two main differences between your Grub configuration and the example setting. The first is that yours doesn't use an initrd to provide a splash, which has no bearing on your problem. The second is that no root path is provided for the Mepis boot. The kernel line should start with kernel (hd0,5)/boot/vmlinuz, otherwise you are relying on some indeterminate default. Alternatively, put root (hd0,5) at the top of the file. This is unlikely to affect the boot process's filesystem checks, but may cause more subtle problems. You can configure Grub to mount your root partition read-only by adding ro to the list of options on the kernel line. The filesystem will be remounted, using the settings from /etc/fstab, early in the boot process but after fsck has been run.
Your kernel line should look like this: --- kernel (hd0,5)/boot/vmlinuz-2.6.15-1-586tsc root=/dev/hda6 ro nomce quiet vga=791 ,,, You can also run fsck yourself by booting in a minimal maintenance mode. When the Grub menu appears, select your Mepis entry, press E to edit it, select the kernel line and press E again. Remove quiet from the options and replace with ro init=/bin/sh. Press Enter to accept the change and B to boot. This will give you a command prompt and, as the root filesystem is mounted read-only, you can run --- fsck -f /dev/hda6 && shutdown -r -n now ,,, This will check the disk and then reboot the computer only if the fsck was successful. Back to the list ****** Wacom Graphire3 tablet: need to change event device number every boot Q:: I have a strange problem with my Wacom Graphire3 tablet: I have to change events in my xorg.conf every time I turn on my machine. I'm currently on Fedora 6 with Gnome and my xorg.conf is exactly the same as it was when I was using Fedora 5 with Gnome and the same hardware setup. The tablet worked perfectly at that time. I've tried searching Google with no success, so I don't have a clue what is happening. Any pointers even to help me understand the problem would be great. A:: I take it you mean you have to change the number of the event device. Your enclosed xorg.conf contains --- Option "Device" "/dev/input/event2" ,,, and you have to change the number. This is because input devices are numbered in the order they are detected, and something is changing the order each time you boot - maybe another device that is only sometimes connected, such as a memory stick or scanner. The solution is to get udev, the device manager, to assign a persistent name to your tablet, one that it will always have irrespective of detection order. This is done by writing a udev rule. 
You first have to see how the system identifies the device with --- udevinfo -a -p /sys/class/input/event2 | less ,,, When run on my Aiptek tablet, the third block of output contains:
---
SUBSYSTEMS=="usb"
DRIVERS=="aiptek"
ATTRS{vendor}=="AIPTEK"
,,,
This is plenty to uniquely identify the device; you should see something similar for your Wacom tablet. To turn this into a udev rule, open a terminal, use su to become root and use your favourite editor to edit /etc/udev/rules.d/10-local.rules (create the file if it does not exist). Now, please do not be tempted to add the rule to an existing rules file, as it may be overwritten when udev is updated - 10-local.rules is the correct place for your own rules. Now add a line like this, but using the values from when you ran the udevinfo command: --- SUBSYSTEMS=="usb", DRIVERS=="aiptek", ATTRS{vendor}=="AIPTEK", SYMLINK:="input/tablet" ,,, You'll see it's just the attributes that identify the device, separated by commas, followed by a SYMLINK setting. Note that the attributes are followed by ==, indicating a comparison, whereas the final item uses := because it is assigning a value. Your device will still be created as /dev/input/eventN but it will be linked from /dev/input/tablet, whatever the value of N - running ls -l /dev/input will confirm this. Now you can use /dev/input/tablet in xorg.conf and your tablet should always work. Back to the list ****** Best Linux distro for a Seagate hard drive? Q:: I recently bought an Intel 865 desktop board. I have a Seagate 120GB SATA hard drive. I tried installing Red Hat on it, but with no success. Can you tell me which Linux flavour I should use? Will Mandrake 10.1 detect my Seagate SATA hard disk? A:: Any recent distribution of Linux will have a kernel capable of supporting SATA controllers, including Mandrake 10.1. If you want a distribution comparable to Red Hat, you could install Fedora, which is the free brand of Red Hat's popular Linux distro.
Anything older than a year or two isn't going to have a kernel that supports the use of SATA devices, simply because they didn't exist, and unfortunately there's no compatibility mode with a generic IDE controller to use SATA devices. Back to the list ****** Fedora installer video problems on Dell Latitude Q:: Last week, I purchased a Dell Latitude (Pentium III) laptop that came from a local university surplus outlet. It formerly had Windows on it, but was sold without any software so we thought, let's try Linux! But it freezes up. A:: It looks like the Fedora installer is incorrectly reading your display details, resulting in a corrupted framebuffer display. There are a number of options that you can pass to the installer to try to correct this. When the splash screen appears, try typing this at the boot prompt and pressing Enter: --- linux skipddc ,,, This tells the installer to skip probing your monitor for details and use a (hopefully sane) default instead. If this fails, you can try specifying the display resolution with one of the following: --- linux resolution=1024x768 linux resolution=800x600 linux resolution=640x480 ,,, If all else fails, you can run the installer in text mode with --- linux text ,,, This provides a basic-looking but fully-functional text installer that you can navigate with the cursor, Tab and Enter keys. I should stress that the display problem you have relates only to the installer; it will not prevent you from installing and setting up a graphical desktop during installation. Back to the list ****** Rip audio CDs and batch convert in Linux Q:: Can you recommend a format to rip audio CDs to so that they'll play out of the box on patent-free distros? Can you then tell me how to batch convert the 30 or so albums I ripped to MP3 a few years back when I was a Microsoft user? A:: Ogg Vorbis provides slightly better compression than MP3 and somewhat better quality, and it is completely free. 
The other free audio compression format is FLAC (Free Lossless Audio Codec). Because this is lossless, there is nowhere near as much space to be saved, but you do preserve every bit of the original track. If you have the hard disk space, FLAC is a good format for storing ripped CD tracks - you can convert them to Ogg Vorbis or MP3 later if you want to fit them on to a smaller device such as a flash-based MP3 player. Transcoding your existing MP3s to Ogg Vorbis will result in some loss of quality. Starting again from the CDs is a much better option. There are various GUI tools for this, the easiest of which is Konqueror if you run KDE. Pop a CD in the drive and type 'media:/' into the location bar. Pick the CD from the list and you see the contents of the disc represented as MP3, Ogg Vorbis, FLAC and Wave files. None of these files is real, of course, but copying them to your hard disk causes them to be encoded on the fly. You can set the compression parameters in the Sounds & Multimedia section of the KDE Control Centre. If you don't use KDE, I'd recommend Grip from www.nostatic.org/grip. This is a GTK audio player and ripper. For console use, there is abcde (from www.hispalinux.es/~data/abcde.php), which is a shell script that rips, encodes and tags. This is ideal for encoding a batch of CDs - just keep feeding it more CDs as it spits each one out after encoding it. All of these methods use the online CDDB database to add tag information to the files they create. If you do have to convert your MP3s, there's a script called mp32ogg to do this - you can get it from http://faceprint.com/code. Using it can be as simple as running --- mp32ogg musicdir ,,, to get it to convert all MP3 files in musicdir to Ogg Vorbis. There are various options to control quality levels, file naming and whether the originals are deleted - run mp32ogg --help to see them. This script not only converts the music in the files, it also transfers the tags from the MP3 files.
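If you can't get hold of mp32ogg, the core of what it does can be sketched with a small shell loop. This assumes mpg123 and oggenc are installed and, unlike mp32ogg, it does not carry the ID3 tags across:

```shell
#!/bin/sh
# Convert every .mp3 in the directory given as the first argument
# to Ogg Vorbis, going via a temporary WAV file.
for f in "$1"/*.mp3; do
    wav="${f%.mp3}.wav"
    mpg123 -w "$wav" "$f"   # decode the MP3 to a WAV file
    oggenc -q 5 "$wav"      # encode it to .ogg at quality level 5
    rm -f "$wav"            # remove the intermediate WAV
done
```

Run it as, say, sh mp3loop.sh musicdir; oggenc writes each track.ogg alongside the original track.mp3.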
Back to the list ****** Scanning permissions problem Q:: I have recently changed from SUSE 10.0 to OpenSUSE 10.2 via a re-install. My CanoScan N640P scanner worked well with XSane/SANE under SUSE 10.0, but will now only work with root privileges. SANE support for this scanner is good. The XSane/SANE releases are as supplied by the distro, viz 0.991-32 i586 and 1.0.18-34 i586. I performed an online update soon after the install. The SUSE 10.0 SANE version was 1.0.15-20.2 i586. Yast2 configures the scanner correctly - manually added to give device 'canon_pp: parport0' - and tests OK with scanimage -d canon_pp:parport0 -T. Also, I can run this command successfully from the command line with root privileges but not as a normal user. When I invoke XSane as a normal user I get the message 'no devices available' with six possible reasons given. Of these the third, 'the permissions of the device file do not allow you to use it - try root' seems most likely. However, I can't find a device file for the scanner. Also, as far as I can tell, all the configuration files are set up correctly. I have tried all four settings of the parallel port - normal, ECP (DMA3), ECP/EPP and EPP in the BIOS, all with the same result. A:: You are right in thinking this is a permissions problem. I had exactly the same with my Canon USB scanner. The scanner device should be /dev/parport0 (although it is possible that on some distro setups it is /dev/lp0). Running --- ls -l /dev/{par,lp}* ,,, will show you all relevant devices and their permissions. You would normally see something like --- 'crw-rw---- 1 root lp 99, 0 Jan 27 11:37 /dev/parport0' ,,, This shows that the device is only readable by root and members of the lp group. In this case, the simplest solution is to add yourself to the lp group with --- gpasswd -a yourusername lp ,,, Some distros use a 'scanner' group instead of lp, in which case you should make the obvious change to the above command. 
This only affects new logins, so log out of your desktop and back in again. Then try --- scanimage --list-devices ,,, as root and as your normal user. You should see the scanner listed both times. If the device is owned by root:root or the permissions are not rw-rw----, you need to change these, which you can do with a suitable udev rule. Add this line to /etc/udev/rules.d/10-local.rules: --- KERNEL=="parport0", GROUP:="scanner", MODE:="660", SYMLINK:="scanner" ,,, This sets /dev/parport0 to have rw-rw---- permissions and to belong to the scanner group. It also creates a /dev/scanner symlink, which some software looks for. For more details on udev rules, see the answer to Moving Tablet on page 105. If your device is not parport0 or lp0, you should be able to find it with --- dmesg | grep -i -C 3 -e parport -e canon -e sane ,,, Back to the list ****** Making a multi-boot DVD Q:: How do you make a DVD boot multiple distros? I want to make something similar for relatives and friends on dial-up to experience the Linux difference. I've got a DVD burner and media, so I think that should take care of hardware. I just need to know how to burn multiple distros and make each one bootable. A:: Making a multi-boot DVD is tricky: you need to be familiar with the Grub and Isolinux bootloaders, and the structure of the distros that you want to combine. First, create a new directory and copy the contents of the first distro CD into it. If you haven't burned the CD, you can loopback mount the ISO image as follows (as root): --- mkdir /loop/ mount -o loop discimage.iso /loop/ ,,, Once you've copied the contents of the disc into your new directory, you should access the second distro disc in the same way. Look at the files and see if there's any clash with the contents of the first disc. If there is, you'll have to manually hack the distro (possibly even rebuilding it), so in effect you're out of luck.
If nothing clashes, or if the clashes are limited to directories called grub or isolinux, that's OK. Copy the second distro disc's files over to your new directory. Now you have a directory containing two distros. You then need to configure the bootloader for multibooting. Select the bootloader of one of the distros (in the boot, grub or isolinux directory - whatever the distro uses), and edit its configuration files (ie menu.lst, isolinux.cfg - see the documentation for Grub and Isolinux to find out typical filenames). Edit the configuration file and add boot entries for the second distro; you can get these from the second distro's bootloader directory. The bootloader config file for the first distro should now contain boot entries from the second distro's bootloader configuration file. Still following? It's tough, but make sure you keep track of the bootloader config files of both distros, and you can merge them together. Then burn the directory to a disc, using the first distro's bootloader directory contents as the boot block - this will be named something like isolinux.bin (for Isolinux) or stage2_eltorito (for Grub). If you've merged the bootloader config files correctly, and no directories from the distros have overlapped, you should be able to boot the new disc and choose your distro from the boot menu. Back to the list ****** K3b: burn a disc but retain original timestamps Q:: I use the K3b burning software to burn data discs for my job. When you burn a data disc in K3b the time stamps on all the files are made the same. I need to keep the original time stamp of the files I burn but can't find any way to do that. I was wondering if I could get a little help? A:: K3b should do this by default, but it is an option you can change. When the K3b Project window opens, after you click Burn, go to the Filesystem tab and ensure that the box for Preserve File Permissions (Backup) is ticked.
You should also tick the Generate Rock Ridge Extensions and Generate Joliet Extensions boxes for full compatibility. After setting the boxes as you want, click the Save User Defaults button to have them applied every time. If this is a regular task, using the same parameters each time, you may find it quicker to use a short shell script to do the job; something like --- #!/bin/bash SOURCE_DIR="$HOME/work/data" DVD_WRITER="/dev/dvd" ISO_FILE="$HOME/tmp/image.iso" mkisofs -rdJ -o "$ISO_FILE" "$SOURCE_DIR" && cdrecord dev=$DVD_WRITER "$ISO_FILE" ,,, Note that the shell does not expand ~ inside double quotes, which is why the script uses $HOME. You need a recent release of CDRecord - or wodim, which includes a CDRecord version that writes DVDs - to use this with a DVD writer, otherwise change the last line to --- growisofs -dvd-compat -Z $DVD_WRITER -rdJ "$SOURCE_DIR" ,,, but note that growisofs will only work with DVDs, not CDs. Back to the list ****** Installation problems with Ubuntu and OpenSUSE - video modes and Wine Q:: I decided to have another go with Linux and see if I can fiddle with Wine to get my finite element engineering packages to run. I tried installing Ubuntu 6.10 but the display flickered during boot-up even though I used Start With Low Resolution support and used F4 to change the resolution to what my monitor and video card supported. I think the frequency refresh of my monitor is 50Hz but Ubuntu 6.10 and Fedora both set it as 60Hz regardless of what resolution you choose. Then I tried to install Ubuntu 6.06, which did support the display and installed it without a hassle. I then tried to install Wine from source and ./configure suggested installing 'flex' which suggested installing 'm4' and then 'Bison' needed to be installed. Following all of these Wine returned with an error message during make.
As I was not fully successful with Ubuntu, I tried installing OpenSUSE 10.2 but got the following error message halfway through the installation: --- "error occurred while creating the catalog Cd///?devices=/dev/hdc source rejected by the user Retry (yes) (no)" ,,, Pressing Yes gave the message 'error occurred dvd/// source rejected by the user' I had been trying a dual boot installation with Windows XP already on the hard disk and am not sure if this is why the above error occurred. A:: You are really going through a baptism of fire, but I'll try to address the various problems you have met. Most monitors handle a minimum 60Hz refresh rate, but you can edit the /etc/X11/xorg.conf file after installation to set it to suit your monitor. Look for the part that begins with Section "Monitor" and you'll see settings for HorizSync and VertRefresh. Change these to suit the specification of your monitor. Most distros provide a large selection of software in their repositories and do not expect typical users to have to install software from source. As a result, the necessary tools are not installed by default. Ubuntu offers two approaches to your Wine problem. You could install the build-essential package, which installs all you need to install from source, including flex and m4. The simpler alternative is to add WineHQ's own repository to your list of package sources, then you can install the latest version with the package manager. Run these commands to add the repository: --- wget -q http://wine.budgetdedicated.com/apt/387EE263.gpg -O- | sudo apt-key add - sudo wget http://wine.budgetdedicated.com/apt/sources.list.d/edgy.list -O /etc/apt/sources.list.d/winehq.list ,,, The first adds the repository's key to your list of trusted keys, the second adds the source list itself. This is for Ubuntu 6.10 - for 6.06 change edgy to dapper in the second command. Distro installers can occasionally get confused and fail to find the drive from which you are installing.
This is usually when you have two optical drives; you boot the disc from one but it detects the other one and tries to load its data from that. In this case, the simplest solution is usually to boot from the other drive. If this is not possible, such as when the first device is a CD drive and you are using a DVD, temporarily disconnecting the first device will avoid this error. You don't need to physically remove the cable - most BIOSes provide an option to disable individual devices. This particular problem only affects installation; you can actually reconnect the drive once everything else is working. A similar problem sometimes occurs when trying to install from a USB-connected DVD drive. It is also possible that you have a damaged disc. The easiest way to test it is to try booting it in another computer. You don't need to install to that computer, just boot up and see if the installer runs without the error. Back to the list ****** How much RAM do I need in Linux? Q:: I'm very aware that the amount of memory on my server could be a bottleneck. For a start, the server seems to be using swap space all the time. But I find it hard to work out just how much memory I actually need on the system to make it run efficiently. I could just buy all the RAM I can afford, but it seems there ought to be some way to better determine where the sweet spot for memory is. A:: Your question, while apparently simple, really requires a lot of Linux understanding to answer. In the first place, don't be too concerned about the swap space usage. A default Linux system will practically always use swap space. Excessive use can be a problem, though. For example, you may have the latest in multi-core processors on your box, but it is the utilisation that matters. For very data-heavy processes, unless your physical RAM is fast and copious enough, the server will spend most of its time thrashing the data around in and out of swap space, and not very much time actually processing any of it. 
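One quick check worth adding here: vmstat, part of the standard procps tools, reports how many memory pages are being swapped in and out per second, which is the most direct evidence of thrashing:

```shell
# Take three samples at five-second intervals. Watch the 'si' (swap in)
# and 'so' (swap out) columns - sustained non-zero values there mean
# the box is spending its time shuffling pages rather than working.
vmstat 5 3
```

An occasional blip in si/so is normal; it is the steady stream under load that tells you physical RAM is short.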
There are a number of Linux tools that can help you determine what is actually going on with your system: top and uptime are quite useful. As well as other information, uptime displays a triplet of numbers that shows the load average on your box for the last one, five and 15 minutes. What does the 'load' number mean? Well, it is a magic number that shows the amount of work the box is doing. Higher numbers mean a lot of work; lower numbers, not so much. Actually, it represents an exponentially damped moving average of the total queue length for the CPU. But it is easier to think of it as a magic number, and you can't take this number on its own and turn it into something useful. Your box may be very busy, but coping very well with the load. It's only if the number keeps climbing that your box may be experiencing trouble. Running top will show the running processes and their CPU and memory utilisation. But as I have said before, high CPU utilisation isn't necessarily bad, and low utilisation isn't always good. The latter may indicate that the data is spending too long getting to and from the process, so low utilisation with a corresponding high load value is a warning that your I/O isn't fast enough (buy some WD Raptors and a good controller), or you don't have enough physical RAM. By looking at a combination of top, uptime and free (which displays memory usage) you should be able to determine which is the case for you. If you can't wait for a busy time on the box to test it, you can always create some activity of your own. For example, Apache comes with a benchmarking tool, ApacheBench (the actual binary name is ab), which can simulate high demand on the server for you. Also, as a final tip, it is useful to check through the running processes on the box. Although Linux is quite good about running services only when they are needed, there is still some mileage to be had from killing off errant processes - print daemons, sound servers, X...
even the HAL daemon isn't likely to be needed, but often runs by default. If you really want to make the best use of memory, you should try recompiling binaries and taking a good look at the features you need and want. You can make surprising reductions in the memory usage of things like MySQL, Apache, PHP et al. There is a lot more useful information online, and I would recommend the article on memory analysis by Lubos Lunak at http://ktown.kde.org/~seli/memory. Back to the list ****** SSH hardening Q:: For some tasks I want to be able to run a remote shell (SSH I guess) on my server. I'm nervous of running extra services on the box though, and wonder if it is really safe to leave an SSH server running. Also, as I know nothing about it, I wonder if you have any tips for making it more secure. A:: SSH is actually pretty secure by default, but of course, there are always ways to make it more secure. Most of these revolve around restricting the ways in which you can log in, the accounts you can log in to and the places you can log in from. By default, SSH enables a simple password login. With this method, when you connect to the SSH server as a user, you are prompted for the password. But of course, passwords can be guessed, so there are other methods available. SSH also allows login through a trusted key pair. This involves generating a key on the client, and copying the public part of the key to the SSH server's authorized_keys store. This is a useful way to quickly connect without needing to remember a password, but you can also turn off the password option on the SSH server. First make a key and copy it to the server: --- ssh-keygen -t dsa scp ~/.ssh/id_dsa.pub servername:.ssh/authorized_keys2 ,,, This assumes that you are logging in with the same username on both boxes. 
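Before locking anything down, confirm that the key actually works. Running a command over SSH without being prompted for a password shows the public key is being used (servername here is a placeholder for your own host):

```shell
# A harmless remote command: if this prints the message without asking
# for a password, the key in authorized_keys is doing the work.
ssh servername 'echo key login OK'
```

Only once this succeeds should you go on to disable password logins on the server.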
You'll need to edit the /etc/ssh/sshd_config file and change the line: --- PasswordAuthentication yes ,,, to --- PasswordAuthentication no ,,, Make sure you can log in with your key before you try this, especially on a remote server! While you have the file open, there are a couple of other tweaks to try. Find these two lines (they aren't together in the original): --- PermitRootLogin yes ... Protocol 2,1 ,,, and change them to: --- PermitRootLogin no Protocol 2 ,,, This prevents anyone from logging in directly as root. For root access, you will have to log in as a normal user and use su to become root. The simple reason for this is that instead of having just one password to crack (or key, in our superhard example), any potential cracker will need to know two passwords and the name of a user account on the system - just a little bit harder to do. The second line there forces the server and client to use the more secure protocol for SSH communications. I can't think of a client that doesn't support it, so set this option now! In addition to forcing a user login, you may wish to restrict the individual users who can log in, since it is easy to guess some of the account names on any Linux box. --- AllowUsers eric jeff mike degville ,,, A simple space-separated list will restrict the accounts that can be accessed. If you want to be really harsh, you can link the accounts to particular sources, by appending a domain name of the originating server (be careful with this, as access from some sources may not always appear to come from the same IP address): --- AllowUsers mike@linuxformat.co.uk eric@*.ac.uk ,,, That should keep the evildoers out of SSH at least. Back to the list ****** Move Debian on to software RAID setup Q:: I have been happily running Debian Etch for a few months now, and would like to move it from hdb to a software RAID 1 setup on hde and hdg. My current setup is: --- /boot on hdb1 using ext2. / on hdb2 using ReiserFS.
/home on hdb3, also using ReiserFS. ,,, I can move things to put /boot on md0, / on md1 and /home on md2 but how do I set up Grub? I know that Grub reads hard disks and partitions from zero, but what about RAID setups? How do I reconfigure and reinstall Grub for my new RAID setup? A:: You'd be surprised at how easy this is to do with RAID 1. The individual disks can be accessed as though they were standalone disks because the RAID metadata is not written to the start of the partition. If /dev/md0 is constructed from /dev/hde1 and /dev/hdg1, either of those can be used as the boot disk. They are probably labelled (hd1,0) and (hd2,0) because /dev/hdb will be (hd0), but you can find this out with Grub's find command. --- touch /boot/myraidbootdisks grub #switches to grub prompt here find /myraidbootdisks ,,, This will show you something like this: --- find /myraidbootdisks (hd1,0) (hd2,0) ,,, You can use either of these to boot from, but I would recommend setting up Grub on each of them and including two menu entries, one for each disk. That way, if one of the boot filesystems is corrupted, you can boot from the other disk. Once the kernel is running and the RAID array created, it should repair itself. You can use any file in the find command, but creating a file instead of searching for a kernel avoids the possibility of confusion from your old boot disk. If you intend to remove hdb after setting up Grub, remember that your disk numbering will change and your Grub menu files will need to be amended. If you have set up Grub on both disks as recommended above, the first menu entry should still work, although it will be referring to the second disk by then. That may seem confusing, but it will make sense when you try it and will allow you to boot without hdb in order to change the menu files. Back to the list ****** Installing software from magazine DVD Q:: I am brand new to Linux, and have successfully installed OpenSUSE. Awesome!
So far I am very impressed by what little I've seen of Linux. My question, however, will reveal my ignorance: how do I get the other programs on a magazine DVD (like FLPhoto) from the DVD to my computer, and have them become usable? I have found no instruction on moving, installing, incorporating or otherwise getting the programs 'installed' and making them usable. I am sure this is because it is assumed that anyone using the DVD and Linux knows how to do this, but I have no clue. A:: Installation methods vary, according to how the software is packaged. In the example you give, there are two files containing FLPhoto: flphoto-1.3-source.tar.gz and flphoto-1.3-linux-intel.rpm. RPM is the package format used by SUSE, so the latter file is the one you want. You can install it using Yast - SUSE's 'do everything' administration program - by double-clicking on the file. You'll be asked for the root password, which is needed because installing software involves writing files to system directories, then the installer will pop up and you need only to click Install. If there is no RPM file, you are left with the option of installing from the source code tarball (these files generally end in .tar.gz or .tar.bz2). Installing from source requires a compiler, which SUSE doesn't install by default. It is on the installation DVD though; all you need to do is fire up Yast, click on Software Management, type gcc into the Search box, select only the gcc package and click on Install. The gcc package will also install any other components needed to be able to compile software from source. Back to the list ****** Best Linux distro for a USB flash drive? Q:: Can you suggest a good USB key distribution?
I've looked at Feather Linux, which is a desktop-type distro, but as I don't think I'll use it that regularly, it would be nice if I had some kind of rescue-type disk, preferably including functionality to mess with partitions, troubleshoot Windows/Linux disks, and maybe even reset Windows passwords because I use Windows about as much as Linux for work. Do you have any suggestions for me? I haven't found many USB-specific links online after distrowatch.com appeared not to be searchable in this way. A:: The System Rescue CD, which is located at www.sysresccd.org, can be installed onto a USB memory stick and booted from there, assuming the BIOS supports such a boot method. From within the System Rescue CD, you can mount filesystems and perform basic recovery operations. If you have a sufficiently large disk, you could also install a distribution such as Knoppix or SUSE's Live CD onto the memory stick, and then you'd have a complete Linux system, handily contained on a single memory disk. Back to the list ****** Optus internet and Siemens SpeedStream 4200 problems Q:: I have broadband with Optus in Australia. I was unable to get Linux to connect to the modem (a Siemens SpeedStream 4200) until I got Gentoo 2006.1. Once it worked I found the difference was that it used dhcpcd and the others used pump or dhclient. Mandriva 2007 offered all three, but only dhcpcd worked. When I checked, I found that dhcpcd uses the -h hostname option. I haven't been able to get the other programs to work, but I have been able to get other distros (DSL-N, Knoppix and Ubuntu) by mounting the Gentoo partition and running dhcpcd, which brings up the net immediately. The others get what looks like a valid IP address but don't connect or drop the connection before I can use it. I think Optus has a special version of the modem, but it does use other brands of modem. I originally used the Windows setup disk on XP until I found Gentoo worked. What is the difference between the programs? 
A:: It seems that this is a fairly common problem with this modem. When used with some DHCP programs, it does exactly what you describe: it gives out an IP address and then drops the connection. There are two possible solutions, the first of which you have already discovered. By using dhcpcd, you can pass a hostname to the modem with the -h option. There is an equivalent option with pump, -u or --hostname, but not with dhclient. It would appear that either the modem or your ISP is very picky about the format of any DHCP requests you send. Given the apparently flaky nature of the DHCP support in this modem, the second solution would be more reliable: to use static addressing. You need to find the IP address of the modem, which you can do after a successful dhcpcd negotiation, by connecting through Windows or by trial and error. The default address varies according to the ISP it is intended to be used with, but the default for OptusNet should be 10.1.1.1. Once you know the modem's IP address, it is easy to configure your computer's Ethernet interface to use a static address. Pick an address on the same subnet as the modem, say 10.1.1.2 and set the gateway and DNS server addresses to that of the modem (10.1.1.1). The netmask needs to be set to 255.255.255.0. With a setup like this, you should have no more problems. DHCP is a great time saver when working with larger networks, or when moving from one network to another with a laptop. For a small home network, it is usually simpler to just give each device its own static address. Back to the list ****** Share and sync mailbox with Thunderbird Q:: I've been using Thunderbird as a mail client and am very happy with it. I've now acquired two laptops and would like to be able to access my mailbox from all three machines - ie for them all to share the same contents. My requirements in a nutshell: I want to download all mail from the ISP once (I don't want to leave it on the ISP). 
I want to use Thunderbird as the mail client on all machines. All machines should share the same set of mailboxes so that I can, for example, send email from laptop 1 and be able to see the sent emails on laptop 2 and desktop as well. It should be able to run on Mac OS X 10.4 as well as Linux. It should be open source. I've tried simply sharing out the mailbox directory using Samba, but this doesn't seem to work - it seems to screw up index files. A:: There are two ways you can achieve this. One is to use POP3 to collect mail from the server and synchronise the mail storage directories on the two machines. Unison (www.cis.upenn.edu/~bcpierce/unison) is excellent for performing this task, as well as for synchronising any other part of your home directory across more than one box. Unison is best suited to keeping two computers in sync - I use it to keep my laptop and desktop up to date with each other. It uses the rsync protocol to save bandwidth and time but, unlike rsync, it can handle situations where each computer has had files updated since the last sync. Using it with three machines would require a little more effort to begin with, but would certainly be workable. Your other option, which applies only to email, is to run your own IMAP server on your desktop machine. Here you would run Fetchmail to pull messages from your ISP and store them locally, then point the mail programs on all the computers to the IMAP server on the desktop (you do this on the desktop too, setting the server to localhost). Your mail is stored on the server and so is status information, so when you read a mail from one computer it is marked as 'read' on all of them. Unlike with POP3, with IMAP you leave your mail on the server and can read it from anywhere with an internet connection. Most mail clients have an option to synchronise their local store with the server, so you can also keep local copies of mails for reading when offline.
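To give an idea of the Fetchmail side, a minimal ~/.fetchmailrc for this arrangement might look like the sketch below. The server name and account details are made-up placeholders, and exactly how messages are delivered locally depends on your mail setup:

```
# Poll the ISP's POP3 server and hand each message to the local
# mail system, from where the IMAP server serves it out.
poll pop3.example-isp.net protocol pop3
    username "fred" password "secret"
```

Run fetchmail from cron (or in daemon mode with the -d option) so mail is collected regularly.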
I prefer to use Dovecot (http://dovecot.org), but the easiest choice is probably to use whichever IMAP server your distro defaults to, as that will be largely set up on installation and have the most support from your distro's forums or mailing lists. For something as straightforward as your needs, you shouldn't have to move very far - if at all - from the default configuration. The Dovecot wiki, at the above address, has plenty of information on setting it up. A further advantage, if there is more than one person in your household, is that you can get Fetchmail (possibly with the help of Procmail) to sort your mail into separate mailboxes for each user. Then each can access their mail using the same IMAP server (but a different login name of course). Back to the list ****** Switchdesk or su in Fedora? Q:: I've finished installing Fedora, and have downloaded the 200-plus upgrades. I need to get out of root but can't locate the switchdesk command. I don't want to reinstall to get this. A:: You shouldn't be running the desktop as root in the first place! There is never any need to do this, which is why some distros make it difficult to load a root desktop. The system administration programs can all be run from a standard user desktop. When they need root privileges, they will ask you for the root password, then drop root privileges when they no longer need them. If you need to run any other programs as root, open a terminal, type su - to become root, then run whatever programs you need from there. This is far safer than running the entire desktop as root, although it goes without saying that you should quit any programs run as root as soon as you have finished with them. Switchdesk is still available. Select Add/Remove programs from the Applications menu and type switchdesk into the Search tab - you will probably want switchdesk-gui as well as switchdesk. Once it's installed, you can run it from System > Preferences > More Preferences > Desktop Switching Tool.
However, this is not the correct way to run programs as root; switchdesk is intended to allow users to switch desktops, hence the name. Keep the root user where they belong: locked in a box, only to be let out when needed. You should rarely need to reinstall a Linux distro. The computer I am using now is three years old, as is the Linux installation running on it - it has been frequently updated but never reinstalled. Reinstalling doesn't fix problems, it merely removes the whole environment containing the problem... until the next time it occurs. If you fix the problem itself, instead of wiping the whole system, it should go away forever or, even if it doesn't, be easier to fix the next time it occurs.

Back to the list

****** Compiling software from source code

Q:: I am very new to Linux, although there do seem to be some similarities to the Amiga of years past. After a few attempts I have finally installed Fedora, dual booting with Win XP. I have tried to install FreeBasic, with no success so far! It does not seem to recognise ./configure and other instructions.

A:: One significant difference between Linux shells and the Amiga shell is that Linux does not include the current directory in the path by default, whereas AmigaDOS did. The Linux way is more secure, but it means you have to specify the path when running a script or program from the current directory. The current directory is denoted by '.', so ./configure means "run the program or script called configure in the current directory". It should now be clear that the command ./configure only works when the file configure exists in the current directory. Compiling from source usually involves unpacking the tarball, changing to the directory created by the previous step and running ./configure, followed by make and make install - something like this:

---
tar xf foo-1.2.3.tar.gz
cd foo-1.2.3
./configure
make
make install
,,,

While this applies to more than 90% of Linux applications, there are many exceptions.
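The path point is easy to demonstrate for yourself with a throwaway script - the file contents here are just an example:

```shell
# Create an executable script called 'configure' in the current directory
printf '#!/bin/sh\necho hello from configure\n' > configure
chmod +x configure
# Running it needs the ./ prefix, because '.' is not in $PATH
./configure      # prints: hello from configure
# Typing plain 'configure' would give 'command not found'
```
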
After running cd, look for files called README or INSTALL. These contain specific instructions on compiling and installing that particular application. In the case of FreeBasic, if you want to install from source, you have to do the configure-make-make install dance several times, after downloading two archives. Alternatively, you can use the pre-compiled binary archive - FreeBASIC-v0.16b-linux.tar.gz - which uses a completely different installation method with its own install script. Read the file readme.txt inside this archive for precise details on installation. We ask you to read the file rather than reproduce the instructions here, because there may be subtle changes in the installation process between versions. The readme.txt file should be considered authoritative. Always look for installation instructions when installing from an archive (as opposed to using a distro's package manager), as you are executing commands as root that could have an adverse effect on your system if done incorrectly.

Back to the list

****** OpenOffice.org desktop-integration package installation problem

Q:: Regarding OpenOffice.org 2.1 installation: I am very new to Linux so I've no idea how obvious the answer to my problem is, and any answer probably needs spelling out to me. Following the instructions I tried to install it into OpenSUSE 10.2. Everything went well until I entered

---
su -c "rpm -ivh *"
,,,

which returned the message:

---
'desktop-integration: not an rpm package (or package manifest): Is a directory'
,,,

This is where my scant knowledge fails me. I did try what seemed the obvious course of action and moved the desktop integration folder elsewhere, but that didn't seem to work. I did try tinkering around with some other stuff but I was really stumbling around in the dark.

A:: When the shell sees a * on the command line, it replaces it by all matching files - * means "match any string". In this case, it matches all the RPM files and the desktop-integration directory.
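You can watch the shell do this expansion with echo in a scratch directory - the file names below are made up to mirror the OpenOffice.org layout:

```shell
cd "$(mktemp -d)"             # work somewhere disposable
mkdir desktop-integration
touch a.rpm b.rpm desktop-integration/c.rpm
echo *       # prints: a.rpm b.rpm desktop-integration
echo *.rpm   # prints: a.rpm b.rpm - the directory no longer matches
```
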
The solution is to be more specific and use

---
su -c "rpm -ivh *.rpm"
,,,

This now matches anything that ends in .rpm, which is what you need. If you also want to install the RPM files in the desktop-integration directory, extend the command to include these:

---
su -c "rpm -ivh *.rpm desktop-integration/*.rpm"
,,,

Note that adding desktop-integration/* will not work, because not all of the files in that directory are RPM packages and you will end up back at your original error.

Back to the list

****** Mandriva not recognising Sitecom DC-009 USB modem

Q:: I have installed Mandriva 2007 to a Dell 5150. The Dell has no serial or parallel ports, just USB. I have a Sitecom 56k V.92 USB modem, model DC-009, but I cannot get Mandriva to connect to it. The USB keyboard and mouse work fine. I can get the cdc_acm module to load with modprobe but it does not seem to connect to a tty, although KPPP sees ttyS0 and ttyACM0 and reports 'modem busy' when I query the modem on either ttyS0 or ttyACM0. I have trawled the net to no avail, including www.linux-usb.org. The modem worked out of the box in Windows XP. These issues should surely be a thing of the past by now?

A:: It would appear that this modem either is not fully supported or needs some kind of firmware file. This particular modem failed to show up on a web search, but that is not too surprising. Many of these devices are made by one manufacturer and branded by another. However, all such devices need to be approved by the FCC (Federal Communications Commission) for sale in the USA, so you can find out what it really is from its FCC ID code. Type the code into the box at the bottom of www.hardwaresecrets.com/page/fcc to find out who really made your modem. Once you are in possession of that information, a search of sites like www.linux-usb.org should prove a lot more fruitful.
There is a further complication that has a bearing on your situation: USB modems can be problematic because they are not truly standardised, with only some of them conforming to the CDC-ACM specification. Your USB keyboard and mouse work well because they all conform to the same standards (USB HID). One way to sidestep this problem is to use a serial modem via a USB serial adapter. I have a couple of these devices, using different chipsets and both bought cheaply from eBay, and they both work very well with the majority of the serial devices I have tried (a UPS being the only exception). This way you can use any serial modem, as well as any other serial devices you may wish to use with this computer. It seems that we have managed to get rid of parallel ports and floppy disc drives, but the old serial port just won't go away.

Back to the list

****** Installing Qemu with GCC 3

Q:: The Qemu emulator looks interesting, but it is impossible to get working. It says it needs GCC 3 to compile, and OpenSUSE 10.2 has only GCC 4. Is there any way round this?

A:: Qemu is one of the very few programs that still fails to compile with GCC 4, but it is unfortunate that distros like OpenSUSE no longer have GCC 3 packages available. It is still possible to install GCC 3 on your computer, either directly from source or by using the RPM packages from Fedora, which are reported to work with OpenSUSE 10.2. However, this is a lot of work for a single program, and there are precompiled packages available. One OpenSUSE 10.2 user has compiled Qemu and made it available from www.hasanen.com/files/linux/qemu.tar.gz. Now there is also a package on SUSE's website. Point your browser at http://download.opensuse.org/distribution/SL-OSS-factory/inst-source/suse/i586 and click on the Qemu file (currently qemu-0.9.0-3.i586.rpm but it may have been updated by the time you read this).
When the browser asks what to do with the file, select the option to install it and wait for it to be downloaded and installed (you'll need to give the root password when asked). Alternatively, you can install it from the command line with

---
su -c "rpm -ihv http://download.opensuse.org/distribution/SL-OSS-factory/inst-source/suse/i586/qemu-0.9.0-3.i586.rpm"
,,,

The good news is that newer versions of Qemu are likely to be compatible with GCC 4.

Back to the list

****** Restrict SSH users: limit them to their own directory

Q:: I want to set up my SUSE 10.2 system to allow users to connect to my OpenSSH service. I can see any folder on the system when I connect (even as a regular user). How can I have it restrict users so that they can only see folders within the home folder that I assign to them when I create their user account? Is it possible to restrict certain users to just SFTP or SCP functions?

A:: An SSH login is virtually the same as a local login, apart from the fact that it operates through an encrypted tunnel. So a user has the same rights when they are logged in via SSH as they would when sitting in front of your computer. This normally means they can read system directories - otherwise they wouldn't be able to run any programs - but not modify them. It is possible to set up a system to chroot a user to their home directory on login, but this is a far from trivial task. If you want to try this, I suggest you look at Jailkit (http://olivier.sessink.nl/jailkit), a set of utilities which will make this task somewhat easier. Jailkit can also be used to restrict users to SFTP or SCP connections only, but there is a simpler alternative if this restriction is all you need. Scponly (http://sublimation.org/scponly) is a replacement shell program that refuses shell logins but allows SFTP and SCP connections. The simplest way to run this is to set the user's shell to scponly in /etc/passwd.
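In practice you would make the change (as root) with something like usermod -s /usr/bin/scponly fred - the path to scponly varies between distros, and the username is an example. As a safe illustration of what that does to the passwd entry, here is the same edit applied to a copy of a passwd-style line, with every field invented:

```shell
# A sample passwd-style line (all fields invented for the demonstration)
echo 'fred:x:1001:1001:Fred:/home/fred:/bin/bash' > passwd.sample
# Replace the shell field - the last colon-separated field - with scponly
sed -i 's|:/bin/bash$|:/usr/bin/scponly|' passwd.sample
cat passwd.sample   # fred:x:1001:1001:Fred:/home/fred:/usr/bin/scponly
```
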
This will prevent shell logins, but will still allow them to traverse the filesystem according to the permissions of the various directories and files. There is also a chroot option for scponly, but this also adds a level of complexity - you may be better off with Jailkit if you want this. The difficulty of setting up a chroot login is that you must provide all the files the user needs to do whatever they need, including running programs, within their home directory, without providing anything they don't need that you don't want them to see. Programs like Jailkit work very well for this when used for a specific purpose - such as chrooting a server - but not so well for interactive logins. In most cases it is probably better to keep your system secure at the local level, by ensuring that non-administrative users cannot read system-critical files; this security will then automatically apply to any SSH, SFTP or SCP connections.

Back to the list

****** Cron job to delete spam comments

Q:: As I'm getting rather too many comment spam attacks on my web server, I thought I'd set up a cron job to delete the comments automatically every so often, but I can't get the command to operate. I'd welcome any thoughts on the code I'm using:

---
mysql -h hostname -u username -ppassword -e 'delete from table_name where pn_cid > x'
,,,

I do have more than one database on the server, and I note the above line of code doesn't have any mention of which database it should address, so perhaps there is a missing switch?

A:: You do need to specify the database name (even if you have only one database), otherwise MySQL won't know how to apply your commands. The database name can be given as the last parameter on the command line, or within the commands you pass to the mysql program.
These are equivalent:

---
mysql -u user -ppword -e "delete from table_name where pn_cid > x" database_name
mysql -u user -ppword -e "USE database_name; delete from table_name where pn_cid > x"
mysql -u user -ppword -e "delete from database_name.table_name where pn_cid > x"
,,,

The first is simpler, but the others offer more flexibility. You can also pipe the commands to the MySQL client instead of using the -e option. This is useful if you want to run a number of commands, because you can put them in a file and do

---
mysql -u user -ppword <cmdfile
,,,

You also need to be aware that programs run by cron do not have the same environment as programs run from a shell. As there is no user login involved, the various environment variables in your profile are not set up. You can get around this by putting the commands in a short script:

---
#!/bin/sh
source /etc/profile #or ~/.bash_profile or ~/.bashrc
mysql -uuser -ppword ...
,,,

Use whichever file contains your environment settings on the second line and set your cron task to call this script instead of running mysql directly. Using a script also makes testing slightly easier, as you know exactly the same commands are used whether you run it from a terminal or cron. Specifying the password on the command line is considered insecure because it is then available to any user on the machine for as long as the program is running, simply by looking at the output from ps. A safer option is to put the password in ~/.my.cnf, as

---
[client]
password=your_pass
,,,

and

---
chmod 600 ~/.my.cnf
,,,

makes sure the file is only readable by your user.

Back to the list

****** Change 'from' address in Linux mail command

Q:: How do I change the 'from' address when using the Linux mail command? It insists on marking mail as from user@user-laptop (user-laptop is my hostname).
A:: This is not possible with the standard mail command without fiddling with the USER and HOSTNAME environment variables, which may have unwelcome side-effects on other programs running in the same shell. However, there are a number of alternative commands that will do what you want. Mutt is able to read the 'from' address from the EMAIL environment variable. This is worth knowing if you already have Mutt installed, but it is a lot more than you need if you only want to send out messages. A small alternative is SMTPClient (www.engelschall.com/sw/smtpclient), which is similar to mail in operation but accepts the --from argument to set the 'from' address. SMTPClient only passes your mail to a suitable SMTP server and defaults to localhost. If you want to use a different server, you will need to specify it with the --smtp-host command line option, or set the SMTPSERVER environment variable.

---
echo "Hello world - what else?" | smtpclient --smtp-host=my.mail.server --from=hubris@wherever --subject "Hello World" someone@someplace
,,,

Back to the list

****** Choosing backup software

Q:: I'm responsible for a Linux server and 30 workstations, running Windows 2000 and numerous software packages. I need a backup system that can handle the Linux filesystem and, where appropriate, be able to back up a Windows 2000 client. I was advised to use either Arkeia or BrightStor Arcserve Backup and I'm trying to find out which one of these two is more appropriate for my network for best backup results. It would be much appreciated if you based your recommendation on the pros and cons of the packages mentioned above, or do you think there's another software application that would help me more?

A:: Most of the applications that will do what you're looking for out of the box are only available commercially. However, Arkeia and BrightStor both seem to be very good products, with all the bells and whistles you would expect in an enterprise backup suite.
You can download a free 30-day trial version of Arkeia from the company's website. You even get 30 days of free installation support and I recommend taking advantage of this. That way, if you can't get it working for whatever reason, you don't pay. Its user interface can be slightly non-intuitive, but it's very powerful. Arkeia is a very Linux-friendly company and it's worth supporting its products if they're right for you. Computer Associates' BrightStor ARCServer suite is also excellent. It runs from a web GUI that's very intuitive. BrightStor is probably easier to use than Arkeia, and it has the huge corporate backing of CA. Another popular feature is the fact that its media format is compatible between Linux and Windows versions. The potential to build your own solution exists too. The problem comes with accessing files on the Windows systems. Built-in commands like tar and dump can access mounted Samba filesystems, but you'll have problems with system states and open files. Depending on your scenario, this may not be a realistic option for you. In the enterprise backup industry, I'd have to say that commercial software is the only realistic option.

Back to the list

****** Wireless network not working in OpenSUSE

Q:: I have downloaded and installed OpenSUSE 10.2 and everything seems to go well, except for my wireless network. I am totally new to this and would appreciate a dummy's guide to setting it up.

A:: The answer to this depends on the type of chipset used in your wireless card. First, run Yast and go into the Network Card section. If your card is displayed, you can skip ahead, otherwise you'll have to identify the card by running lspci or lsusb (depending on the card type) in a terminal. You may have to type the full path to lspci, /sbin/lspci. Google is great for finding out which drivers you need. For USB devices, there is a useful list of devices and drivers at www.qbik.ch/usb/devices.
Once you know the driver you need, the next step is to make sure it is installed. Some drivers are part of the Linux kernel, others need installing separately. Run /sbin/modprobe -l in a terminal to see a list of all in-kernel drivers. If yours is not listed, use the Search box in Yast's Software Management section to find a suitable package. If it is not in Yast, you need to go to the homepage of the driver to follow the instructions for installing from source. If the driver is present on your system but the card is not recognised, you probably need a firmware file. These are generally extracted from the Windows drivers. Once again, see the driver's website for details. As an example, lsusb identifies my Edimax USB device as a ZyDAS device. The above website confirms this uses the zd1211 drivers (included in recent Linux kernels) but the device also needs firmware files, available from http://sourceforge.net/project/showfiles.php?group_id=129083. SUSE expects the firmware files to be in /lib/firmware/zd1211 (other distros may use /lib/firmware), so create this directory, unpack the firmware archive and copy all the files to /lib/firmware/zd1211.

Now go back to the Network Cards section of Yast, or start paying attention again if your card was already recognised. Your wireless device should appear in the list - select it and click the Edit button. Select Automatic Address Setup on the Address tab. On the General tab, set Device Activation to On Hotplug if it is a plug-in USB device or has a switch (this will cause the wireless network to connect when you connect or turn on the device); otherwise use the Manual setting and control the device from the Network Manager icon in the taskbar. Press Next and Finish to exit the configuration. Go to the Network Manager applet in the taskbar, which should show a list of available networks. Select yours. If the network is secured with WEP or WPA encryption, you will be asked for the passphrase.
If possible, consider turning off encryption on your access point while setting up the connection: let's get the connection working before we add an extra level of complexity! Once the network connection works, disconnect, turn the encryption back on and reconnect. This time SUSE will ask for your WEP or WPA passphrase and you should be connected securely.

Back to the list

****** Expand Linux partitions to overwrite old Windows partitions

Q:: I'm new to Linux, and I have decided to completely wipe Windows XP from my laptop and just have Linux. I am dual-booting XP and Ubuntu; could you please tell me how to remove Windows and just have Ubuntu? How would I expand the Linux partitions to take over the space where Windows XP used to be? As I am a bit of a newbie, would it be easier just to totally format the drive and reinstall Linux?

A:: To answer your last question first, reinstalling Ubuntu from scratch and taking the option to use the whole disk would indeed be an easy way to do this, but you'd lose your existing setup and data. Removing the Windows partition and allocating the space to Linux would leave your existing Ubuntu setup intact, and you'd learn more about how Linux works in the process. Removing Windows is easy. The first step is to delete the Windows partition (usually hda1) using the Gnome Partition Editor available from the System > Administration menu. If this isn't available, you should install GParted from the Synaptic package manager. The Windows partition is usually easy to identify, because the filesystem is NTFS (or possibly FAT), and Linux doesn't use these filesystems. Next click on the unallocated space this leaves and press the New button to create a new Linux partition of type ext3 (the default settings should be correct for this). Now, with the new partition still highlighted, go to the menus and select Partition > Format To > Ext3 (see screenshot, right). Press Apply to make these changes.
The next step is to remove the Windows entry from the boot menu. Open a terminal and type

---
sudo -i
gedit /boot/grub/menu.lst
,,,

to load the boot menu into an editor. Towards the end of the file you'll find a line starting 'title Windows'. Delete from this line down to the next blank line and save the file. Your boot menu is now Windows-free. Adding the space you've just freed up is somewhat less straightforward. Linux partitions can only be resized by moving the end position, yet the space you've freed up lies before the beginning of the Linux partitions, because the Windows partition was the first on the disk. Fortunately, Linux allows you to use multiple partitions - in this case we can use the space previously taken by Windows as your home directory (an advantage of this approach is that if you reinstall or switch to a different distro, you can keep your personal files because they're on their own partition). You tell the system to use this partition for home by adding a line to the file /etc/fstab (filesystem table). In the terminal you've just used, type

---
gedit /etc/fstab
,,,

Add the following line and save the file:

---
/dev/hda1 /home ext3 defaults 0 0
,,,

Before you reboot, which will activate the new home partition, you need to copy your existing files across. Still in the terminal, type

---
mkdir /mnt/tmp
mount /dev/hda1 /mnt/tmp
mv /home/* /mnt/tmp/
reboot
,,,

This mounts the new partition somewhere temporary, moves your home directory over to it and reboots the computer to make the changes permanent. After this, there will be no sign of Windows at the boot menu, and when Ubuntu comes up, the space previously used by Windows will be available for storing your own files.

Back to the list

****** Force all web traffic to go through a proxy server

Q:: I've been running a Squid (and SquidGuard) web proxy on my Fedora box. I've set up SquidGuard blocking rules to protect my children from undesirable content.
What this means is that on their (Windows XP) machine, I set the internet to route through my proxy server (192.168.100.100:8080), and all is well. What concerns me is that my eldest is becoming quite savvy and it won't take him long to realise that if he unticks the box marked Use Proxy Server and switches to a direct connection to the internet, he'll get unfiltered access. Can I force all traffic to go through my (always-on) FC6 machine - perhaps by setting up port forwarding on the router (to which only I have the password) - so all web traffic has to go through the proxy server and if he switches to a 'direct' connection he will get no internet? If so, how? I've tried redirecting port 80 and 8080 to the IP of my PC but that doesn't seem to work.

A:: By "the internet" I take it you mean the world wide web, which is all that Squid normally handles. However, you can force all internet traffic to go through your FC6 box and then through SquidGuard with three steps. First - and how you do this depends on your router - you have to configure your router so that it only allows your FC6 box to connect to the internet. The port forwarding you set up only affects incoming connections, so remove that. Secondly, you need to set your FC6 box up as a default gateway, so all internet traffic (not just web traffic) goes through it. Edit the file /etc/sysctl.conf, as root, and change the line

---
net.ipv4.ip_forward = 0
,,,

to end in 1 instead of 0. Now run

---
service network restart
,,,

You should now reconfigure your children's computer to use the IP address of your FC6 box as its network gateway. Because you have disabled their access via the router, this is now the only way they can connect to the net. That still leaves the problem of your children removing any proxy setting, so now we use a feature of Squid called transparent proxying.
This forces all web requests going through the machine - and you've already forced all web traffic through it with the previous steps - to go through Squid's proxy and hence through SquidGuard. Edit the Squid configuration file (usually /etc/squid/squid.conf) and find the line(s) starting 'http_port'. This probably reads http_port 8080 in your file. Change this to

---
http_port 80 transparent
,,,

The 80 sets it to work on the standard HTTP port. The transparent option makes Squid intercept and handle all requests, regardless of whether the browser is configured to use a proxy server or not. You should either remove the old proxy settings from the browsers or add a line to handle requests to the old 8080 port.

---
http_port 8080 transparent
,,,

There is an alternative way of handling this. You can leave http_port set to 8080 and use an Iptables rule to forward all port 80 requests from addresses that you want to proxy to port 8080. This is more complex but it gives more flexibility, such as allowing some machines to bypass the proxy altogether. There are details on this on the Squid website at www.squid-cache.org. You could also use Iptables, or one of the many front-ends such as Firestarter, to block outgoing traffic to all but the common ports (such as HTTP, HTTPS, POP3, SMTP and FTP). This will prevent your children from using a remote proxy that works on another port. You could possibly do this on the router; however, implementing it on the FC6 box would allow you to block them but still have unrestricted internet access for yourself.

Back to the list

****** Rename files and change timestamps according to EXIF data

Q:: I have a photo collection that has got out of hand - several gigabytes' worth. I need to organise them so I can get a good backup. Do you know of a program that will rename a file based on the EXIF date of the image and change the Modified Date of the file to the same EXIF date?
My last attempt at a backup before I wiped my PC managed to set all the file dates to when the DVD was burned. Also, I've managed to get myself several duplicate images spread across my entire collection (yep, I really messed up), each with different filenames. Any idea how I could sort them (maybe with EXIF data again) without having to look at a few thousand photos? If it helps, I'm using Fedora 64-bit and I'm not scared of the command line.

A:: There are several programs capable of working with EXIF data. My favourite is ExifTool (www.sno.phy.queensu.ca/~phil/exiftool). ExifTool can read and manipulate just about any EXIF information, including extracting the Date/Time Original or Create Date EXIF tags. You can use this information to rename the files or change their timestamps. For example:

---
find -name '*.jpg' | while read PIC; do
DATE=$(exiftool -p '$DateTimeOriginal' $PIC | sed 's/[: ]//g')
touch -t $(echo $DATE | sed 's/\(..$\)/\.\1/') $PIC
mv -i $PIC $(dirname $PIC)/$DATE.jpg
done
,,,

The first line finds all *.jpg files in the current directory and below. The next extracts the Date/Time Original tag from each file (you may need to use Create Date instead, depending on your camera) and removes the spaces and colons. The next line sets the file's timestamp to this date - the horrible-looking sed regular expression is necessary to insert a dot before the final two characters, because the touch command expects the seconds to be separated from the rest of the time string like this. The final command renames the file, using the -i option to mv in case two files have the same timestamp. This will stop any files being overwritten. It's also possible to do this with most digital photo management software without going anywhere near a command line - DigiKam, KPhotoAlbum, F-Spot and GThumb all have options for manipulating files based on the EXIF data.
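To see what the two sed expressions in that script are doing, you can run them on a sample date string - the value here is invented, in the colon-and-space format exiftool prints:

```shell
DATE_RAW='2007:03:15 12:34:56'
# Strip the colons and spaces, leaving CCYYMMDDhhmmss
DATE=$(echo "$DATE_RAW" | sed 's/[: ]//g')
echo "$DATE"                          # prints: 20070315123456
# Insert a dot before the seconds - the CCYYMMDDhhmm.ss format touch -t expects
echo "$DATE" | sed 's/\(..$\)/\.\1/'  # prints: 200703151234.56
```
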
The disadvantage of using these programs for this is that they generally only work on a single directory at a time, whereas the above shell commands convert all JPEG files in a directory and all of its sub-directories. If you have several gigabytes of photos in the same directory, your collection is more out of hand than renaming some files will fix! The solution to your duplicates problem is a program called fdupes (available from http://netdial.caribe.net/~adrian2/fdupes.html or as an RPM for FC6). This compares the contents of files, so it will find duplicates even if they have different names and timestamps.

---
fdupes --recurse ~/photos
,,,

will list all duplicate files in your photos directory. There are also options that you can use to delete the duplicates:

---
fdupes --recurse --omitfirst --sameline ~/photos | xargs rm
,,,

Be careful of any option that automatically deletes files. Run without deletion first so you can see what's going to happen.

Back to the list

****** Firefox failing to recognise Rockwell IQ148 modem connection

Q:: I cannot get Firefox to 'see' the modem connection that I've painstakingly set up. I'm fairly sure that it's working correctly, as running pon from the command line causes the modem to dial out, and poff makes it hang up. However, activating Firefox from the desktop is the problem. The Ethernet connection to broadband works fine, but disabling it and making the modem the default connection brings up the 'server not found' screen. The modem is a Rockwell IQ148 and I'm using Ubuntu Dapper 6.06. I'm trying to set up the computer for my partner, who doesn't have broadband but has been gradually converted from XP by using my machine.

A:: This is almost certainly a general problem with your internet connection and not specifically related to Firefox. It sounds like your system is still trying to use the Ethernet connection.
Type this in a terminal:

---
route -n
,,,

The line we're interested in is the last one beginning '0.0.0.0' as this is the default route for all non-local connections. I suspect it looks something like this:

---
'0.0.0.0 192.168.0.1 0.0.0.0 UG 0 0 0 eth0'
,,,

The last two numbers in the second string will probably be different, but if it ends in 'eth0' (or anything but 'ppp0') this is the cause of your troubles. You need to make sure the eth0 settings are purged from your system, especially if you'll no longer be using Ethernet with your broadband provider, by selecting it in the Network Settings window and pressing the Delete button (the middle of the three obscure-looking buttons at the top-right of the window). Another possibility is that your modem connection hasn't completed. The fact that the modem dials out doesn't guarantee that a connection is made. Try running

---
sudo plog
,,,

after an apparently successful modem connection. This will show you the last few lines of the connection log: it should be obvious if anything has gone awry here. You can also check the status of your network connections with

---
/sbin/ifconfig -a
,,,

If the eth0 interface appears, it shouldn't be marked 'UP' nor have an 'inet addr' entry. Equally, ppp0 should be marked 'UP' and have a valid address. It's also possible that you're connected to your ISP but not able to look up internet domain names. Run these two commands in a terminal:

---
ping -c 5 www.google.com
ping -c 5 216.239.59.104
,,,

The first attempts to contact Google by name, the second bypasses the DNS lookup and goes directly to its IP address. If only the latter works, your DNS information hasn't been correctly set up. You'll need to contact your dial-up ISP and get the addresses of the DNS servers, then put them into the file /etc/resolv.conf. It should look something like:

---
nameserver 1.2.3.4
nameserver 1.2.4.5
,,,

You can either edit the file directly or use the DNS tab of the Network Settings tool.
It's possible you still have your broadband ISP's name servers in here. These should be removed. Back to the list ****** Using command line mail in Ubuntu Q:: I'd like to use the shell for my email. Can you tell me how this can be set up? I currently use Ubuntu 6.10. A:: Do you mean you want to run a mail client within your shell, or do you want to be able to send mails from shell scripts? There are several terminal-based mail programs, the most popular of which is Mutt (www.mutt.org). Mutt is included in Ubuntu's main repository, so you can install it from Synaptic. If you want to send emails from a Bash script, the mailx command (usually invoked as mail) is the simplest solution, and is probably already installed on your system. This program mails whatever it receives on standard input to a specified address. For example: --- echo "Hello World" | mail -s "Obvious example" me@example.com ,,, The subject of the mail is given with -s (use quotes if it contains spaces), and everything received on standard input forms the body of the mail, so it's good for mailing program output. Back to the list ****** OpenGL not working on Sony VAIO laptop Q:: I have Ubuntu 7.04 Feisty Fawn installed on a Sony VAIO VGN-FJ250P laptop. I'm satisfied with almost all aspects of this distro, with just one or two niggling problems. The one I've been putting the most effort into recently regards OpenGL. It seems not to work on this Linux system. I know that it's supported by the Intel video chipset, because I dual boot with Windows XP Pro, and OpenGL applications run fine there. One of the affected applications is Planet Penguin Racer, which ran fine on the Live CD but doesn't run when installed on the hard drive. Attempting to run it from the menu produces nothing, while attempting to run it from the command line in a terminal produces the following error message: --- '*** ppracer error: Couldn't initialize video: Couldn't find matching GLX visual (Success) Segmentation fault (core dumped)' ,,,
A:: The good news is that OpenGL works on your hardware with the Live CD, so the hardware is supported and the software is present on the CD. This is a configuration problem, almost certainly in xorg.conf, caused by the installer not setting up your graphics card correctly. Boot from the Live CD, mount one of your hard disk partitions or a USB pen drive, and copy /etc/X11/xorg.conf to it. Now boot from your hard disk and compare its copy of xorg.conf with the one you just saved. The most likely cause is that your hard disk version of the file is either using the wrong driver (the Driver line in the Device section of the file) or that the GLX module isn't being loaded. Before you make any changes to this file, save a backup copy: you don't want to make things worse. The correct driver for your hardware should be i810, although using whatever is in the Live CD version of the file will work. The GLX module is loaded by including this line in the Module section of xorg.conf: --- Load "glx" ,,, If both of these are set correctly and OpenGL doesn't work, you could work through the two files looking for differences and trying to identify which one is the cause. Or you could simply replace the installed file with the Live CD version, knowing it will work. Back to the list ****** Limit email relaying to one user Q:: We currently have an email server running Postfix, and users either use Outlook Express or the web-based SquirrelMail (running on the server). This works fine at the moment, and only clients on the internal network can relay email to the outside world. We recently appointed someone who needs access 'on the road' via a smartphone. This is fine, as we've got IMAP open externally for his folders, and he can use our Postfix SMTP server to send email - but only to local recipients (to prevent us being a spam relay). We'd ideally like for said person to be able to send email anywhere.
What part of Postfix would I go about changing to allow only him to relay email to other domains and from outside of $my_networks, without affecting the current rules allowed by webmail or internal clients? A:: The answer lies with SMTP authentication, which will allow users to authenticate themselves before sending mail. Postfix can be configured to relay only mail from authenticated users. Postfix uses Cyrus-SASL for authentication, so make sure this is installed and that the saslauthd service is started when you boot. To configure Postfix to use Cyrus-SASL, edit /etc/postfix/main.cf and make sure that mydomain, myhostname and mynetworks are correctly set. Now add the following lines to the end of the file: --- smtpd_sasl_auth_enable = yes smtpd_sasl_security_options = noanonymous smtpd_sasl_local_domain = $myhostname broken_sasl_auth_clients = yes smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,check_relay_domains ,,, The fourth line is optional; it is required with some versions of Outlook Express and Microsoft Exchange. If your user is only using his smartphone, try without this line. Restart Postfix, or force it to reload its configuration, and any valid user on your system should be able to use your SMTP server from anywhere, provided they set their mail program to use SMTP authentication. Users on your network will still be able to send mail without altering their mailer configuration. There is a detailed HOWTO on this subject at http://postfix.state-of-mind.de/patrick.koetter/smtpauth. It also covers using TLS to encrypt communication between your user and the server. This should be considered essential; otherwise your users could be sending passwords as clear text. You can also use SASL for authentication from inside your network. For example, you could configure Postfix on a school network so that all users can send mail within the network but only teachers can send mail outside.
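Before handing the new settings to your user, you can check that the server is accepting SMTP authentication. The following is a minimal sketch (the username and password are placeholders - substitute a real account on your server) that builds the base64 token the AUTH PLAIN command expects: the authorisation identity, username and password separated by null bytes:

```shell
#!/bin/sh
# 'user' and 'pass' are placeholders - use a real account on the mail server.
USER=user
PASS=pass

# AUTH PLAIN expects base64 of "\0username\0password".
TOKEN=$(printf '\0%s\0%s' "$USER" "$PASS" | base64)
echo "$TOKEN"

# To test by hand, connect with 'telnet yourserver 25', send
# 'EHLO test', then 'AUTH PLAIN <token printed above>'.
# A reply beginning '235' means authentication succeeded.
```

If the EHLO response doesn't advertise AUTH at all, check that saslauthd is running and that Postfix has reloaded the main.cf changes above.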
Back to the list ****** Connect two monitors to Nvidia FX5200 video card Q:: I've just bought a new LCD monitor as a replacement for my old CRT. My Nvidia video card has two outputs - so is it possible to connect both monitors to the card and expand my KDE desktop to fill both of them? I'm using Gentoo 2006.1 with an Nvidia FX5200 video card. A:: The answer is yes. There is a standard way of combining two screens as a single X display, called Xinerama. This is normally used with two graphics cards, but the Nvidia drivers contain a feature called TwinView that lets you display two screens from one card, one on each monitor output, while remaining compatible with Xinerama. Enabling TwinView (and Xinerama) is simple, assuming you're using the Nvidia drivers and not the open source nv driver. First emerge nvidia-drivers and make sure X is running on the Nvidia drivers - the most obvious indication is the Nvidia logo that pops up when X starts. Next run nvidia-settings from a root terminal. This is a separate package on Gentoo, so you'll need to emerge it if you haven't already. Select X Server Display Configuration from the list on the left and you should see both displays, although one may be marked disabled. If one display isn't available, click on the Detect Displays button, select each display in turn and pick the correct resolution. It's best if both give the same resolution, but any Xinerama-aware window manager can handle different sized displays. Now set the Position for each screen. You can do this with absolute positioning for maximum control, but it's usually best to set one display to Right Of the other and the opposite for the other display. Click on the Save To X Configuration File button, log out and restart X. You should now have a desktop that spans two monitors, but it may need some tweaking. Make sure all applications are built with Xinerama support. 
If you don't already have xinerama in your USE flags, edit /etc/make.conf and add it, then rebuild all affected packages with: --- emerge --update --deep --newuse --ask world ,,, This may take a while, but when it is finished you can restart KDE and begin tuning it to suit your tastes. For example, the desktop can have a single, wide wallpaper or different ones for each monitor. Or the Kicker panel can be on a single monitor or stretched across both. The Multiple Monitors section of the Desktop settings allows you to set how windows behave and which is the default display for opening new windows. Pick your LCD monitor here. A useful feature is the Advanced > Special Window Settings menu option available when right-clicking a window's title bar. This lets you override default window manager behaviour for specific windows or applications. It's useful with a single display but even more so with dual displays, especially as it can force specific windows or applications to open in a particular position. For example, I get Gimp to open its toolbox on one display while opening the images on the other, so I can use a full screen window to edit an image without obscuring the toolbox. Back to the list ****** Get a remote X connection to work Q:: I got into Linux many years ago after installing Red Hat 5.1 on my Amiga 4000. While managing to get to grips with it fairly well, I have never succeeded in getting a remote X session to work. I can log in via SSH and use the shell, but I really want to access my remote machine with X. My remote machine runs MythTV on Kubuntu, and the one I want to access it from is running Gentoo. I only want to access the desktop for simple administration tasks (not viewing MythTV), so it shouldn't be impossible, but I've got so confused as to which is considered client and server or remote and host that I'm lost! I'm using AMD64 and some don't seem to like it. 
A:: I too started using Linux on an Amiga 4000 (with Red Hat 4.5); things were nowhere near as easy back then as they are now. Remote X access is relatively straightforward, and useful with MythTV because the mythtv-setup program can run on a remote back-end but opens an X window. The client-server thing can be confusing with X if you are used to the web model of considering the remote machine to be the server and your desktop computer the client. The X server is the program responsible for creating the X display, so it runs on the local machine. The clients are the programs running on that display. So your Gentoo desktop is the server and the programs on the MythTV box are the clients. Running an X program on a remote server over SSH is straightforward and works with the default SSH settings in Gentoo and Kubuntu. SSH into your Kubuntu machine from your Gentoo box with the -Y option. You can then run X programs and make them open their windows on your Gentoo desktop. For example, doing --- [user@gentoo]$ ssh -Y kubuntu user@kubuntu's password: [user@kubuntu]$ mythtv-setup ,,, will run the mythtv-setup program from the Kubuntu box on your Gentoo desktop. You may occasionally find that you cannot log out of the SSH session after running an X program. This can be caused by the program having started other processes that are still running; for example, KMail opens a couple of communication sockets. Run ps in another SSH session to identify these, then kill them and you will get your prompt back. The other applications you refer to are probably desktop-sharing programs, which mirror or open an X desktop on a remote machine. These require X to be at least installed on the remote computer, and in the case of programs that mirror it, the desktop must be running. As you are using KDE, the simplest of these is KDE's own krfb and krdc. The former is a server, run on the remote computer and configured in the KDE Control Centre.
The latter is run on the local box to show the other computer's desktop in a window. Both are installed by default in Kubuntu; you will need to emerge kde-base/krdc on your Gentoo system. VNC works differently by opening a desktop screen specifically for the remote display, separate from any local desktop screen that is running. Back to the list ****** Has my Linux box been hacked? Q:: I have a server that has been acting strangely lately. I sometimes need to press Enter twice at the end of each line when logged in over SSH from home, but this never happens locally. Also, some of the system commands, like lsmod, are giving me segmentation faults. Actually, it's only lsmod. I think my system might be compromised. I have no problem reloading from CD because this is just a test environment, but what can I do to confirm that I've been hacked? Also, if this was my live system, what could I do to recover from this? A:: Unfortunately, there are many types of system compromises around today. From the information you've given me, it's difficult to tell what state your system is in. In a distribution that comes with precompiled binaries, system files such as lsmod definitely shouldn't be segfaulting. This could be put down to bad hardware but you would probably see more commands causing these problems if that was the case. Let's assume the worst but be sure to rule hardware out. If you find that the server has been compromised, the best thing to do is re-install your operating system. Even if you're extremely skilled at rooting out the attacker, you can never be absolutely sure that you've got every single backdoor secured. If re-installation isn't an option then knowing exactly what has been done should help you get your system back to a usable state. If you have access to your bandwidth stats, now would be a good time to check them out. Of the compromised servers we see, most of them are used to launch further attacks, send spam or carry out other illicit activities.
If you see any sudden increase in traffic, you should get a rough idea of when an attacker gained access. This should enable you to narrow your search down somewhat. From the clues the bandwidth charts may have given you, go through your log files. Check /var/log/messages for any strange ssh activity. Also, /var/log/maillog may show lots of mail leaving your server. Apache's logs can give you a clue if Apache was used to compromise the server, so look for lines containing wget, cmd, ftpget or cat. It could be that one of your pages allows remote execution of commands. If any request containing one of these commands received a 200 status, the command was executed successfully. dmesg may show if somebody has tried to put a network card into promiscuous mode or if any strange kernel modules have been loaded. You could also look at lastlog to see whether any users you weren't expecting to log in did so. If you use one of the RPM-based distributions you could do an RPM verify (rpm -Va). This will show you any file that differs from the installed RPM package. Any binary files should get your attention here. There are several toolkits you can use to check for rootkits. Two of my personal favourites are chkrootkit (www.chkrootkit.org) and rkhunter (http://rootkit.nl). It's worth opening /etc/passwd to look for non-root users who have a UID of 0. While you're there, check if there are any user accounts you don't recognise. You may be lucky enough to find a 'hax0r' or 'r00t', although it could also be a service name that's slightly misspelled. Open the .bash_history file for any users that have logged in to look for any suspicious commands. The last thing I'm going to cover is processes. Tools like netstat, top and ps will all show you if there are any unusual programs running. It's worth noting that these are often the first files an attacker will overwrite, often with a version that will cover his tracks.
Make sure that top's CPU and memory usage are in line with the processes it shows. Check netstat for sshd (or other processes) running on an unusual port number. Be especially cautious of the high ports (above 1,024) because these don't require root privileges to open. This is a very broad topic and my discussion is by no means intended to be definitive - entire volumes have been written on this subject and nothing will beat good, solid research here. Back to the list ****** Can you use CentOS repositories with RHEL? Q:: According to the CentOS website at www.centos.org, CentOS "aims to be 100% binary-compatible" with "a prominent North American enterprise Linux vendor." That got me thinking. Can you point Yum on an honest-to-goodness install of Red Hat to the CentOS repositories? I've noticed when upgrading my CentOS box that a lot of the packages still have the Red Hat name (such as patch_for_foo-RHEL-6.3.2). So it would seem that this could be a way to keep a server up to date after your Red Hat service runs out. I know it would not be the ideal way to do things, but would it work? A:: This would seem to be possible, according to reports from the CentOS forums, provided you are using equivalent versions, such as going from RHEL 5 to CentOS 5. You have the choice of either using the CentOS repositories instead of the Red Hat ones or converting your installation from Red Hat Enterprise Linux to CentOS. Before you do anything else, you should make sure you are no longer registered with Red Hat Network.
Put this in your Yum configuration (a new file in /etc/yum.repos.d/ is the usual place) to add the CentOS repositories:
---
[CentOS5-base]
name=CentOS-5-Base
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=os
gpgcheck=1
enabled=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

[CentOS5-updates]
name=CentOS-5-Updates
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=updates
gpgcheck=1
enabled=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5

[CentOS5plus]
name=CentOS-5-Plus
mirrorlist=http://mirrorlist.centos.org/?release=5&arch=$basearch&repo=centosplus
gpgcheck=1
enabled=1
gpgkey=http://mirror.centos.org/centos/RPM-GPG-KEY-CentOS-5
,,,
Disable your RHEL repositories by changing the enabled=1 line to enabled=0 for each of them. Those settings have gpgcheck turned on, so each package is verified against the CentOS GPG keys before installing. You can install these keys with --- rpm --import http://isoredirect.centos.org/centos/5/os/i386/RPM-GPG-KEY-CentOS-5 ,,, If you want to switch over to CentOS completely, you need to install two small packages from the CentOS repositories, either centos-release-5-0.0.el5.centos.2.x86_64.rpm and centos-release-notes-5.0.0-2.x86_64.rpm or centos-release-5-0.0.el5.centos.2.i386.rpm and centos-release-notes-5.0.0-2.i386.rpm, depending on whether you are running a 64-bit or 32-bit system. You should also make sure that you remove any Red Hat *-release-* packages. You may get conflict warnings from Yum because you still have the RHEL versions of most packages installed. The best long-term solution to this is to install the CentOS packages, turning your system into a pure CentOS one. As you no longer have a RHEL support subscription, there is no benefit in keeping the Red Hat-branded packages installed, and moving over to a pure CentOS system will make it easier if you need support from the CentOS community. Back to the list ****** Shrink Ogg files Q:: Is it possible to reduce the bit rate of OGG files?
I encoded at 458kbps, and they are taking up too much disk space. A:: The Ogg Vorbis specification includes the ability to reduce the bit rate of a file without re-encoding, but the current software does not do this, so you need to uncompress and recompress each file. This does mean that there will be some loss of quality compared with encoding at the lower setting to start with, although it is likely to be minimal when coming down from such a high bit rate. Where you still have the original sources, re-encoding from scratch is the best option. Otherwise, this will decode and re-encode a single file: --- oggdec oldfile.ogg -o - | oggenc -q N -o newfile.ogg - ,,, Use whatever N quality setting you want. Replace the -q with -b if you prefer to specify the average bit rate instead of quality level. You can convert all files in a directory with
---
mkdir -p smalleroggs
for FILE in *.ogg
do
if oggdec "$FILE" -o - | oggenc -q N -o "smalleroggs/$FILE" -
then
vorbiscomment -l "$FILE" | vorbiscomment -w -c /dev/stdin "smalleroggs/$FILE"
fi
done
,,,
This re-encodes each file and copies the tags from the old file to the new one. If you want to recurse into a directory structure, you will need the find command to locate all *.ogg files. This version also overwrites the original files, so use with care.
---
find -name '*.ogg' | while read FILE
do
NEWFILE=${FILE/.ogg/_new.ogg}
if oggdec "$FILE" -o - | oggenc -q N -o "$NEWFILE" -
then
vorbiscomment -l "$FILE" | vorbiscomment -w -c /dev/stdin "$NEWFILE"
mv -f "$NEWFILE" "$FILE"
fi
done
,,,
Back to the list ****** Execute PHP files locally Q:: I'm having trouble with browsing .php files on my Linux (Mandriva 2007 Free) machine. It keeps trying to open them with KWrite instead of just running them. As I'm currently trying to teach myself PHP, when I'm running an HTML file that calls a PHP process I really don't want to look at the code - I want the PHP to just, well, run. A:: You don't run PHP files from a file manager.
PHP is a server-side scripting language, so you need to load the PHP page from a web server into your browser. Locally, they are just text files, and your file manager will perform whatever action it is configured to do on text files - in your case to load them into KWrite. This means you need to run your own web server, which is nowhere near as scary as it sounds. Fire up the Mandriva Control Center, go into the software installation section, type 'mod_php' into the Search box and select apache-mod_php-5 for installation. This will also install various other packages that you need to serve PHP files. When the installation is complete, go into the System section of the Control Center and select the System Services item. Ensure that httpd (the Apache process) is set to start on boot and, if it is not running now, start it. Point your browser at http://localhost and you should see the Apache test page, or maybe just an 'It works!' page, confirming that you now have a working web server. Now all you need to do is put your PHP files in the web server's DocumentRoot, the directory where it looks for files to serve. Mandriva defaults to using /var/www/html for this, so save the following as /var/www/html/test.php: --- <?php phpinfo(); ?> ,,, Load http://localhost/test.php into your browser and you should see some information about the server and the system running it. If so, Apache is not only installed, it is set up to serve PHP pages and you can continue learning the language. Good luck! You may run into permissions problems editing files as your normal user for inclusion in the DocumentRoot directory. This can be solved by adding your user to the Apache group and setting the directory to be writable by members of that group, by typing this in a root terminal:
---
gpasswd -a yourusername apache
chgrp apache /var/www/html
chmod g+w /var/www/html
,,,
You will need to log out and back in again for this to take effect.
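If you prefer to check from the command line rather than the browser, this small sketch (assuming the test.php above, and that curl is installed - it's in the Mandriva repositories) tells you whether Apache is executing the PHP or just sending the source back:

```shell
#!/bin/sh
# If mod_php is working, the response is rendered HTML; if not,
# the literal '<?php' tag comes back unprocessed.
if curl -s http://localhost/test.php | grep -q '<?php'; then
    echo "Raw PHP source returned - mod_php is not active"
else
    echo "PHP executed OK"
fi
```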
Back to the list ****** Sync Linux laptop with Buffalo LS-250 LinkStation Q:: I have just installed a Buffalo LS-250 LinkStation [a networked storage device] on my home network (me running Kubuntu Dapper and three Windows XP machines). I have no problems at all copying files to and from my Dapper laptop and it was very easy to set up. But! What I would like to do is to sync my laptop with the LinkStation, and I'm not sure how to do it. I've successfully set up Unison between my laptop and one of the Windows XP machines, but I don't know if this is possible with the LinkStation. I've looked at rsync, but that too seems to need a software installation on both the laptop and the LinkStation. A straightforward command line copy would do me, so that I could write a script to copy only new files each way, but rsync now seems to be the default for that. Also, on the XP machines I can open and edit files on the LinkStation, but Samba only lets me open a copy on the Dapper laptop. Can this be changed? A:: You actually have two Linux computers on your network, because the LinkStations run Linux too. There is an active community at http://linkstationwiki.net with plenty of information on the various LinkStation models, including your LinkStation Pro. Of most interest to you will be the replacement firmware project. FreeLink replaces the standard firmware with a Debian variant. This is more extreme than OpenLink but gives more flexibility, although you currently lose the web interface. OpenLink is based on the stock firmware but adds some software. The most interesting of these are SSH and rsync. However, the LS-LG that you have is a new model, and OpenLink did not support this at the time of writing, although that may have changed by the time you read this. If you don't wish to mess with your firmware, there is a much simpler solution. 
If you mount the device using Samba you can use rsync without installing anything on the remote machine as you are effectively syncing two local directories. --- rsync -avx ~/myfiles/ /mnt/buffalo/myfiles/ ,,, You should be able to work with files directly on the device over SMB. As you use KDE, you should try the KIO slave route first, opening a file as smb://name/path/to/file. Try to browse the files in Konqueror and open them in your editor. If this fails, it is probably down to the share permissions and Samba setup. If you run the programs from a shell, you should be able to gain more information from the error message printed there. For example: --- kwrite smb://name/path/to/file ,,, Back to the list ****** SUSE LCD monitor problem: 'not supported' Q:: I hope there is a simple answer to this simple hardware-related question. Every time that I try to load SUSE 10.2 with my new 19-inch flat-screen monitor, I get the message 'not supported'. How do I get over this? The computer works fine with an old 14-inch CRT monitor. A:: A hardware issue that doesn't involve proprietary driver woes? Makes a change! Right, is this a single message right in the middle of your screen with nothing else displayed? If so, it is a message from your monitor telling you that the computer is sending a signal that is out of its normal range. It usually means the computer is trying to display at too high a resolution or refresh rate. This is caused by the installer incorrectly recognising the monitor, so its idea of what the monitor can handle differs from the monitor's actual capabilities. There is a simple answer, as this affects only the installer, and that is to force the installer to use a lower resolution. Press the F3 key at the boot menu screen to select a different resolution. Work your way up the menu (lower resolutions are towards the top of the list) until you find a setting that works. As a last resort, you can install in text mode.
This is less attractive and takes getting used to, but you end up with an identical installation. This problem affects only the installation; once the system is installed, you will be able to choose suitable video settings to ensure you have a graphical desktop. It may well detect your monitor correctly at this stage. Back to the list ****** Force X.org to use a higher resolution Q:: I want to access a computer, running without a monitor, via remote desktop connection (krdc). Because the remote machine boots without a monitor, X.org drops back to VGA (640x480). Is there any way I can force X.org to use a higher resolution? I do not want to use X forwarding; I need to view the whole desktop. The computer is running Debian Etch and I have attached my xorg.conf. A:: This drop in resolution is caused by your X.org configuration. Here is the offending part of your xorg.conf:
---
Section "Monitor"
Identifier "BenQ T701"
Option "DPMS"
EndSection
,,,
As you can see, once this part is extracted from the whole file, it gives no details about the monitor's capabilities and limitations. This is becoming a standard approach and generally works well with modern monitors that support EDID (Extended Display Identification Data). This is where the software queries the monitor and gets back the information needed to set up a suitable display. Since it is possible to damage a monitor by sending it a signal at too high a frequency or resolution - although most monitors have protection against that sort of thing these days - X.org falls back to a safe 640x480x8-bit display if it gets no response to its EDID query. The solution is quite simple: add the information on horizontal and vertical frequencies that X.org needs, and it will stop trying to ask the nonexistent monitor. You need to add HorizSync and VertRefresh options to the Monitor section above. If you ever connect a monitor to that computer, you will find the values in the monitor's manual.
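For example, the completed Monitor section could look like the following. The sync ranges shown are illustrative figures typical of a 1,280x1,024 LCD, not values taken from BenQ's documentation, so check the T701's manual before relying on them:

```
Section "Monitor"
    Identifier  "BenQ T701"
    Option      "DPMS"
    HorizSync   31-81
    VertRefresh 56-76
EndSection
```

HorizSync is given in kHz and VertRefresh in Hz; X.org will discard any video mode that falls outside these ranges.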
If you are never, ever going to connect a monitor to this system, you can use any reasonable figures, otherwise get them from the monitor's manual to make sure it works when you want to use it. After restarting X, you should find it opens a display at the resolution given in xorg.conf, 1,280x1,024. Back to the list ****** Can't access Windows hard drives in Mandriva Q:: I have just installed Mandriva 2005. This is the second time I've done this. The first time I could read my Windows hard drives but this time I can't. I appear to be locked out. How can I get access to these disks as I did last time? The previous installation was on another hard drive, which I don't have any more. A:: The solution to this depends on two things: the type of filesystem you are using on your Windows partition and what you mean by "locked out". If you had full read and write access to the Windows partition before, it is most likely using the FAT32 filesystem. In that case, if you mean you are able to mount the partition but not write to it or look inside directories, this is a simple permissions problem. Fire up the Mandriva Control Center, go into the Mount Points section and select Create, Delete And Resize Hard Disk Partitions. Select your Windows partition, go into Expert mode and press the Options button. The box in the middle of the Options window will probably contain 'defaults'. Tick the box labelled Umask=0, followed by OK and Done. You now need to remount the partition to apply the new settings. You could do this by rebooting, but this is Linux, not Windows, so open a terminal and type --- su -c "mount /mnt/windows -o remount" ,,, replacing /mnt/windows with wherever your Windows partition appears. Give the root password and you can now read and write to your Windows partition. The reason for this is the umask=0 that you added to the partition's mount options. The Windows FAT32 filesystem doesn't have any file permissions of its own.
This option tells the system to treat all files and directories as readable and writable by everyone. If your Windows partition uses the NTFS filesystem, the situation is more difficult. While read access for this filesystem has been around for a while, full read/write access has only recently become really usable. Read access can be enabled by following the steps outlined above, but replace the remount command with
---
su
umount /mnt/windows
chmod 777 /mnt/windows
mount /mnt/windows
,,,
You should now be able to read from - but not write to - your Windows partition. While it is theoretically possible to enable write support with this distribution, this is rather limited and more trouble than it is worth. Mandriva 2005 is generally considered to be rather old now, and in the intervening time things have moved on a lot in this area. I recommend you upgrade to the latest release. Back to the list ****** Squid permissions problems: winbind_privileged Q:: I had a Squid box working fine, but a power spike took out the boot sector of the disk. I have reinstalled (and taken the time to upgrade to Debian Etch). My problem is that the winbind_privileged folder is dynamically created at boot time. When it is created, the permissions are wrong. I need them to be root:proxy so that the proxy server can use the AD [Active Directory] authentication. How can I go about fixing this problem? A:: The only reliable solution to this appears to be the slightly kludgy one: to allow the directory to be created and then change the group ownership. Using your favourite text editor, as root, edit the file /etc/rc.local and add the following before the final exit line:
---
if [ -d /path/to/winbind_privileged ]
then
chgrp proxy /path/to/winbind_privileged
fi
,,,
Use the correct path for the winbind_privileged directory, of course. This script is run right at the end of the boot process.
If you need to issue this command sooner, say immediately after Squid starts, you need to create a separate script. Put these lines into /etc/init.d/fixsquid: --- #!/bin/sh if [[ -d /path/to/winbind_privileged ]] then chgrp proxy /path/to/winbind_privileged fi ,,, Once again, use the correct path for the winbind_privileged directory. Now make it executable and set it to run just after Squid is started by running these commands as root: --- chmod +x /etc/init.d/fixsquid ln -s ../init.d/fixsquid /etc/rc2.d/S35fixsquid ,,, Init scripts are run in alphanumeric order, and Squid is run from S30squid, so this runs it soon after that (the S means a startup script; names that begin with K are run on shutdown to Kill the process). Back to the list ****** D-Link USB dongle not working in Linux Q:: I have a problem with a D-Link DWL-G122 Rev C USB dongle. I tried NdisWrapper on another D-Link DWL-G122 Rev B, and it works with the Windows drivers that come with it. However, with Rev C, I just can't get it working. Is there any way to determine which is the exact INF file to be used? Google tells me that the Rev C is using the Ralink RT73 chipset. How can I confirm it locally, with Linux (Mepis)? Is this RT73 the same as any of the RT2x00 chipsets? A:: Sadly, this is an all too common problem. Manufacturers will change the internals of a product while leaving the outward appearance and name the same. This does not affect Windows users as long as they use the driver disc supplied with the device. You need to take the same approach with NdisWrapper - use the INF file from the disc that came with the device, probably rt73.inf. You can identify the device with the lsusb command. This will give you two hexadecimal numbers for the manufacturer and product IDs. For example, my D-Link shows --- 'Bus 001 Device 005: ID 2001:3700 D-Link Corp. [hex] DWL-122 802.11b' ,,, where 2001 and 3700 are the manufacturer and product IDs respectively. 
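If lsusb produces a lot of output, the ID pair can be picked out with a little shell. In this sketch a sample line stands in for the live command so it runs anywhere; on a real system you would pipe lsusb itself through the same sed expression.

```shell
# Pull the vendor:product pair out of an lsusb line. The echo of a
# sample line is illustrative - replace it with something like
# `lsusb | grep -i d-link` on the machine with the dongle plugged in.
line='Bus 001 Device 005: ID 2001:3700 D-Link Corp. DWL-122 802.11b'
id=$(echo "$line" | sed -n 's/.*ID \([0-9a-f]\{4\}:[0-9a-f]\{4\}\).*/\1/p')
vendor=${id%:*}
product=${id#*:}
echo "vendor=$vendor product=$product"
```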
With these numbers you can find out more information at http://qbik.ch/usb/devices. The RT73 and RT25xx are different chipsets but the RT2x00 project supports the RT73 too, and the RT61, yet another variation. There is also a standalone RT73 package from the RT2x00 site at http://rt2x00.serialmonkey.com. This is marked as a legacy package, but it is probably easier to install, so give it a try first. Download the rt73-CVS tarball, unpack it and follow the instructions in the README file. If this does not work for you, try the new RT2x00 driver set, which pulls the RT2400, RT2500, RT2700, RT61 and RT73 drivers into a single package. Being so new, it has to be downloaded from the project's Git repository; there is a link to full instructions on doing this on their downloads page. Another option is the Linux driver for the RT73 available from Ralink's website, currently at www.ralink.com.tw/data/RT73_Linux_STA_Drv1.0.3.6.tar.gz. The archive contains full installation instructions. You will need to compile the driver from the source code in the tarball, which means you will need your kernel source package and GCC installed, from the standard Mepis packages. You will also need to install a firmware file from www.ralinktech.com.tw/data/RT71W_Firmware_V1.8.zip. The situation should become a lot clearer as the RT2x00 driver package matures. I am no fan of NdisWrapper, but it is easy to see why people use it when it so often appears to 'just work'. Back to the list ****** Updating Debian offline Q:: We have a number of computers running Debian that do not have full internet access. Some are not networked at all. What is the best way to keep these up to date? Currently we copy updated Deb files to a CD and install them manually on each computer, but there must be a better way. We thought about a local Debian mirror, but that would consume a lot of bandwidth to keep up to date and still wouldn't help with the non-networked systems. 
A:: The answer lies in a useful package called APTonCD. This creates a repository on a CD (or DVD) that you can use to install or update non- networked PCs. APTonCD is also a useful backup and replication tool because you can use it to create CDs or a DVD containing all the packages currently installed on a computer, then use those discs to reinstall that computer or install the same set of packages on another machine. If you use Ubuntu Feisty you can install APTonCD via Synaptic, but it is not in the standard Debian Etch repositories, so get it from http://aptoncd.sourceforge.net and install on each of your computers with --- dpkg --install aptoncd_0.1~rc-0ubuntu1_all.deb ,,, The easiest way to use this is to have one internet-connected computer that you keep up to date and use this to build CDs or a DVD to update the others. First run the program on your internet-connected computer and click on Create APTonCD. It'll scan your system for all packages in /var/cache/apt/archives, which is all the packages you've installed unless you've cleaned out this directory. You're then presented with the full list of packages, all selected. Remove any you don't want from the list (you may wish to do this to ensure it all fits on a single disc) and add extra packages. APTonCD will add the dependencies of any package you add, unless you tell it not to do this. APTonCD can burn to CDs or DVDs and will create as many discs as are needed to hold the files. Press OK and APTonCD will create one or more ISO images ready to burn to disc with your favourite CD/DVD burning app. The program will offer to burn the disc as soon as it has finished writing the ISO image(s). Once you have written the images to a CD or DVD, put it in one of your non-networked computers, run APTonCD and select the Restore tab. The first two options deal with restoring a system from the CD, which may be of interest at some time but isn't what you are looking for in your question. 
The third Restore option adds the disc as a repository, which can then be used by apt-get, Synaptic or other package management tools to update the computer. If you look in /etc/apt/sources.list, or select Settings > Repositories in Synaptic, you will see that your new CD has been added to the available software sources. Run the Update Manager and you can see and apply any updates to this system. It is a good idea to clean out your sources.list the next time you create and add a disc from APTonCD, otherwise you'll end up with several CD entries in here, one for each time you update. Back to the list ****** Triple-booting Grub configuration Q:: I love VMware but I don't have enough CPU and memory on my laptop. What I want to do is have three distros on one hard disk and I'm guessing Grub will be my best option. I have an 80GB hard drive on my laptop. Since I have to use Windows 2000 for work, I already have this on the first partition. Installing, say, FC2 as a dual boot option is simple enough using Grub, so here's what I'd like to know. Since I want to install a third OS, where should I install the boot loader? Does it really matter? When installing the third OS, what do I do at the end of the install when it asks me where to install the boot loader (mbr/boot sector)? The last time I installed the third boot loader, it wiped the reference to one of the OSes so I could only dual boot. Finally, how do I get Grub to recognise the third OS? A:: If you're booting three different operating systems and two of them are recent Linux distributions, both of which use Grub, it's easy to build the appropriate boot loader configuration and install it on the MBR. You can do each of the installs and the final Linux installation will pick up the other two operating systems on the disk. 
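For reference, a boot stanza copied into the controlling distribution's menu.lst looks something like this - the partition, kernel and initrd names here are purely illustrative, so use the values from the other distro's own file:

```
title Fedora Core 2
root (hd0,1)
kernel /boot/vmlinuz-2.6.5-1.358 ro root=/dev/hda2
initrd /boot/initrd-2.6.5-1.358.img
```

The root line names the partition holding that distro's /boot in Grub's (disk,partition) notation, counting from zero, so (hd0,1) is the second partition on the first disk.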
It may take a little manual editing of your menu.lst file to make sure that Grub loads each kernel from the appropriate disk, but it should be as simple as copying the section of the file from one filesystem to the other. Of course, you'll only want to maintain the MBR through one of the distributions, otherwise you'll simply blow away your configuration every time Grub is reinstalled onto the inactive distribution. Back to the list ****** Remote printing Q:: I'm a teacher in a school and when I started to take care of the computers in the teachers' room they all ran Windows. Now I'm preparing to install Ubuntu on one of them, but the job is difficult because of one 'minor' detail - the printer! They have a PC running Windows Server, connected to a switch. The server is running exclusively to serve the printer. I'm running Ubuntu Feisty Fawn and I can't print. Ubuntu detects the printer, a Samsung CLP-500, and I have installed the drivers, but nothing prints. Do I have to use Samba? A:: According to the OpenPrinting database at www.linux-foundation.org/en/OpenPrinting, this printer needs the SpliX driver, available from http://splix.sourceforge.net. While this driver works well with some Samsung lasers (it's great with my mono laser), it is only reported as working 'partially' with the CLP-500. This appears to be because it is limited to 600dpi printing. Samsung also provides a Linux driver that you can download from http://short.zen.co.uk/?id=792 (the full URL is ridiculously long). SpliX is included with the current Ubuntu, so it's just a matter of installing it via Synaptic and then picking the right driver in the printer configuration tool. CUPS can talk to Windows printers - it uses the Samba client libraries, so you need Samba installed, but you do not have to configure it yourself. Ubuntu installs Samba by default, so there's nothing you need to do in this respect. 
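Under the hood, CUPS reaches a Windows-served printer through its smb backend, and the device URI it stores takes roughly this form (the server, share and account names here are placeholders, not values from your network):

```
smb://username:password@WORKGROUP/winserver/CLP500
```

The printer configuration tool builds this URI for you when you select a Windows printer, so you should only need to type it by hand if you set the queue up directly through the CUPS web interface.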
All you need to do is install the SpliX package from Synaptic, then run New Printer in System > Administration > Printers and select the correct printer when asked. Back to the list ****** Linux can't read files on Vista partition Q:: After problems with Vista, a friend has asked me to put Linux on their PC. My PC is running Fedora, and I've set up a shared drive in Vista so I can pull off the files that my friend needs saved. But I need help getting the files off. I can access the shared drive but when I go to open up the folders to get the files, Linux comes up with a message that it can't read the folders on the Vista PC. Can you access a shared drive in Vista and pull files off it with Linux? I have no problems accessing a shared drive on XP, 2000 or 98 from Linux. A:: You can admit to owning a Vista PC yourself - we'll still try to help so there's no need to blame it on a 'friend'... The best way to do this is to use the shell to mount the drive, then you should see clear errors when it fails. Do this as root: --- mkdir -p /mnt/windows mount //PCNAME/C /mnt/windows -o user=USERNAME ,,, replacing PCNAME with the network name of the Windows computer and USERNAME with the name of the admin user on that computer. After giving the user's password, the C drive should be mounted (assuming that's the drive you're trying to share). Do not try turning off password-protected sharing in the Windows control panel, it actually makes things more difficult, not easier as you might expect. You also need to turn on Public Folder Sharing in the Network And Sharing section of the Windows control panel. Even with these settings, you'll still be unable to enter and copy some directories. Vista has protected directories inside the user directories, such as USERNAME\PrintHood. However, you should have no difficulties copying your friend's documents and other data files now. Because you've mounted his shared drive, you can use any file manager you like to do the copying. 
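If you prefer the shell to a file manager, a copy that presses on past Vista's protected directories instead of stopping at the first error looks like this. It is demonstrated on a throwaway directory tree so it runs anywhere; in real use you would point the paths at something like /mnt/windows/Users/friendname.

```shell
# Build a sample source tree with one deliberately unreadable directory
# (standing in for Vista's protected PrintHood-style junctions), then
# copy it, discarding 'Permission denied' noise and carrying on.
rm -rf /tmp/vista-demo
mkdir -p /tmp/vista-demo/src/Documents
echo 'important file' > /tmp/vista-demo/src/Documents/notes.txt
mkdir -p /tmp/vista-demo/src/PrintHood
chmod 000 /tmp/vista-demo/src/PrintHood
cp -r /tmp/vista-demo/src /tmp/vista-demo/dst 2>/dev/null || true
cat /tmp/vista-demo/dst/Documents/notes.txt
```

The readable directories come across intact even though the protected one cannot be entered, which is exactly the behaviour you want when rescuing a user's documents.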
You haven't said whether you're trying to do this with a direct cable link or over the internet. It should work the same, apart from the speed, but bear in mind that the data won't be encrypted in transit. You may also need to open port 139 in his firewall or router to make a connection over the internet. This also allows anyone else to attempt a connection, so use a good password and close the port as soon as the job is done. If possible, take your computer to his house (or his to yours) and use a local Ethernet connection. Alternatively, you could use the Windows backup program to back up the data to a file or DVD and copy that over to your Fedora system. Windows backup files are zip archives that can be unpacked with the Linux unzip command, which is installed on Fedora. Back to the list ****** Set up Linux Nvidia driver for multiple monitors Q:: I'm attempting to set the proprietary Nvidia driver up for single, dual and twin view, and after much searching, I've finally managed by creating the xorg.conf files directly (as the Nvidia GUI keeps complaining about overlapping meta modes and reporting wrong refresh rates). But though I now have the three xorg.conf files ready and working - one for each view that I need (dual, twin and single) - I can't seem to find any information on how to integrate these in a single environment where I can switch between them. I need to be able to switch between these three types of view on the fly, ideally with a keyboard combination. As it is, I manually stop the X server, swap the xorg.conf file and restart X. I'd guess that I need to merge my three different xorg.conf files into one, but how? And how do I tie restarting the X server with an alternative view to a keyboard press (or any functionality, be it menu, file or whatever - as long as it's one-click or as near to as possible)? I'm using KDE on Fedora and would appreciate some guidance on this, but please be gentle - so far I've only been on the Linux wagon for a week. 
A:: You can combine the various portions of the separate xorg.conf files into one, providing you give them different names. The Monitor sections can just be put one after the other, but you'll need to make sure that each of your Screen sections has a different name, with a separate section for each of the layouts. Most of the other entries in xorg.conf are the same for all; things like keyboard, mouse and font settings. Then you create a separate ServerLayout section for each layout, with a different name, so you'd have something like: --- Section "ServerLayout" Identifier "SingleScreen" Screen 0 "SingleScreen" 0 0 InputDevice "Mouse0" "CorePointer" InputDevice "Keyboard0" "CoreKeyboard" EndSection Section "ServerLayout" Identifier "TwinScreen" Screen 0 "TwinScreen" 0 0 InputDevice "Mouse0" "CorePointer" InputDevice "Keyboard0" "CoreKeyboard" EndSection ,,, The first ServerLayout is the default, or you can specify it with: --- Section "ServerFlags" DefaultServerLayout "SingleScreen" EndSection ,,, Now X will start up in single mode by default but can be started in twin mode with: --- startx -- -layout TwinScreen ,,, The '--' means 'end of startx options, pass anything else along to the server'. In order to bind this switch to a hotkey, you need a short shell script. Save this script somewhere in your path, say as /usr/local/bin/restartx: --- #!/bin/sh if [ "$(runlevel | cut -c3)" = "5" ] then sudo /sbin/telinit 3 else sudo killall X fi sleep 2 startx -- -layout $1 ,,, and make it executable with chmod +x /usr/local/bin/restartx. As some of the script needs to run as root, you'll also have to edit /etc/sudoers, as root, and add this line: --- yourusername ALL = NOPASSWD: /usr/bin/killall X,/sbin/telinit 3 ,,, Now you can switch layouts with: --- nohup /usr/local/bin/restartx newlayoutname ,,, The nohup is necessary or the script will be killed when the desktop closes. 
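The test at the top of that script hinges on the runlevel command, which prints the previous and current runlevels as a pair such as 'N 5'; cut -c3 then extracts the third character, the current runlevel. A quick demonstration with canned output, so it runs even where runlevel itself isn't available:

```shell
# 'runlevel' output looks like "N 5"; the third character is the
# current runlevel - 5 means X was started by init on this system,
# so the script shuts it down with telinit rather than killall.
sample='N 5'
current=$(echo "$sample" | cut -c3)
echo "$current"
```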
As you're using KDE, you can bind any commands you want to hotkeys in the Regional & Accessibility/Input Actions section of the Control Centre, so set up one to switch to each layout in your xorg.conf file. Finally, you'll probably want KDE to remember your open applications after switching. To do this, go to Control Centre > KDE Components > Session Manager and select Restore Manually Saved Session. This adds another option to enable you to save your session and you can get the script to do this automatically by inserting this as the second line: --- dcop ksmserver ksmserver saveCurrentSession ,,, This is the only KDE-specific part of this exercise, and you'll find that the rest will work with any desktop. Back to the list ****** Fedora only delivering local mail Q:: I'm using a Fedora system and thought of upgrading to the latest version. Before doing this I loaded it onto a separate machine to see how it was configured off the disk. I found that sendmail was set up to deliver mail but I couldn't deliver mail to the box from outside the box. On Google I found that the distro was shipped with the ability to receive mail from external sources turned off. Why? I also set up some shares in Samba and still have the following problem: if I set up a directory - say, /backup - with the same permissions and ownership as /var, I can connect to it from another machine and share the contents, create and update as well as remove. If I change the entry from /backup to /var then I'm not able to connect to the directory. I guess I have another pre-shipped parameter to change but which one? What I want to do is set up the share to access /var/www/html in order to play with HTML and PHP files. All this works fine on the old system and didn't require changes. I will get to the new version sometime but not until I've solved these and other issues in a standalone system. Just one other point. 
When I've performed upgrades the process takes hours so I thought it would be easier and quicker to do a new install and copy the relevant config files and data, but now I'm not so sure. A:: It looks like you've opted for security when installing the new Fedora. As such, it's been set up to deliver only local mail, which you were able to switch easily enough, and to prevent sensitive directories being shared. While it is possible to alter this so that /var can be shared, you really should reconsider. Blocking the sharing of /var is for a good reason - a lot of sensitive information is stored on /var and it's easy to render a system unbootable with a modicum of malice, incompetence or plain carelessness. The question shouldn't be 'how can I share /var?' but 'do I need to share all of /var?' - to which the answer is no. If you want to access /var/www/html remotely, then share only /var/www/html. In doing this, you'll avoid the potential risks associated with sharing /var/log or /var/lib but still be able to do what you want. There are also alternatives to using Samba. If both computers run Linux, you could use NFS to mount /var/www/html on the remote computer. If you're using KDE on the editing computer, you could avoid using any form of remote mounting or directory sharing by using KDE's FISH implementation. This uses SSH to communicate with the remote computer, so putting fish://hostname/var/www/html into Konqueror's (or Krusader's) location bar will load the directory's contents into a file manager window, from where you can load files into a KDE-aware editor. Going from very old Fedoras to the latest release is a huge step. Many key components will have changed, so an update is likely to consume more time than the hours required by the package manager when you have to fix other problems. A fresh install is the best approach, but making a jump of a few years in major components is likely to result in differences in the way things work, as you have discovered. 
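For reference, the local-only mail delivery you ran into is controlled by a single line in Fedora's /etc/mail/sendmail.mc, which binds sendmail to the loopback address. This is a sketch of the relevant fragment only, with the rest of the file unchanged:

```
DAEMON_OPTIONS(`Port=smtp,Addr=127.0.0.1, Name=MTA')dnl
```

Removing the Addr=127.0.0.1 clause (or commenting the whole line out with dnl), regenerating the config with m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf and restarting sendmail opens it up to outside connections. The default is deliberately conservative: a freshly installed box shouldn't accept mail from the internet until you've decided it should.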
Back to the list ****** BT broadband won't connect with 3945abg drivers Q:: I have recently switched to BT broadband and I'm trying to connect to the BT Home Hub using Wi-Fi. I have installed the Intel/PRO 3945abg drivers and iwconfig shows the network interface as up, but KNetworkManager won't connect to the hub. I've set the encryption system to Open System and entered a 40/104-bit hex key. The network manager hangs at 28% and then re-prompts for the WEP key. The BT Home Hub docs say that the encryption is 128-bit. Any pointers as to how to connect to the hub would be greatly appreciated. Here's the output from iwconfig: --- eth2 IEEE 802.11g ESSID:c Mode:Managed Frequency:2.412 GHz Access Point: 00:14:7F:BE:0D:9D Bit Rate:54 Mb/s Tx-Power:15 dBm Retry limit:15 RTS thr:off Fragment thr: off Encryption key:xxxx-xxxx-xx Security mode:open Power Management:off Link Quality=77/100 Signal level=-57 dBm Noise level=-58 dBm Rx invalid nwid:0 Rx invalid crypt:65 Rx invalid frag:0 Tx excessive retries:0 Invalid misc:126 Missed beacon:0 ,,, A:: The iwconfig output looks good except for the encryption key, which is too long for 64-bit and too short for 128-bit, so this is probably an encryption problem. The first thing to do is turn off encryption on both the Home Hub and your computer. Wireless encryption is good generally, but it gets in the way when you're trying to configure your connection. It's easiest to configure an unencrypted connection first and then apply encryption when the connection is working. You turn off encryption for the Home Hub in its web administration page. The manual will tell you the address to type into your browser, and the default password, to access this. While you're there it's a good chance to change the password if you haven't already. Your iwconfig output indicates that this should work with no problem. 
Once you've verified it works by connecting to an external web page (try www.linuxformat.co.uk because Mike likes to see the hit count go up) you can turn WEP encryption back on. WEP uses so-called 64-bit or 128-bit encryption. 'So called' because 24 bits aren't available to you to change, which is where the 40-bit and 104-bit figures come from. The 128-bit key should be entered as a 26-character hexadecimal string, usually broken up with dashes to make it more readable, as in XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XX. If you can't get this to work with KNetworkManager, try running iwconfig directly from a terminal, as root. This may provide you with some useful error messages. The commands that you need are: --- ifconfig eth2 up iwconfig eth2 key open XXXX-XXXX-XXXX-XXXX-XXXX-XXXX-XX iwconfig eth2 essid "BTHomeHub-8AF2" dhcpcd eth2 ,,, Once you have it working through the terminal, you can plug the details into KNetworkManager, or turn off NetworkManager in Yast and use the standard Yast network configuration instead. Searching the internet for information on this brought up far more problems than success stories. The consensus seems to be that this isn't a particularly good wireless hub (even though it's styled to look like a smart Apple accessory), and that a wireless access point/router from one of the standard networking companies would actually be a much better bet. But given that this unit comes free with your connection, it's probably worth spending at least some time trying to get it working acceptably. Back to the list ****** Make Ethernet cards remember names between reboots Q:: How do you get Ethernet NIC cards to remember their names between reboots on a SUSE distro? I'm running SUSE Enterprise 9 on my Linux router/firewall, which has three NICs installed; one for the external internet port, one for our internal network and one for our DMZ which carries all of our externally accessible resources such as web, mail and FTP servers. In most respects this installation operates beautifully. 
The problem is that the Ethernet device names seem to a) get randomly allocated on reboot (so what was 'eth0' last time the system rebooted often becomes 'eth1' on the next reboot), and b) any persistent names assigned to these devices such as 'nic1' or 'nic2' are frequently ignored (even though PERSISTENT_NAME="nic1/2/3" is defined in the device files in /etc/sysconfig/network/ifcfg-eth-*). The upshot of this is that I almost always have to run ifconfig when I restart the router and patch the device IDs in the iptables definitions to suit the current (pretty much random) device configuration. This is a problem because the router rarely recovers from any outage condition without intervention. I have attached the config file of the DMZ NIC in /etc/sysconfig/network/ifcfg-eth-id-00:02:96:00:3f:8e. This card usually comes up as 'eth2' and has (theoretically) been assigned the persistent name 'nic2' for the purpose of our iptables firewall definitions. When the system boots, it occasionally notices that the device should be called 'nic2' but, more often than not, it ignores the PERSISTENT_NAME definition. Unfortunately, I don't have enough LAN cards to try this in another box (with a different distro) and I can't afford to take the server down for the time I may need to resolve the issue. A:: This is odd - your config file looks correct, and works with SUSE here. The fact that it works occasionally indicates that some fundamental piece of software is not missing. Have you upgraded this system so it now uses udev? That could be forcing the names in spite of your settings in /etc/sysconfig/network. If so, the easiest and cleanest way to fix this is to use udev naming rules. 
Create the file /etc/udev/rules.d/10-network.rules, as root, and add these: --- SUBSYSTEM=="net", DRIVERS=="?*", ATTRS{address}=="00:02:96:00:3f:aa", NAME:="nic0" SUBSYSTEM=="net", DRIVERS=="?*", ATTRS{address}=="00:02:96:00:3f:bb", NAME:="nic1" SUBSYSTEM=="net", DRIVERS=="?*", ATTRS{address}=="00:02:96:00:3f:8e", NAME:="nic2" ,,, replacing the strings after ATTRS{address} with the MAC addresses of the three cards. While the SUSE system had a problem with re-using the standard names, udev does not as this renaming is done before any names are applied, so you could use eth0/1/2 here if you wished. You may find you already have a file in /etc/udev/rules.d containing net persistent naming rules, in which case you should edit this file to add the above assignments. An alternative approach is to use the nameif command to rename the interfaces. This must be done before the interfaces are brought up. Create the file /etc/mactab with its contents a list of interface names and MAC addresses, like this: --- nic0 aa:bb:cc:dd:ee:ff #internal nic1 00:11:22:33:44:55 #external nic2 66:77:88:99:00:aa #dmz ,,, The nameif command will read this file and rename the interfaces accordingly. This should be considered only if you're not using udev, as udev rules provide the best way to handle persistent naming of network interfaces, and just about anything else. Back to the list ****** Which kernel for Belkin USB wireless stick? Q:: I am trying to set up my system to use my Belkin USB wireless stick with ndiswrapper. The notes tell me I need a certain kernel as a minimum. I'm a new user, so can you tell me how I find this information? Also, can you give me any advice on setting up this item? A:: There are various GUI tools that will tell you which kernel you're running: the KDE Control Centre shows it as 'release' on the startup page, or you can use your distro's package manager to find the version of the kernel package (some distros call it 'linux'). 
The simplest way is to open a terminal and type one of: --- uname --kernel-release uname -r ,,, You may not need to use ndiswrapper as some Belkin wireless devices have native support. In this case run: --- sudo lsusb ,,, in a terminal to find out more about your device. Then search Google or your distro's forums for information on this device. You may also find details of which driver would be best for you to use at http://qbik.ch/usb/devices. If there's no native driver for your device you'll have to use ndiswrapper. The most important point to remember when doing this is to use the driver that came with the device. Manufacturers have a habit of changing the internals of devices while leaving the model number the same, so a driver for an apparently identical device may be useless. If your distro (you don't mention what you're using) has a tool for configuring wireless devices, use this rather than trying to set it up manually. Some, such as SUSE's Yast, will also set up ndiswrapper for you. Back to the list ****** Configure Apache to have personal websites in home directories Q:: I want to set up Apache so that users have personal websites in their home directories, with /homes/user/website linking to www.blah.com/~user. I know I can do this using the userdir module. However, the problem is that users mount their home directories from a Windows box. As such, when they drop files into this folder, it does not give Apache any permissions to read the files they put in. How can I set this up so anything the user drops into their public folder is readable by the Apache user automatically? I've seen mention of something called mod rewrite but this doesn't seem to be the answer. Neither do I want the users to have to change permissions (too low-level for them!) or run some script every couple of hours to check their permissions! Is there an Apache module that can do something like this? 
A:: mod_rewrite is a very powerful tool, but the wrong one for this job, as it rewrites and redirects requested URLs based on regular expressions. You were right with your first choice of the userdir module. Your problem boils down to making sure the HTML and other files that users drop into their web space are readable by the server without making the whole user directory world readable, which is easily done with some carefully chosen ownerships and permissions. Working with the default Apache userdir configuration, http://hostname/~username/ is mapped to /home/username/public_html/. The first step is to make sure that the user directories are readable by the users only: --- chmod 711 /home/* ,,, Then the public_html directories need to be readable by the group under which Apache is run. This is usually 'apache', but some distros run the server as 'nobody'. Look for the Group directive in the httpd.conf file: --- chgrp apache /home/*/public_html chmod 750 /home/*/public_html chmod g+s /home/*/public_html ,,, Now the users' directories can only be read by the users themselves (chmod 711) while the public_html directories belong to the 'apache' group and can be read (but not written) by members of that group. The third command makes the directory setgid, so any files created in here will automatically belong to the apache group instead of the user's normal group. Ownership of the file is still with the user. If you want to use a different directory for the users' files instead of public_html, edit the relevant part of your Apache configuration. This can vary from one distro to another but one of your config files will contain the line: --- UserDir public_html ,,, Change this to wherever you want the HTML files to be kept in each user's home directory. Back to the list ****** Convert DVDs to PlayStation Portable (PSP) format in Linux Q:: My son has a PlayStation Portable. I'd like to convert some DVDs and other video files to MPEG4 so he can watch them on long journeys. 
I'm sure Transcode or Mencoder should be able to do this, but their man pages are full of jargon. Is there an easy way to convert videos for the PSP? A:: Yes there is! When converting from DVD, the easiest program is normally dvd::rip, a graphical front-end to Transcode, MPlayer and the like. However, it can't handle the variant of MPEG4 that the PSP uses, so you need FFmpeg, another command line program but one with less confusing options than Transcode or Mencoder. A GUI for FFmpeg, called Vive, can be found at http://vive.sourceforge.net. It only comes as source code but is very easy to install so long as you have the compiler toolkit installed. Download the latest tarball from the site, currently 2.0.0-beta1, and install it with --- tar xf vive-2.0.0-beta1.tar.gz cd vive-2.0.0-beta1 ./configure make su -c "make install" ,,, Give the root password when asked. Ubuntu users should replace the last command with the following and use their own password --- sudo make install ,,, Vive should now be in your KDE or Gnome menu, or you can run it from the command line with vive. Vive uses presets to collect settings for types of output. There's a sample settings file that's not installed by default; install it with --- mkdir ~/.vive cp /usr/share/doc/vive/examples/preferences ~/.vive ,,, This file contains a preset for iPod/PSP videos, but doesn't generate PSP-specific files, nor does it handle widescreen videos. Add this to the preferences file --- [PSP] format=psp vcodec=mpeg4 maxrate=768000 bitrate=700000 bufsize=4096 aspect=4:3 width=320 height=240 acodec=aac ab=64 ar=24000 comment=Encoded by Vive ,,, For widescreen videos, copy the block, alter the name to, say, PSPwide, and make the aspect, width and height values 16:9, 368 and 208. When you run Vive, you can select either a DVD title or a file to encode - press Load to have Vive read the list of titles from the DVD. Then choose the output file and a preset to use. 
You can also alter the values for video and audio encoding from the defaults of the chosen presets. Video files must be saved in the /MP_ROOT/100MNV01 directory on the memory stick and be named M4V00001.MP4, M4V00002.MP4 and so on. The Vive GUI can only convert one file at a time, but the program can be run from the command line for batch processing. To convert all the AVI files in a directory, try:
---
for FILE in *.avi
do
  vive -p PSP -i "$FILE" -o "${FILE/.avi/.mp4}"
done
,,,
Back to the list

****** Convert DVB to DVD

Q:: I've got my DVB-T stick working but my wife still won't look at a computer screen; is there some way I can convert files saved from the stream into something that can be played on our DVD player through the television?
A:: DVB and DVDs use two variants of the MPEG2 video codec. DVB uses MPEG2-TS while DVDs use MPEG2-PS; Transport Stream and Program Stream respectively. The main difference is that Transport Stream is designed for use over an unreliable connection, like radio transmission, so it has more redundancy and error correction, resulting in files that are around 30% larger. Transcoding from MPEG2-TS to MPEG2-PS is simple and fast because it only involves removing the error-correction data; the video itself doesn't need to be re-encoded. There are a number of programs you can use to turn a DVB MPEG into a DVD. One of the simplest, albeit rather slow, is tovid (http://tovid.wikia.com); the todisc command in this package takes a list of video files in almost any format and converts them to a DVD ISO image. If you want a GUI for this, a couple of programs that you may find useful are dvdstyler (www.dvdstyler.de) and qdvdauthor (http://qdvdauthor.sourceforge.net).
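If you are not sure which variant a particular recording uses, the file command can usually tell you before you start. The filenames below are placeholders for your own captures:

```shell
# recording.ts and title.mpg are placeholder names - substitute your own.
# A DVB capture typically reports "MPEG transport stream data", while a
# DVD-ready file reports an MPEG program/system stream.
for f in recording.ts title.mpg; do
  if [ -e "$f" ]; then file "$f"; fi   # silently skips absent files
done
```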
However, if you only want to create a DVD from a single MPEG2 file, these are overkill when a shell script will do the job more quickly:
---
#!/bin/sh
mplayer -dumpfile title.audio -dumpaudio "$1"
mplayer -dumpfile title.video -dumpvideo "$1"
mplex -f 8 -o title.mpg title.audio title.video
dvdauthor -x title.xml
mkisofs -dvd-video -o title.iso dvd
,,,
Where title.xml contains:
---
<dvdauthor dest="dvd">
<vmgm /><titleset><titles>
<pgc><vob file="title.mpg" /></pgc>
</titles></titleset>
</dvdauthor>
,,,
This separates the audio and video streams, then recombines them with the data necessary for DVD authoring, but without the DVB extras, before creating a DVD file structure and writing that to an ISO image. Before writing the ISO image to a DVD, you can test it with:
---
mplayer -dvd-device title.iso dvd://1
,,,
You will need mplayer, mjpegtools and dvdauthor installed to do this, all of which will be in your distro's repositories; most are probably already installed. Alternatively, if you use MythTV to record and watch the programs, install the mytharchive plugin, which does DVD exports. This application can combine several programmes onto a single disc - re-encoding if necessary to fit more on one disc (though that takes a lot longer, it's worth it if you are going to do this regularly and don't want to become overwhelmed with lots of discs) and offers a choice of menu styles and layouts. This is what I use most of the time.
Back to the list

****** eth0 networking woes

Q:: I get the 'bringing up interface eth0 FAILED' error as Mandrake 10.1 is loading on my system. I'm dual booting Mandrake with Windows XP, and XP has no problem whatsoever with the network card - there's a cable plugged in and it works fine in XP. I've fiddled with the Hardware Configuration utility in Mandrake but this hasn't helped. The network interface is a Via Rhine 10/100 (or something like that) built into the motherboard.
It's connected to a switch, which is also connected to an ADSL router and one other PC.
A:: When a system detects an interface but fails to bring it up, it can often be due to an issue with the chipset telling the kernel it can interrupt in a specific way, when in fact it can't. You can try adding 'noapic' to your kernel command line, because this will switch the kernel back to using basic old PIC, which occasionally works better. VIA chipsets aren't always the greatest. Once you have the workaround in place, you can add it to your boot loader configuration to avoid having to type it each time the system boots up. Switching from APIC to PIC isn't going to affect any major parts of the system, unless you're starting to run out of IRQs, so it's safe to run the system long term with 'noapic' set.
Back to the list

****** Ugly Firefox buttons on web pages

Q:: I'm new to Linux, having got rid of Windows XP, and am now using PCLinuxOS 2007 on my Fujitsu Siemens Amilo xi 1546. When using Firefox, the radio buttons on web pages are ugly and not as smooth or round as in Internet Explorer. Is there a fix to make them look better? I searched the net and found something about putting two radio button images in the /Firefox/res folder and adding some code to /Firefox/res/forms.css, but the links to the code and images are gone because of the dated thread I found them on. Being a newbie to Linux, can you make it simple?
A:: The default Firefox widgets do have a rough appearance. The fix you mention is probably the one by Osmo Salomaa, which you can download from http://users.tkk.fi/~otsaloma/art/firefox-form-widgets.tar.gz.
To install it, exit Firefox, copy firefox-form-widgets.tar.gz from the DVD to your home directory, then open a terminal and type:
---
tar xf firefox-form-widgets.tar.gz
cd firefox-form-widgets
su
cat res/forms-extra.css >>/usr/lib/firefox-2.0.0.3/res/forms.css
cp -a res/form-widgets /usr/lib/firefox-2.0.0.3/res/
exit
,,,
You will need to be root to modify system files, which is handled by the su command. The exit command switches you straight back to a normal user, as it's unwise to remain as root for any longer than is absolutely necessary. Incidentally, Ubuntu users get a graphical installer for these widgets, courtesy of one of their forum users. Find this at http://ubuntuforums.org/showthread.php?t=369596.
Back to the list

****** Safe updates, and knowing which data to back up

Q:: I am the IT manager for a small company that provides web services to international branches, VPN solutions and other services, all on CentOS, as well as internal services such as Samba and CUPS. Patching Linux servers is a relative unknown to me but I have to do it now. The paralysis brought on by fear of breakages can't continue - it will result in a less secure system. I've read book after book, article after article. They all seem to gloss over this topic with a catch-all "back up your data". Which data? It's not as simple as tarring up a home directory when it comes to enterprise services - they're all over the OS, with libraries that other services are dependent upon. What if an update breaks something? How do I roll back? I understand that the major server distributions spend a great deal of time making sure that their repositories are self consistent, however there are things that never make it to the distros - certain CRMs for example, third-party webmail solutions etc. Anything more than one package with similar functionality could feasibly mean that I end up chasing dependencies by hand if something goes wrong.
The ideal solution is, of course, to apply the patch to a test environment first. In truth though, how many people have a mirror of every live service available all the time? A failover box may be available, but I'd rather not change the one thing I know should work if everything else fails. Virtualisation seems to be the way to go. Virtualise your environments, take a snapshot, apply the patch, roll back the entire operating system if something goes wrong. This seems a little inelegant though - like changing your car when you run out of petrol.
A:: The car analogy seems a little strange - rolling back to a snapshot only undoes the changes made since the snapshot was taken; it is like an undo function, but to a fixed point in time rather than a single operation. With critical production servers, you really do need to test everything on a separate system before applying it to the live servers. You are thinking along the right lines with virtualisation, but you can use it for the test environments. That way you could effectively have test versions of all of your systems on one or two machines. This has a number of distinct advantages. First, you can use a single box with a number of virtual machines on it, which would require no more resources than a single box running any one of those servers, with the obvious exception of disk space. When you want to update a particular system, load the virtual machine, apply and test the updates and replicate them on the production server when you're completely satisfied that they work reliably. If there's a problem, revert the snapshot and try again, all the while your production server is reliably doing its job. Another advantage of testing on a separate system first applies when you're installing from source. You don't need to compile on the production system, so you don't need a full compiler toolchain on that box. This reduces the number of packages installed on the remote server and so improves its security.
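Put into concrete terms on CentOS, one update cycle might look like the following sketch. Here prod-server is a hypothetical hostname, and the snapshot itself is taken in whatever virtualisation tool you use:

```shell
# On the test VM, after taking a snapshot in your virtualisation tool:
yum update -y            # apply the pending updates
# ...exercise the services the box provides, check the logs...

# Only once you are satisfied everything still works, repeat it on
# the real machine (prod-server is a placeholder hostname):
ssh root@prod-server 'yum update -y'
```

If anything breaks on the test VM, revert the snapshot, investigate, and try again - the production box is never touched until the updates have passed.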
You can use checkinstall (http://checkinstall.izto.org) to build RPM packages of the program for installation on the production systems.
Back to the list

****** Files disappearing on Apache Tomcat server using RAID

Q:: We've set up an Apache Tomcat server with two 500 GB drives using software RAID 1. I made a few changes to some files, restarted the server to test them and found the changes I had made to the files were gone. Some files I had deleted had also reappeared. I checked my mail and had received errors from mdadm.
---
A DegradedArray event had been detected on md device /dev/md0.
The /proc/mdstat file currently contains the following:
Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
      1959808 blocks [2/2] [UU]
md0 : active raid1 sda1[0]
      486424000 blocks [2/1] [U_]
unused devices: <none>
,,,
I'm making a backup of all the important information, but if possible I'd like to salvage the server, since the setup was very specific and time consuming. I'm new to the world of Linux administration, and unsure where to start.
A:: The contents of /proc/mdstat indicate that a drive has failed on the md0 array (/dev/sdb1?). Your machine will continue to function with a degraded array, but with slightly reduced performance and no safeguard against another disk failure. There are a number of tools available to test the disk, but the safest option is to replace it and rebuild your arrays. This will also mean replacing /dev/sdb2 of course, so the other array will have to be rebuilt too. Fortunately, this is a simple task and largely automatic, but it can take a while. You can also continue to use the computer after replacing the faulty disk while the arrays are being rebuilt, but this will result in noticeably reduced disk performance. It is easiest if you can add the new disk before removing the old one as this means you can rebuild md0 first, then switch md1 to the new disk at your convenience. Assuming your new disk is added as /dev/sdc, connect it up and reboot.
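Incidentally, if the replacement disk is the same size or larger, sfdisk can clone the partition layout from one of the existing drives in a single step, saving you partitioning the new disk by hand. This is a sketch, and it is destructive to everything on the target disk, so triple-check the device names before running it:

```shell
# Dump sda's partition table and write the identical layout to the
# new disk. This WIPES the partition table on /dev/sdc - check the
# device names with fdisk -l first.
sfdisk -d /dev/sda | sfdisk /dev/sdc
```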
Then partition the disk as you did for sda and sdb, setting the partition types to Linux Raid Autodetect. Now run these commands as root, to remove the faulty disk from the array and add the new one:
---
mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
mdadm /dev/md0 --add /dev/sdc1
,,,
When the new disk is added to the array, the RAID driver will synchronise it with the existing disk. This can take a while; monitor the contents of /proc/mdstat to follow the progress. When the process is complete you'll have both your arrays working correctly, but using three disks, one of suspect reliability, so repeat the above commands for md1, sdb2 and sdc2 to transfer the other array to the new disk. Now you can power down and remove the faulty disk when it suits you as it is no longer in use. Needless to say, as with any critical disk operation, you should ensure your data is backed up before you do any of this. You can check the old disk with smartmontools (http://smartmontools.sourceforge.net), which is probably available in your distro's repositories, or with a diagnostic tool from the manufacturer's website. Most manufacturers provide a diagnostic tool that runs from a bootable floppy disk, which you will need if the disk is to be returned under warranty. If the computer has no floppy drive, most of the diagnostic programs can be run from the Ultimate Boot CD (www.ultimatebootcd.com).
Back to the list

****** Reorganising hard drive letters in Windows after installing Linux

Q:: A few months ago, having installed Mandriva on my system, I replaced it with SUSE 10.2. I have two internal hard disks both split into two partitions. Windows 2000 shows them as C and D on the 0 disk and F and G on the 1 disk. SUSE is installed on drive F. I also have an external disk which is drive J. When I originally installed Linux, I unfortunately had drive J switched on and unless this is switched on at start-up, cursoring down the available items on the start menu is not possible.
What I wish to do is uninstall Linux from the system completely and start again with another, larger, secondary hard disk. However, nowhere in the Help tabs in Linux software does there appear to be any means whereby I can go back to my simple Windows 2000 and HD 0. I suspect that if I wipe the Linux software from HD 1 (drive F in Windows), which I am tempted to do at the moment, there will be no menu appearing at switch on, so I am completely baffled. I'm hoping that you could please tell me how to uninstall Linux completely, so that I am able to go back to where I was before installing Mandriva. Given the choice at switch on between Win2000 and XP, and not having to have the external drive switched on always I would be very grateful to be able to sleep soundly again. A:: The boot menu would probably start to work if you left it a while, it would appear that the Grub bootloader is trying to read the missing disk and should time out eventually. You do not need to reinstall to fix this, only edit the bootloader configuration. You should be able to do this from YaST. Boot with the external drive connected then unmount and disconnect or power off the drive. Run YaST, go to System > Boot Loader > Boot Loader Installation and select Propose New Configuration from the popup menu at the bottom right of the window. This should scan your disks (which no longer include the external drive) and set up a new menu for your Windows and SUSE installations. Go into the Section Management tab to make sure everything is as you wish and then click Finish. If you really want to remove Linux from these disks, select Restore MBR of Hard Disk from the same popup menu, which will replace the bootloader code with whatever was there before you installed SUSE. If this was Windows, fine, but if you went straight from Mandriva to SUSE, this will replace the Mandriva boot code, which you don't want. 
In this case, you should boot your Windows CD in rescue mode and run fixmbr, which will wipe all Linux bootloader code and replace it with the Windows bootloader. Alternatively, you could simply replace the secondary disk - which would probably break hard disk booting anyway, without doing any of the above - and boot straight from the SUSE install disc, install it and let it set up a new boot menu for you, making sure you leave the external drive disconnected this time. SUSE, as with all current Linux distros, is quite capable of detecting the external drive when you connect it after installing the operating system.
Back to the list

****** Does Linux need security and anti-virus software?

Q:: I've been toying with getting into Linux for a couple of months now. I tried downloading a distro, but struggled with the amount of technical jargon involved. I've loaded Ubuntu 7.04 and I love it. I'm still struggling to get my head around the fact that it is free and so is a load of other software that came with it, but I'm sure I'll get used to this. As I'm new to this, I need to double-check that what I am doing is safe and I'm not opening my PC up to external hackers. Are there steps that I should be taking to put in a firewall and virus checking software? I've installed Ubuntu 7.04 as a dual boot with Windows XP Home edition. On XP I have F-Secure 2007 combined firewall and virus checker. I connect to the internet using an external modem-router via an ethernet cable.
A:: Viruses are not a real problem with Linux, although it is good to be prepared. The most popular anti-virus program for Linux is ClamAV (www.clamav.net), which is included with Ubuntu and can be installed with the Synaptic package manager. ClamAV detects Windows viruses as well as any targeting Linux, which, combined with the plugins available for most mail programs, means you can also use it to make sure no nasty attachments reach your Windows setup via your Linux mailer.
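Once ClamAV is installed, running an on-demand scan from a terminal is straightforward; ~/Downloads here is just an example directory:

```shell
sudo freshclam           # update the virus signature database first
clamscan -r ~/Downloads  # recursively scan a directory; infected files are listed
```

clamscan prints a summary when it finishes, including the number of infected files found.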
Firewalling is handled differently on Linux than on Windows. The lack of spyware, and the virtual impossibility of embedding it in open source software, means that it concentrates on keeping out intruders. The Linux netfilter software is built into the kernel, so the various firewall programs you see provide more or less easy ways of setting up, testing and applying the filtering rules. There are several packages in the Ubuntu repositories that are well worth looking at, including: Firewall Builder (www.fwbuilder.org), Guarddog (www.simonzone.com/software/guarddog) and Shoreline Firewall (www.shorewall.net). The first is a GTK program that fits in well with the default GNOME desktop while Guarddog is a KDE program. They offer similar features but with a different approach. Shoreline Firewall is a script-based program that is definitely harder to set up the first time but provides more flexibility. Any of these are capable of protecting your system, so try them and see which you like best. You should also reduce the chances of intruders even reaching your firewall. Your router is the first line of defence, so turn off any port forwarding services you do not need. You should also disable any unnecessary services in Ubuntu's System > Services window, although be careful about what you disable here, as some services are needed for normal operation of the computer. If unsure, turn off services individually and keep track of what you have done so you can turn them back on if you experience problems. Although Linux is inherently more secure than Windows, this should not be relied on; Linux programs can have security holes too. These are usually fixed promptly, so keep your system up to date. The four steps of blocking at the router, disabling unnecessary services, running a firewall and keeping your software updated will mean you can use the Internet with confidence.
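To give you a taste of what these firewall front-ends generate behind the scenes, a minimal netfilter ruleset for a single desktop might look like this sketch (run as root; any of the tools above will produce something more thorough):

```shell
# Drop unsolicited incoming traffic, allow everything outbound.
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
# Allow loopback traffic and replies to connections we initiated.
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
```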
Back to the list

****** Accidentally made a Vista partition into a swap partition

Q:: I was messing about with trying to create a swap partition on an old flash mp3 player, and I accidentally made a swap file on my Vista partition. I have not switched the swap file on, and used the mkswap command to make the swap file. My Vista still works, although I have to boot in through the recovery partition, so I am guessing it has only affected the start of the drive. Is there any way to reverse the mkswap command to enable me to fix the Vista partition? I have checked in GParted, and it reports the drive as a swap drive. Fdisk shows the partition as NTFS, which it should be, but there's no * under the boot heading. Does that mean that if I can restore the Vista boot info to the disk, it should work?
A:: If Vista still works, the partition must be OK. It looks like you have changed the partition type setting, probably to Linux Swap, and cleared the boot flag. This means that Windows cannot recognise the partition and that the bootloader does not think it can boot from here. Use whatever partition editor you prefer to set the partition type to NTFS (07) and set the bootable flag for it. I find cfdisk easy for this, and it is on just about every live CD I have ever tried. Boot from a live disc, open a root terminal and run cfdisk with:
---
cfdisk /dev/hda
,,,
When you've done this, select the partition, press t to set the type and choose NTFS from the list of alternatives, then press b to make it bootable. Finally press W (that is a capital W) to write the changes to the disk. You can also do this with a graphical editor like gparted or qtparted, but I find cfdisk faster for this. You don't even need to wait for a desktop to load if your favourite live disc already has an option to boot straight to a shell prompt (Knoppix users, for instance, can type knoppix 2 at the boot prompt).
Back to the list

****** External USB hard drive always mounting as root?
Q:: I have set up a small server running Debian Etch, mainly to use as a fileserver but also eventually for some web-based stuff. I have a USB hard drive that I want to use as shared storage via Samba. My problem is that no matter what I do the drive is always mounting as root. If I set the mount point permissions to 777, user=guest and group=users and then mount it as a normal user, the permissions stay the same but user and group both revert to root. So I still can't write to the drive. If I mount as user root I have no problems accessing locally but in either situation Samba then won't let me write either. Someone suggested this was maybe a udev issue and that I needed to play with that so that the permissions are altered when it mounts. I'm not up on udev so don't know where to start. The drive is sda with partitions sda1 and sda2.
A:: Udev only handles creation of the device node (/dev/sda1 or whatever), not the mounting, so this is unlikely to be at fault. It is possible that udev is creating the node with restrictive permissions, but this would only stop users mounting the device (not root); it wouldn't affect the mounted filesystem. The user mount option doesn't take the name of a user; it simply allows any user to mount that filesystem, and it doesn't affect the permissions of the filesystem either. The solution to your problem depends on the type of filesystem you are using. If this is a Linux-type filesystem that supports user permissions, setting the ownership and permissions of the mount point should suffice, but you have to do this after the filesystem has been mounted, otherwise you only affect the mount point, not the mounted filesystem. For Windows filesystems, particularly FAT32, you can add the option umask=002 to /etc/fstab to make all files user and group readable and writeable. Then use the uid and gid options to set ownership of all files in the filesystem.
You can use numeric values here or user and group names, eg:
---
/dev/sda1 /mnt/somewhere vfat umask=002,uid=guest,gid=users 0 0
,,,
Back to the list

****** Only allow FTP access from certain hosts

Q:: I have an external server that acts as an FTP server for the company personnel and also as an anonymous FTP server for our clients. It's been a good little server until recently, when we found that it's getting abused by folks other than our clients and is causing some network bandwidth issues for us. We don't have issues if the server is running after hours, as that doesn't affect the staff that use the same network during the day. So I put the service in a cron job and have it stop in the morning and restart when the office closes down for the day. This solved the network bandwidth issues, but then caused another problem. The staff that need to update the files on the FTP server need to be able to do so during the working hours of the office. I need to have it running for the local staff but not running from the outside. I have some thoughts but they deal with modifying either hosts.allow/hosts.deny files or using some kind of xinetd trick to get them to work. I'm not sure what would be a good solution for this. The server is running CentOS 4.5, using vsftpd running as a standalone daemon. The machine only has a single network card and IP address and is only visible via that address.
If you must do this, keep the upload area separate from the downloads, so people cannot download material that has been anonymously uploaded; it has to be moved over by someone with a login account. A better solution is to disable anonymous uploads altogether and provide your clients with their own FTP accounts. If you really want to continue offering unrestricted anonymous access out of office hours, you can use the hosts.allow and hosts.deny files in /etc. You enable this by putting
---
tcp_wrappers=YES
,,,
in /etc/vsftpd.conf. Then ensure your local network has access - add this line to /etc/hosts.allow
---
vsftpd: 192.168.1.
,,,
Note the trailing "." on the address to match the whole subnet, and change the address to match your network. Now put these two lines into /etc/cron.d/vsftpd
---
0 18 * * 1-5 root sed -i '/^vsftpd/d' /etc/hosts.deny
0 8 * * 1-5 root echo "vsftpd: ALL" >>/etc/hosts.deny
,,,
and force cron to reload with
---
killall -HUP crond
,,,
This will modify hosts.deny to deny all addresses, except those specified in hosts.allow, between 0800 and 1800 Monday to Friday and clear the block at other times.
Back to the list

****** Best distribution for web/mail small office server?

Q:: I'm setting up a web/mail server. I'm a novice, so what would you recommend as a user friendly, secure Linux OS and web/mail server software with an easy interface for sharing Windows files? Preferably something with a large user manual; it will be used in a small office.
A:: Oh dear, all I can say with any certainty is that whatever I recommend will produce howls of disagreement from those who favour other distributions. While the popular distributions are general purpose and all suitable for desktop, workstation or server use, there are a number of smaller distributions that are specifically designed for the sort of thing you want to do. One such distro is ClarkConnect (www.clarkconnect.com).
While ClarkConnect is best known as an Internet gateway, a means of connecting a network to the Internet, with suitable content filters and access controls, it can also be used as an intranet server. As you admit to being a novice and will be using this in a commercial environment, I strongly suggest that you consider taking one of the paid-for versions, although you could install a free version first to try it out. ClarkConnect provide Community, Office and Enterprise versions. The Community Edition is completely free while the other two have 30-day trial periods. The paid-for editions offer extra features and, most importantly, support. ClarkConnect needs a keyboard and monitor attached to the server for installation and basic initial configuration; after that everything is done via its web interface. Administration is done over a secure SSL connection using a non-standard port. You need to know the IP address of the ClarkConnect server, which you can see after logging in with the root password; then you can use any web browser on the network to connect to https://ip-address:81. A Linux installation is as secure as you make it. Using a server-oriented distro with a minimal number of packages helps, but it is still up to you to ensure that software is kept up to date (use the Software Install menu in the administration interface) and that you set up a firewall. The firewall is included and set up from your browser. Even if you are not using this as an Internet gateway, it is wise to protect the services and data on the box with its own firewall, in addition to whatever you have on your Internet gateway or router. It is also possible to use a general purpose distro for this task, by opting to install server instead of desktop packages when you install it.
Most distros include the excellent Webmin program that can be used to administer servers with a web browser, but for a separate server machine, especially when your experience is limited, a purpose-built distro is probably the best choice. While ClarkConnect doesn't come with a printed manual, there's plenty of documentation on the website, both in the detailed user guide and in the various HOWTOs. There are also user forums for peer group help.
Back to the list

****** Booting problem: MP-BIOS bug 8254 error message

Q:: I bought what I thought was the greatest money-for-value-wise PC, and I still think it is; one thing bothers me though. When I start up the PC and select Ubuntu from Grub, a message is printed telling me that there is MP-BIOS bug 8254 and some timer is not connected. Also, booting shows the text
---
MP-BIOS bug: 8254 timer not connected to IO_APIC
Kernel panic - not syncing: IO_APIC + timer doesn't work!
,,,
I searched on Google and Ubuntu's help, but all I could find was a bare-bones description that my timer doesn't work. I am guessing it has something to do with my NVIDIA 7300LE (I knew that cheap things turn out to be expensive in the end), but what exactly? Another fact that could probably help: everything I tried in 3D is very fragile and buggy on my machine. Do I need to buy a better video card?!
A:: This is not caused by your video card, but may well be the cause of your video problems. APIC (Advanced Programmable Interrupt Controller) handles timing and interrupts for various components on your motherboard, including disk controllers and video card slots. It is common for computers to have APIC controllers that break the specifications; many manufacturers consider "it works with Windows" to be an acceptable alternative to following the standards. You have already discovered that you need to append the keyword noapic to the boot parameters with live CDs, but you also need to do this when booting from your hard disk.
Before you do that, check the manufacturer's website for a BIOS update; it may be that this has already been fixed in a later firmware. If not, you need to alter the boot menu to always use noapic when booting. Ubuntu doesn't include a configuration program for the boot process, so you will have to edit the configuration file manually. Press Alt-F2 and type
---
sudo gedit /boot/grub/menu.lst
,,,
This will open the boot menu configuration file in an editor. Most of the lines start with a #; these are comments that you can ignore. Go to the first line starting with title; this is the first option on the boot menu. You need to change the first kernel line below this: add noapic to the end of that line, making sure there is a space between the previous last word and noapic, and save the file. When you reboot, the BIOS error message should not appear and your 3D graphics should be more stable. You may notice other improvements, because buggy APIC firmware can cause all sorts of problems, from poor disk drive performance to clocks running at the wrong speed.
Back to the list

****** Can a RAM disk replace a swap partition?

Q:: I'm trying to perform an experiment on my Fedora box, to set up a RAM disk to use as a swap device to replace the swap partition. There seems to be some debate over whether there's any point to this so I thought that I'd set it up and test the performance with a few games of Doom 3. Can someone explain how I might go about this?
A:: We've been trying to figure out why you would want to do this, but we really have no idea. It's easy enough to build a RAM disk, then install a swap filesystem on the block device. You can make a swap filesystem on the RAM device, assuming RAM disk support is compiled into the kernel, with:
---
# mkswap /dev/ram0
# swapoff -a
# swapon /dev/ram0
,,,
You can monitor the use of the swap device with 'free', and the actual paging with 'vmstat' as you're doing your testing.
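For the testing itself, run the monitoring commands in a second terminal while the game is loading, for example:

```shell
# Swap usage in megabytes; after the swapon above, the Swap: line
# reflects the RAM disk device.
free -m
# Three one-second samples; the si/so columns show pages swapped in/out.
if command -v vmstat >/dev/null; then vmstat 1 3; fi
```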
We'd be very interested to see what results you get from your testing, though, because we don't think that anything exciting is going to happen at all! Back to the list ****** Intel PRO/Wireless 3945ABG network not working in Linux Q:: I previously used Linux in 1996/1997 to run a Unix application on a laptop, as Linux was free and a Sparcbook cost £10K. I recently thought it would be great to use it again and so installed it on my home Dell XPS m1210. I looked around the web and it seemed that Slackware 10 was the best for my machine. I have now successfully installed it and use Lilo to dual boot between Windows Vista and Linux. Except I cannot get the wireless to work, or more precisely: I have no idea how to get the wireless to work! I look on the web and there are solutions out there, but it all seems like a foreign language; I have forgotten so much over the last 10 years, I feel like a complete novice. My wireless card is: Intel PRO/Wireless 3945ABG Network Connection A:: There is a driver for this wireless card, available from http://ipw3945.sourceforge.net. This is an official driver project created by Intel, but it requires a fairly recent kernel to run, 2.6.13 at least. Slackware 10 is more than three years old - much older than this driver - and uses a 2.4 kernel. In order to use modern hardware reliably, you need a distro, and particularly a kernel, that is at least as new as the hardware. If you want to stick with Slackware, use the newly-released 12.0, which is the first Slackware release to default to a 2.6 series kernel, something your wireless card needs. The packages you need to use this card with Slackware 12.0 are available from ftp://ftp.slackware.at/slackware-12.0/extra/intel-wlan-ipw3945. Alternatively, you could install any distro that carries ipw3945 packages in its software repositories. Ubuntu would be a good choice because the ipw3945 drivers are included in the default installation, so it should "just work".
Newer Fedoras also have ipw3945 available, but in this case you need to add the ATrpms repository to the package manager before you can install the packages. Details on how to add the repository are at http://atrpms.net/install.html. The ATrpms site contains a lot more than wireless drivers; there are plenty of packages of all descriptions, so it is well worth adding to the list of repositories. Back to the list ****** Sync a Windows CE PDA with Linux Q:: I would like to switch to Linux, but I fear that syncing my PDA with Microsoft Outlook may not work. Also, I have Money for PPC on my PDA, and that is important to me - does GnuCash provide a similar feature? A:: SynCE (www.synce.org/index.php/SynCE-Wiki) is a framework that allows Linux software to synchronise with Windows Pocket PC devices. This works, with varying degrees of user-friendliness and success, with various programs. One of the easiest to sync with is the KDE PIM suite of KMail, Kontact, KAddressBook and KOrganizer. To do this you need the synce-kde package, which comes with most distros, although not all of them install it by default; run the package manager and install it if it is not already marked as installed. Then you will be able to sync mail and contacts. Of course, this means you will need to be running a system based on the KDE desktop, such as Mandriva, Kubuntu, PCLinuxOS or SUSE. You can find links to them - and many other distros - at www.distrowatch.com. Syncing your financial records is another matter. GnuCash is able to import standard QIF account files, but not export them. However, KMyMoney (http://kmymoney2.sourceforge.net) does offer QIF import and export, so you should be able to import files from your PDA and then transfer them back after making modifications. Unless you have some formal bookkeeping training or accountancy experience, you'll probably find KMyMoney easier to learn than GnuCash.
KMyMoney is also a KDE program and should be available with any of the previously mentioned distros. Back to the list ****** PNG to GIF URL rewriting in Apache Q:: I am writing a website that uses .png images throughout - most of these images use transparency. It works great on all the latest browsers but (as expected) not IE6. To compensate for this I've created .gif versions of each of the graphics (as well as a custom style sheet) which should load in place of the .png images if the user is still on IE6. To achieve this I want to use mod_rewrite and .htaccess to make it transparent - so that images/png/image1.png is rewritten as images/gif/image1.gif. This is my .htaccess file:
---
RewriteEngine On
RewriteCond %{HTTP_USER_AGENT} "MSIE 6"
RewriteRule /images/png/([A-Za-z0-9])+\.png$ /images/gif/$1+\.gif
RewriteCond %{HTTP_USER_AGENT} "MSIE 6"
RewriteRule css/style.css css/iestyle.css
,,,
The CSS rewrite works perfectly but the image replacement (png to gif) doesn't. A:: You have the right idea in using mod_rewrite to change the URLs. It is falling over because you are using a + to join strings, but mod_rewrite works with regular expressions, where + is a pattern-matching character, not an operator. With regular expressions, you don't need to join strings; instead you use parentheses to mark the parts you want unchanged and $1, $2... to include them in the destination, as you have done - everything else is either literal text or regular expression characters. So to replace the last occurrence of foo in a string with bar, you would use
---
/(.*)foo(.*)/$1bar$2/
,,,
In your case, you want to change anything starting with images/png and ending in .png, replacing both occurrences of png with gif. You can do this by replacing your first RewriteRule line with one of
---
RewriteRule /images/png/(.*)\.png$ /images/gif/$1\.gif
RewriteRule /(.*)/png/(.*)\.png$ /$1/gif/$2\.gif
,,,
The first is easier to read, but the second will work with images in other directories too.
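If you want to sanity-check a rewrite pattern before uploading the .htaccess file, you can exercise the same regular expression with sed at a shell prompt (an approximation - mod_rewrite uses Perl-compatible regular expressions and sed -E uses POSIX extended ones, but for a pattern this simple they behave identically):

```shell
# Dry-run the corrected rewrite pattern against a sample URL path.
echo "/images/png/image1.png" |
    sed -E 's|/images/png/(.*)\.png$|/images/gif/\1.gif|'
# prints /images/gif/image1.gif
```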
Back to the list ****** Can NTFS partitions be defragmented on Linux? Q:: I have an external hard drive, NTFS-formatted. I need to defragment it but I don't want to lose the data on it. Can you defragment NTFS on Linux? I run Ubuntu Feisty Fawn on an old PC2800 computer. A:: The short answer is no, not really. Why is the drive using NTFS in the first place? If it contains a Windows bootable installation, any attempt you make to defragment it in Linux will most likely render it unbootable in Windows. But if it does contain Windows, why not use that to defragment the drive - Windows can be useful for more than playing games. If the drive is used purely for data, then you can greatly reduce fragmentation by copying all the data off, reformatting the drive and copying the data back. This requires an NTFS driver with full write support, either the commercial Paragon NTFS for Linux application or NTFS-3G, which is included in the Ubuntu repositories. You'll also need the ntfsprogs package, so fire up Synaptic and install both of those. Now you can do the whole job by opening a terminal, changing to a directory with enough space to hold the contents of the NTFS drive and running
---
tar cf ntfs.tar /mnt/ntfs && umount /mnt/ntfs && mkntfs /dev/sda1 && mount /dev/sda1 /mnt/ntfs -t ntfs-3g && tar xf ntfs.tar -C /mnt/ntfs
,,,
This is all on one line. The two tar commands and mkntfs take a while, so chaining the commands together like this means you don't need to babysit the machine, yet each command will only be executed if the previous one was successful (you don't want to reformat the drive if copying the data failed). This example assumes your drive is at /dev/sda1 and mounted on /mnt/ntfs. Make sure you change these to the correct paths suitable for your machine before you run it. If you are short of space to save the contents, you can create a compressed archive, but this will take longer, particularly when copying from the drive.
Do this with
---
tar czf ntfs.tar.gz /mnt/ntfs && umount /mnt/ntfs && mkntfs /dev/sda1 && mount /dev/sda1 /mnt/ntfs -t ntfs-3g && tar xf ntfs.tar.gz -C /mnt/ntfs
,,,
If you are using NTFS so the drive is readable in Windows (why else would you use it?) and you will only use it with your own Windows computers, a better solution would be to format the disk as ext2 and install the ext2 driver from www.fs-driver.org on your Windows computer(s). Then you no longer have to worry about disk fragmentation and you will get better disk performance in Linux. The above commands will do this if you replace mkntfs with mke2fs and remove
---
-t ntfs-3g
,,,
from the mount command. Back to the list ****** Lightweight distro needed Q:: I am looking for an OS suitable for an AMD K-6/200. I thought NetBSD might be a good choice, but it turns out the basic install results in an OS that is command line only. XFree86 (not Xorg) needs to be set up separately. I'm disabled and the extra effort is a problem for me. Is there an 'easy' version, like PC-BSD or DesktopBSD are easy versions of FreeBSD? I had tried DSL on a P2/400 machine and didn't care for it, but I just discovered DSL-N. This has a real word processor! How much net performance do you end up gaining if you install Gnome or KDE on either NetBSD or DSL-N? Fedora, with Gnome, runs infuriatingly slowly on the P2/400. A:: A K-6/200 is slow by current standards, so you'll need a lightweight distro to give reasonable performance. Most importantly, you will need a lightweight window manager, which definitely excludes Gnome and KDE. Something using Fluxbox, Xfce or IceWM would be far more suitable. As you need word processing, Xfce may be a good choice, as it uses the GTK toolkit, as does AbiWord. With limited resources, choosing a set of applications that use the same toolkit and libraries will help your system run more efficiently. Speaking of resources, one of the best improvements for any Linux system that needs more speed is more memory.
Spending a few pounds/dollars/euros/pieces-of-eight on extra RAM generally gives a greater improvement than spending a similar amount on a faster processor. There are a number of distros designed for lightweight systems; you have already discovered DSL and DSL-N, but you could also consider Puppy Linux, from www.puppylinux.org. DSL is limited by the stipulation that the ISO image should never exceed 50MB, while Puppy Linux is nearly twice that. This means it includes a lot more, such as the AbiWord word processor and accompanying office software, SeaMonkey (the new name for the Mozilla suite) for web and mail and plenty more. The main drawback of Puppy is that the hard disk installation process is rather convoluted, as it is mainly designed as a live CD system. You could also run it from the CD, using your hard disk only for storage of data and settings. Another alternative, although a little heavier, is Zenwalk (www.zenwalk.org). If you have the amount of RAM that was typically used on 200MHz machines when new, you will definitely need more to use Zenwalk, but it will give you more features than smaller distros. Running any OS on a K-6/200 is going to be a compromise between features and performance, but it is definitely possible: doubly so if you add some extra RAM. Back to the list ****** Ubuntu boot error: unable to access tty Q:: When I try to run or install Ubuntu, I get the following message after the splash screen comes up: 'unable to access tty, job control turned off' and am returned to a terminal prompt. Ubuntu is apparently trying to access my floppy drive for some reason, because the floppy drive turns on until I get the error message. A:: It appears that this error is caused by the kernel being unable to find your boot drive, so the floppy drive light comes on because it is trying every device listed in the BIOS. As there are a couple of reported causes of this problem, there's more than one possible solution.
One is to boot from the install disc and edit the fstab of your installed system. If your root partition is on /dev/sda1, the commands you need are
---
sudo -i
mount /dev/sda1 /mnt
gedit /mnt/etc/fstab
,,,
You should see the line that mounts your root partition in fstab; it will look something like
---
# /dev/sda1
UUID=71f72f22-0a14-45b7-9057-f7b0bd9d819c / ext3 defaults....
,,,
The UUID (Universally Unique IDentifier) should enable Ubuntu to find the root partition, even if your device nodes change (such as when you add another disk), but it can cause problems here. Change the UUID=xyz string back to the device node and your system should boot again. The fstab line should now look like:
---
# /dev/sda1
/dev/sda1 / ext3 defaults....
,,,
The other solution is more extreme, so only try it if the fstab trick fails. You need to open your computer and disconnect any extra hard and CD/DVD drives, leaving only your boot drive and the DVD from which you installed - turn off the computer first! Disconnect the floppy drive too; removing the power cables from the unneeded devices should be sufficient. Your system should now boot. Then you add the piix module to the ramdisk image that Ubuntu loads when it boots with these terminal commands
---
echo piix | sudo tee -a /etc/initramfs-tools/modules
sudo update-initramfs -u
,,,
(tee is needed for the first command because with a plain sudo echo and >>, the redirection would be performed by your unprivileged shell, not by root). You should now be able to shut down, reconnect the devices and start up. This bug appears to affect a small number of Ubuntu users, and only those with multiple drives fitted. It has also been reported that when the problem is caused by a floppy drive, it can be circumvented by leaving a disk in the drive, but we were unable to verify this and it sounds like a kludge anyway. Back to the list ****** Recover file backup operations after network disconnection Q:: I use IBackup, because I can have backups from my own PC (Ubuntu) and from my wife's Windows PC. She can manage her backups without any intervention from me.
The problem is that the connection often drops during the backup on my PC, which is performed by cron. When that happens the stunnel I created collapses, which is devastating for the backup, and I end up with a backup that was partially copied to the IBackup server. Is there a way to recover from such a disconnect, or even to actively reconnect, without losing what you are doing? The IBackup server does not allow setting the time and datestamps of the copied files, causing the files all to have the time and date of copying. For that reason I copy tarred files and lose the rsync ability. This might be incentive enough to switch to Rsync.net; however, I will have to copy the files from my wife's PC too. With IBackup she has her own connection and URL. A:: If you are using rsync, restarting the backup should be no trouble, because rsync will simply pick up where it left off. The server may be using the time of copying as the timestamp because of your rsync options. You need to call rsync with the --times option to preserve timestamps. The --archive option combines several options useful for backups, including --times. This should remove the need to copy tar archives to the server, and therefore mean that you are copying individual files in the same form that they exist on your original machine, which makes restarting a backup easier. I tried Rsync.net after reading the article (I used Strongspace at the time) and switched to them completely. Backing up multiple machines is easy as you can do more or less what you want with the available space, so you can create a directory for each machine's backup. Rsync.net uses SSH for rsync transfers, so there's no need for stunnel, and you can use Duplicity to encrypt the data for storage. An alternative approach is to back up everything to a local disk then sync that to the remote server.
This has the advantage that your first line of backups is local - making restoration faster - but it does mean that the backup computer has to be switched on whenever any computer needs to make a backup. Back to the list ****** Booting a DVD in Linux Q:: I want to install Linux on an older PC, dual booting with Windows 98SE. This computer is a seven-year-old Athlon 600 in an MSI motherboard with 128MB RAM, two hard drives, one DVD-ROM drive and a CD-RW drive. The BIOS of this older PC doesn't have an option to boot from a DVD-ROM drive. The boot sequence allows me to use a CD-ROM drive as the first device and I am comfortable with changing the boot sequence. The forums told me to install Windows first, if dual booting is required (it is). I used Partition Magic V5 to set up both FAT and Linux partitions. I believe that Linux uses a different filesystem format to FAT, but I tried using a Windows start-up floppy to 'set up' or 'install' the DVD and failed. Would this work if your disc had been a CD-ROM? The floppy disk from Red Hat 6.1 allowed me to start running the Red Hat CD, but it demanded the Red Hat CD. I tried the Red Hat CD, which worked, but aborted the install because I would prefer (K)ubuntu. Do I need a Linux boot floppy with DVD drivers on it, to get it installed? A:: As far as your BIOS is concerned, booting from CD and DVD are the same; a DVD is seen as a large CD-ROM. Older Linux distros used a boot floppy to kickstart the CD installer, as a lot of hardware did not support booting from CDs at the time. Your vintage hardware should support booting from optical disks, whether CD or DVD. As long as you set your BIOS to boot from CD, you ought to have no problems. But this is dependent on BIOS idiosyncrasies; some older BIOSes got confused when more than one optical drive was fitted. If you set the BIOS to boot from CD and still cannot boot the DVD, try disconnecting the cables from your CD-RW drive so you only have the one optical drive.
It is rare to need a boot floppy to install from CD or DVD nowadays but, just in case, we have provided one on the DVD. Smart Boot Manager, in the Essentials/SBM directory of the DVD, is a bootable floppy that will transfer the boot process to an optical or hard disk. Run RAWRITE.EXE in Windows, put a blank floppy in the drive and select sbootmgr.dsk as the source. By booting from this disk, you will be able to boot from your DVD. The different filesystem formats of Linux and Windows are irrelevant at this point, as all data is coming from the DVD, which uses another type of filesystem (the same as used by CDs). Using Windows partitioning tools to create Linux partitions is known to cause difficulties. Use Partition Magic to delete the partitions you created for Linux, including the swap, leaving unallocated space. Then tell the Ubuntu installer to use the free space on the drive (free space in this context means unpartitioned space, not unused space within existing partitions). Your PC may show its age in the RAM. 128MB is not a lot by today's standards; a modern desktop, like KDE in Kubuntu, will run slowly. Back to the list ****** LILO boot problems after moving Linux installation to larger hard drive Q:: I found that I needed a bigger hard disk, so I plugged one in as hdb, partitioned it as I wanted, copied filesystems from the old drive (hda), and tried to boot from the new one. Unfortunately, this operation turned out to be unsuccessful. I made and copied partitions for /, /boot, /usr, /home, among others. I also made a swap partition. /boot is primary partition 1, marked bootable. I wrote an MBR record, using lilo -M /dev/hdb1. I mounted the new /boot and / partitions, edited the new copy of /etc/lilo.conf (now in /mnt/hdb5), and ran lilo -C /mnt/hdb5/etc/lilo.conf -b /dev/hdb1, which appeared to work.
When I try to boot from the new drive, I get through Lilo's boot choice screen, and a fair amount of other stuff, ending with:
---
initrd finished
Freeing unused kernel memory
Warning: Unable to open an input console
,,,
After that, only the reset button on the box will make anything happen. This is "Mandrakelinux release 10.2 (Limited Edition 2005) for i586" A:: This is not a problem with the bootloader. Once the kernel has loaded, the bootloader's job is done. This error looks like a missing file from /dev, probably /dev/console. Although the dynamic dev filesystems, like udev and its predecessor devfs, create your device nodes in /dev automatically, there are some that are needed before devfs/udev start up. I suspect that you omitted the contents of /dev when making a copy of your root partition, either by not including it in the copy command, or by excluding all other filesystems when copying (you didn't mention how you copied the filesystems, but cp, rsync and tar all have options to exclude other filesystems). The contents of your original /dev directory are now hidden because a new, dynamic /dev/ has been mounted on top of them, but as you will see, they are still accessible.
---
mkdir /mnt/tmp
mount --bind / /mnt/tmp
,,,
will make your whole root filesystem available from /mnt/tmp, without any of the other filesystems that are mounted at various points. So /mnt/tmp/home will be empty while /mnt/tmp/dev will contain a few device files. Copy these to dev on your new root partition and your boot error should disappear. The easiest way to ensure your new root filesystem contains exactly the same files as your current one is
---
rsync -a --delete /mnt/tmp/ /mnt/newroot/
,,,
Back to the list ****** Admin MySQL through a web browser with phpMyAdmin Q:: We have a web hosting account that provides PHP and MySQL with an Apache server. We have FTP access for uploading files but no shell account, which makes setting up SQL databases and the like rather tricky.
We are not able to install any extra software on this server. We could move to somewhere with shell access, but the accountant likes the price we pay here. Is there a way of gaining administrative access through a web browser to do what we need? A:: While changing to a host that allows SSH access would give more flexibility, there are solutions that remain attractive even if you cannot get a command prompt. Foremost is phpMyAdmin (www.phpmyadmin.net). As you have probably guessed from the name, this is a MySQL administration program written in PHP; it only needs to be installed as a set of files in your web space, after suitable securing and configuration. Most web hosts only allow access to the database from local IPs, so the scripts must be running on the web host, not your own machine. Download and unpack one of the tarballs from the phpMyAdmin website (they differ only in the languages included and archiving method used). The traditional way of setting up phpMyAdmin was to create a suitable config.inc.php file, using the included sample as a basis, but there is now a setup script that you can run once you have copied the files to your web server. Before you do, make sure this is secure. Anyone with access to your server's phpMyAdmin directory can read or change any of your databases, so secure it with a .htaccess file (or other means) so that only passworded accounts can connect. If possible, include it in a section of your web space that is accessible via HTTPS, as you will be transferring passwords when you run the setup script. Create a config directory in the phpmyadmin directory and copy the whole phpmyadmin directory to your web space (including the .htaccess file). Go to https://www.your.webspace/phpmyadmindir/setup.php and fill out the boxes with your MySQL login details. Now you can go to https://www.your.webspace/phpmyadmindir/ and see a list of your databases. Select one and you'll see the tables in it.
From here you can browse, query and modify your SQL tables to your heart's content. If you use PostgreSQL instead of MySQL, there is an equivalent program called phpPgAdmin, available from http://phppgadmin.sourceforge.net. SQL databases are not all you can administer via a web interface. Webmin lets you change just about anything you are allowed to change on a *nix box, and it is by no means limited to servers. The disadvantage of Webmin in your situation is that it must be installed and run by root, because it uses its own built-in server, rather than running through the likes of Apache. Contact your web host about this: they may have installed either Webmin or its limited cousin Usermin. If they haven't, they may be willing to, as it would benefit all customers. They may also install phpMyAdmin for you, so you don't need to take up your own space and bandwidth allowances with it. Back to the list ****** Which ISPs support Linux? Q:: Have you a list of ISPs that can be accessed from Linux PCs? AOL and BT do not appear to be of any use for a PC running the Linux operating system. No doubt the answer is staring me in the face but I'd be grateful for you to point that out... A:: Almost all ISPs that provide PPP-based dial-up will work with Linux - which is just about every one except AOL. As AOL uses a non-standard protocol for the connection, you will be unable to get AOL to work; however, BT's service should work without any problems from Linux. BT offers several dial-up options, including BTInternet and BTConnect. They all work using PPP and CHAP for authentication, and will work happily through Linux. Many ISPs now support Linux as a platform, or at least acknowledge that their service will work with Linux. Some ISPs, such as Demon Internet and UKLinux.net, actively support Linux and will provide information for configuration of their service with Linux.
Back to the list ****** Gentoo Live CD renaming PC (login prompt) Q:: I am using Fedora; after trying out a Live CD of Gentoo, I notice that when in a terminal the name is now root@livecd. How can I change this back to my original name? Why did it happen? A:: It sounds like you're using DHCP to configure your network (possibly built into your Internet router). I suspect that when you booted the live CD, it said "my name is livecd, give me an IP address". Your router then remembered this hostname and reissued it the next time it got a request from the same MAC address. If so, the answer is to have your computer send its hostname with the DHCP request, by setting its hostname in the network config. Go into System > Administration > Network, select your interface and click on Edit. Type the hostname you want to use in the Hostname (optional) box. Now Fedora will tell the DHCP server that it wants to use this hostname instead of leaving it up to the DHCP server to decide on a hostname. There may be an alternative method, depending on your router/DHCP server. Some allow you to specify the hostname or IP address given out to specific MAC addresses (a MAC address is the hardware identification for an individual network interface). If your router supports this, you can set the hostname in here - it should show you a list of connected MAC addresses, or you can find it by running ifconfig on the computer: the MAC address is on the first line of an interface's output, labelled HWaddr, and looks like 00:50:56:C0:00:08. This method means that you will always have the same hostname or IP address whichever OS you boot on this computer, which can be either a good thing or a bad thing depending on your preferences. Back to the list ****** Audio CD playing silent in Linux Q:: I can't play audio CDs in either my user account or my partner's. My audio is working - I'm having no problems with digital tracks ripped onto my system.
KsCD shows that an audio CD seems to be playing but there's no audio output. The CD is turned up to about 90 per cent in a mixer. The master is at 100 per cent and nothing obvious is muted. I did have to change the /etc/fstab so that /dev/cdrom had user, users, noauto listed. A:: To take your last comment first, audio CDs are not mounted, so /etc/fstab has no bearing on this. Most distros use automounting via HAL nowadays, so data CDs don't usually need an entry in /etc/fstab either. To your audio problem: there are two ways of getting the audio data from the drive to your sound card. The first is the traditional method of using an audio cable. This plugs into the small, flat four-pin connector on the back of the drive, and the matching connector on the sound card (or motherboard, if you have on-board sound). This is the most efficient method because all the computer needs to do is send the start/stop/etc commands to the drive; the audio simply runs down the cable where it is mixed by the sound card. The second method uses digital extraction, where the computer pulls the CDDA (Compact Disc Digital Audio) data from the drive via its ATA/SATA cable, performs any audio conversion needed and passes the result to the sound card. This is more processor-intensive, but the load is insignificant with modern hardware, and tends to be the default setup because it saves the manufacturer a few pennies per machine on cables. There is an easy way to tell if a CD drive is sending audio directly: if it has a headphone socket, plug some headphones into it. If you can hear the music, you are trying to play via a direct audio connection. KsCD does not use digital extraction by default; this has to be turned on explicitly by enabling the Use Direct Digital Playback checkbox in the configuration window.
You may prefer a different media player: KsCD is OK as a basic CD player, but if you also listen to MP3 or Ogg Vorbis audio files or Internet radio, KDE's all-encompassing audio player, Amarok, may be more suitable. Back to the list ****** Change colour of text and background on the Linux console Q:: How can I change the colour of the text and background on the Linux console in run level 3? I would like to change the colour to black text on a white background. If possible, I would like this change to be evident as soon as the kernel begins to load so all the boot messages are black text on a white background. I have a suspicion that I may need to do some kernel hacking for this and if so, which part of the kernel source code would need to be changed? A:: Once a console is started, you can change its colours in two ways. You can echo ANSI codes, provided you can remember what they are, or you can use the setterm command to make the changes. To set black on white, you would use
---
setterm -background white -foreground black -store
,,,
The setterm man page contains full details of the various options and the valid colour values. You can run this command from a startup script; the details are distro-dependent, but /etc/rc.local is often a good starting point. If you want to change the colours right from the start, you'll need to edit the kernel source to change the default colours and recompile your kernel. The file to edit is drivers/char/vt.c (it was drivers/char/console.c with older kernels); find the lines starting with
---
vc->vc_def_color = 0x07; /* white */
,,,
This is at line 2739 in the 2.6.22 sources. The two hex digits represent the background and foreground, so 0x07 is the default white on black, whereas 0x70 would be the reverse setting that you want. Change the values to your preferred defaults and recompile your kernel in the usual way.
The numbers for the various colours are
---
0 ................black
1 ................blue
2 ...............green
3 ...............cyan
4 ...............red
5 ...............purple
6 ...............brown/yellow
7 ...............white
,,,
Add 8 to these numbers to get the 'bright' versions, but rendering of bright colours is hardware-dependent and may blink on some systems. Back to the list ****** Pluscom WU-ZD1211B USB wireless stick not working in Linux Q:: I'm trying to install OpenSUSE 10.2 on an iMac for my granddaughter. The install went well until I tried to add a Pluscom WU-ZD1211B USB wireless stick; neither the drivers that came with the stick nor those from the net seem to install. When running make as root, I get lots of errors: make [6] Error1, make [5] Error 2, leaving dir etc. In YaST hardware the stick is shown as USB2.0 Wireless/drivers: active/ modules: yes/ modprobe: zd1211rw? I've copied the firmware as the readme advises, and set up the SSID etc in YaST. There are no zd1211rw drivers or ndiswrapper on SUSE 10.2 sources or spike, so I can't install with YaST! As a newbie still fighting the command line, I hope you can help - this is driving me mad! A:: I can confirm that the zd1211 drivers work with a PPC kernel; I used them with my iBook before there were drivers for the internal wireless card. However, you do not need to install any drivers for a zd1211-based adaptor; they are included with the standard SUSE 10.2 kernel. The following two commands, run as root, should confirm the presence of the zd1211rw module.
---
modprobe -l | grep zd1211
modinfo zd1211rw
,,,
You can also see the files in YaST. Go into the Software Management section, find the kernel-default package and click on the Files tab. Scroll down to see the relevant modules in drivers/net/wireless. Did modprobe run with no errors? If not, you may need to ensure the firmware is installed in the correct place.
Most drivers expect to find the firmware files directly in /lib/firmware, but this one expects them to be in /lib/firmware/zd1211. You can get the latest firmware from http://sourceforge.net/projects/zd1211. If the module is loading, and the network device shows up in the output from --- ifconfig -a ,,, then your problem is most likely with the wireless settings. The first thing to do is turn off encryption. Normally, you shouldn't run an unencrypted wireless connection, but encryption only gets in the way when setting things up, and until you get things working you have no connection to protect anyway. When using YaST to set up your network, try with and without Network Manager. Some setups work with Network Manager whereas others are better off with the standard configuration method. Incidentally, should you run into similar problems when compiling from source in future, the error numbers are not useful on their own; it is the accompanying text that shows the reason for the error. Back to the list ****** Convert RAW images into TIFF with a script Q:: I'd like to have a simple command to convert all the RAW images in a folder, let's say /home/andy/photographs/new, into TIFF with LZW compression. I don't know if it's relevant, but the extension for the RAW files is .PEF (the Pentax RAW format). A:: The program for converting digital camera RAW files is dcraw (www.cybercom.net/~dcoffin/dcraw). This converts from most digital camera raw formats to the NetPBM PPM (Portable PixMap) format. You then need another program to convert the PPM data to TIFF. ImageMagick's convert command and the pnmtotiff program from the NetPBM suite are both capable of doing this. The easiest answer to the question of which to use is "which one is already installed?" Dcraw should be in your distro's repository; otherwise download the source and compile/install in the usual way. 
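Once dcraw and a converter are installed, it helps to check the pipeline on a single file before scripting a whole directory. This is only a sketch - photo.PEF is a placeholder name, and the command is printed rather than executed so you can see what would run:

```shell
# Build the output name by swapping the extension, then show the
# one-file conversion command (photo.PEF is a placeholder name).
f="photo.PEF"
out="${f%.PEF}.tif"                        # gives photo.tif
echo "dcraw -c $f | pnmtotiff -lzw > $out" # the command you would type
```

Once a single file converts cleanly, looping over the directory is a small step.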
There's no need to fill your hard disk up with PPM files when converting (PPM is a big, uncompressed format using three bytes per pixel); you can pipe the output of dcraw directly to the conversion program. To convert a directory of files using pnmtotiff, run this in a terminal. --- for f in ~/photographs/new/*.PEF do dcraw -c "$f" | pnmtotiff -lzw >"${f/.PEF/.tif}" done ,,, The -c argument tells dcraw to send the data to standard output, which is piped to pnmtotiff's standard input. The ${f/.PEF/.tif} part provides the name of the original file with the .PEF extension replaced with .tif. To use ImageMagick instead, replace the third line with --- dcraw -c "$f" | convert -compress lzw - "${f/.PEF/.tif}" ,,, In this case, convert uses the standard notation of "-" for standard input. The dcraw, convert and pnmtotiff man pages detail various extra options you can add to fine-tune the process. This assumes all the PEF files are in the same directory. If your camera stores them in sub-directories, use the find command to generate a list of the names. It is also possible to store the converted images in a different directory, thus: --- find /mount/pointof/camera -name '*.PEF' | while read f do dcraw -c "$f" | convert -compress lzw - "$HOME/photographs/tiff/$(basename "$f" .PEF).tif" done ,,, This time we use the basename command to extract the base filename from the full path and remove the extension. The quotes around the various filenames are there in case any file or directory names contain spaces. Back to the list ****** Current Account folder disappearing in GnuCash Q:: I have used various versions of GnuCash for years with no problem. However, using 2.0.2 on either the latest versions of Ubuntu or SUSE, on two occasions the Current Account folder has disappeared between saving the account and restarting it. On the first occasion, I went back to an earlier saved account and retyped my bank statements. 
This time I have lost 6/52 worth of data and will not retype the lost data. Is there any way of recovering the data and/or how can this be avoided in future? Clearly the program is unusable in its current state. A:: My first response would be that I have been using GnuCash for years, and am currently using version 2.2.1, without seeing what you describe, which makes me think it is a problem specific to your setup - but that's not great consolation to you. Are you sure that your home filesystem has no errors? Running fsck on /home would be a prudent move. As far as recovering goes, GnuCash stores backups of the data and log files in its data directory; the data files are named AccountName.datestring.xac. Find the most recent undamaged one and copy it to AccountName. Do not rename it; you want the backup to still be there if the account file is lost or corrupted again. These backup files are created each time you run GnuCash and could fill up your hard drive if you left them all there, so GnuCash will remove the oldest files after a configurable period. Set this in the General section of the Preferences window; a value of 0 means your backup files will never be deleted automatically. If there are no problems with your filesystem, it would be good to get to the bottom of this error (restoring from backups is not a real solution), so run GnuCash from a terminal and the next time it does this you should see some errors in the terminal output or the logfile. Search the GnuCash mailing lists and bugzilla (both available via www.gnucash.org) for a similar problem, and file a bug report if you find none. The developers cannot fix this unless they know about it. It would be wise to try a newer version first: get GnuCash 2.2.x RPMs for SUSE from http://rpm.pbone.net. Back to the list ****** Why do distros handle USB memory stick scripts differently? Q:: There seems to be a wide discrepancy in how various distros handle scripts on USB memory sticks. 
Using Konqueror's execute command under the Tools menu: Ubuntu runs them without fuss (initially my Feisty upgrade broke this by mounting the stick non-exec). SUSE 10.2 and StartCom consider the stick as remote and won't execute anything from it. Fedora 7 live raises a prompt for confirmation, but it tries to run them from the wrong directory if Konqueror is used to open the USB device icon. The automounter mounts /dev/sdb1 on /media/disk and Konqueror sees the URL as /media/sdb1. On execution, Konqueror tries to cd to /media/sdb1 and can't find the executable. If I navigate to /media/disk the scripts can be executed normally. Though SUSE 10.2 won't execute via Konqueror, if there's an autostart.sh on the stick, KDE offers to execute this as soon as the stick is plugged in. Presumably all this stuff is configurable (surely it's not compiled in), but as HAL, udev and KDE are all involved, with a multiplicity of configuration files, where does one start looking? A:: There are a number of reasons for this, partly to do with the way KDE works and partly because of the lack of real file permissions on FAT filesystems. You have already discovered that the path KDE uses to a file on a removable device is not a real path. Konqueror displays media:/media/devname but actually mounts the device on /media/volumename (or /media/disk if the disk does not have a volume name). This causes no problems with most KDE applications, as they understand media:/ and system:/media URLs, but non-KDE programs - like bash - fall over here. It works with Ubuntu because they have patched things so that Konqueror goes directly to the mount point, although there has been some debate over whether the drawbacks of this outweigh the advantages. There is an easy way to fix this for all distros: right-click on a script and open the Properties window. Click on the icon to the right of where it says Type: Shell Script to open the filetype editor and use the Add button to add bash to the list of programs. 
Don't look in the application list for bash, just type it into the box above the list. OK this window and the Properties window, and right-clicking on a script will now show bash in the Open With sub-menu. If you put bash at the top of the application list, you will also be able to run scripts by clicking the icon. Either of these is more convenient than selecting the icon and then moving to the top of the window to find the menu. The automatic running of autostart.sh is governed by a setting in the Peripherals/Storage Media section of the KDE Control Center. The other possibility that you refer to is editing the HAL configuration to have the volume mounted with the device name instead of the volume name. This can be done, but it makes the system less intuitive, especially when using FAT filesystems created by other devices that give them a meaningful name: my camera's memory cards are always identifiable as such, as is my audio player, and there's no danger of copying files to the wrong device when both are mounted, which could easily happen if they were mounted at /media/sde1 and /media/sdf1. Back to the list ****** Install software outside of distro package manager Q:: My laptop dual-boots to PCLinuxOS. One thing that trips me up every time I try to install apps from source code is that there is no hyphen shown in the command to decompress the tar. The help shows tar xzvf /mnt, but one must actually type in tar -xzvf /mnt. Installing software is a complete boondoggle with Linux, and any explanation or justification of this situation is pointless. What would be good is if specific instructions for installing each of the cover disc programs were supplied. It would be interesting to see installation for 30 different programs shown in one document. I bet all 30 will be slightly different. For example, I am currently trying to install DEFCON. 
Where is it made obvious what I'm supposed to do when I type in ./configure and my terminal kicks back: --- bash: ./configure: No such file or directory ,,, A:: Installing software from outside of your distro's package manager can be confusing, because there are several different ways of supplying it. The generic instructions apply to the majority of software: programs supplied as source code using the autotools suite. In many cases there are installation instructions on the disc, in the form of a README or INSTALL file provided with the software, and these often contain identical instructions. These should be considered gospel when installing, as they are the instructions from the programmers. The leading argument hyphen is unnecessary unless you use a VERY old version of tar. In fact, the z option has been redundant for quite a while, as tar is able to detect common forms of compression; I normally use tar xf filename. The "." is shell notation for the current directory, so ./configure tells the shell to run a program or script called configure in the current directory. Once you understand that, the reason for the error is obvious: there is no configure script with this program. That is because DEFCON is not supplied as source code: this is a binary package that does not require any installation to run. Instead of a configure script you will see a defcon script; run that with ./defcon to start the program. I agree that this is not obvious - the only documentation supplied refers to using Mac OS - but this is a failing of the individual program. Back to the list ****** Can't SSH between host and guest OS in VMware Q:: I use VMware Workstation to run various virtual machines, both Windows and Linux - for web development and also to try out different distros. Recently I've lost the ability to SSH between the host (Ubuntu) and guest OSes. The ssh commands just hang until I hit Ctrl+C. 
I can SSH into the host or the guest from my laptop on the same network, and I can connect to the laptop from the host or any of the virtual machines; it seems to be only connecting between the host and guest on the same machine that is problematic. I can ping a guest from a host and vice versa, and it only appears to be SSH that is affected, but I haven't changed any of my SSH settings on the host, and as it affects all guests, I don't see how it can be a settings problem there. I am using bridged networking, as I have always done, and this worked previously. A:: I was hit with exactly the same problem. At first, I suspected an issue with SSH, but then I found that other protocols failed too. It turned out that anything using TCP was broken, and the problems were brought about by a kernel update. The change was in the support for the TCP offload engine in some network cards (in my case an Attansic L1 Gigabit controller). The offload engine passes some of the load for TCP processing from the kernel to the network card's controller, which causes a problem when the network traffic isn't going through the network controller even though it is on the same network - a situation unique to virtual machines. Bridged networking gives the virtual machine an IP address on the same subnet as your LAN, but the traffic doesn't go through the card. The solution is as simple as turning off some of the offload engine's features with ethtool. To turn off everything that is likely to cause a problem, run --- sudo ethtool --offload eth0 rx off tx off sg off tso off ,,, If this works, you can narrow down the options you disable; on my system I only needed --- sudo ethtool --offload eth0 tx off ,,, Once you have identified the correct parameters to disable, you can then add the command to /etc/rc.local (without the sudo part) to have it executed automatically when you boot. Back to the list ****** Use Sony Walkman NW-E507 in Linux Q:: I have a Sony NW-E507 Walkman and I use Ubuntu 7.04. 
This Walkman uses Sonic Stage for installing and removing music files and podcasts, which is what I mostly use it for. I have tried Mbnet without success. Is there any way to use this Walkman with my OS? A:: This Walkman device is heavily tied into Sony's DRM technology and Sonic Stage is required. You can mount the player as a USB mass-storage device to upload files, but those files will not be playable. This device is Windows-only, and even there it is locked into Sony's software; Mac users are out of luck too. Standard MP3 players are cheap nowadays, so the best option is usually to replace such locked hardware with something that lets you do what you want with the music you have bought. Back to the list ****** Rar files on Linux Q:: I've installed and set up BitTorrent on my Linux box as I just got my broadband connection set up. I downloaded software from a torrent site. One problem: it's a rar file, or rather a directory of rar files. I appear to have a lot of 'parts' - how the heck do I put them together as one whole file that can be installed? Now, I could go upstairs with my credit card and use my daughter's PC with its bloated junk OS to pay for an overpriced Windows version of rar software and then come and install the resulting file on my Linux box, so that I can then install Windows apps and games to demonstrate to said daughter that Linux is worth considering. In short, how do I convert a rar directory into an RPM? Oh, and as an aside, why the heck would anyone use rar files? I mean, what a fuss and bother it is! A:: The 'rar' file format is popular under Windows, as it allows an archive to be split across multiple files and for those files to be combined once downloaded. You can't convert a rar file into an RPM, as the rar is simply a package, just like a tar.gz or tar.bz2 file. But you can unpack it using the 'unrar' program, which should be available from your distro's package repository or on the original installation media. 
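As an illustration (the archive name here is a placeholder), you only need to point unrar at the first part of a multi-part set; it finds the remaining parts itself. The snippet prints the command you would type rather than running it, so it needs nothing on disk:

```shell
# A multi-part set usually looks like archive.part01.rar,
# archive.part02.rar and so on. Only the first part needs naming;
# unrar locates the rest automatically. 'x' extracts with paths.
parts="archive.part01.rar archive.part02.rar archive.part03.rar"
first=${parts%% *}        # take the first name from the list
echo "unrar x $first"     # the command you would actually run
```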
You will also have to install Wine and any other required software to be able to run programs for Windows under Linux, which may be harder work than simply finding Linux alternatives to the Windows software. Almost everything that can be done in Windows can be done in Linux using freely available software, including playing many games. Back to the list ****** Is there a game of Bridge for Linux? Q:: Do you know if there is a game of Bridge for Linux? All I can find is versions of Bridge to play over the Internet with other players. A:: After some searching we found one bridge game where you can play against the computer, but it is not open source. Does this mean that open source programmers do not play bridge, or that they prefer to play real opponents? I'm sure it would make a good topic for an apparently pointless sociological study, but that doesn't help you. The game we could find is Xcontractbridge, a commercial game supplied on CD for $29. You can find more information at www.xcontractbridge.com/xbridge.htm along with a demo version for download. This is a pre-compiled static binary that should work on most i386+ or x86_64 systems; it certainly ran without problems on my Core2Duo machine, though my bridge skills are limited to distinguishing between red and black cards (so much for heredity - my Dad was a Bridge master). Back to the list ****** Use Linux as mail proxy Q:: I want to download my emails from my ISP account automatically and then use my Windows laptop to retrieve them from the Linux box, so I don't have to download them on one machine and then again on the next, or leave them on my host's server. I would just alter the MX records to point to my own box, but my ISP doesn't offer a static IP service (ruling out dyndns). A:: Running a local MTA (Mail Transport Agent) without a static IP address is risky: changes in dynamic addressing could cause your mail to be delivered elsewhere. 
Some ISPs also block port 25, as it's a potential point of entry for would-be attackers. But you don't need to run your own MTA to achieve what you want. You can use fetchmail to pull mail from your ISP's mail server, store it locally, then run a POP3 or IMAP server to serve that mail to your local PCs. With a text editor, create .fetchmailrc in your home directory with these contents --- set daemon 300 poll mail.myisp.com with proto POP3 user 'myispuser' there with password 'mypass' is 'myuser' here options keep mda '/usr/bin/procmail -d %T' ,,, and set it to be readable by only your user with --- chmod 600 ~/.fetchmailrc ,,, The first part tells fetchmail to run in daemon mode and poll your ISP's mailserver every 300 seconds; the second gives the mailserver to contact, the username and password to use, and the local username to save the mail for. The keep option leaves mail on the server (remove it once you're sure everything works). The last line tells fetchmail to use procmail to deliver mail instead of running it through sendmail. Tell procmail to deliver the mail by putting this in ~/.procmailrc --- MAILDIR=/var/spool/mail DEFAULT=$MAILDIR/$LOGNAME/ ,,, The trailing / is important: it tells procmail to use maildir storage, which is needed by the IMAP server later. Create the mail directory with --- mkdir -p /var/spool/mail/myuser chown myuser:mail /var/spool/mail/myuser chmod 770 /var/spool/mail/myuser ,,, Test it with fetchmail --daemon 0 -v. This turns off the background mode and shows all that is happening. If it works, set fetchmail to auto-run by selecting System > Preferences > Personal > Session, pressing New in the Startup Programs tab and typing fetchmail in both boxes. Fetchmail will now run each time Gnome starts. Now the mail is on your system, you need an IMAP server to be able to access it from your LAN. Once Dovecot is installed from the Fedora repositories, make changes to /etc/dovecot.conf. 
Find the lines --- #listen= [::] #mail_location = ,,, and change them to --- listen = * mail_location = /var/spool/mail/%u ,,, You also need to allow ports 110 (POP3) and 143 (IMAP) through your firewall. For outgoing email, each computer can be left set to communicate directly with your ISP's SMTP server. Back to the list ****** Batch process creating new users Q:: I have been running CentOS since version 3 in the school where I work. I have input 500 pupils into the system one by one. I have set up a domain logon with Samba. Is there any way to bulk input so many user accounts in one go? A:: Of course there is; this is one of the areas in which textual command line tools excel, being able to use data from one file or program in another. Are you trying to create system users or Samba users, or both? Use the useradd command to create system users and smbpasswd to create Samba users. Both of these commands can be used in a script to read a list of user names from a file. Put your new user names in a file called newusers, one per line, like this --- username1 password1 real name 1 username2 password2 real name 2 ... ,,, You can create this file manually or extract the information from another source, such as using sed or awk on an existing file or running an SQL query to pull the information from your pupil database. Then run this to add them as system users. --- cat newusers | while read u p n do useradd --comment "$n" --password $(pwcrypt $p) --create-home $u done ,,, You may need to install pwcrypt before you can do this; it is a command line interface to the system crypt() function. You could use the crypt command from the older cli-crypt package, but pwcrypt is more up to date and considered a better option nowadays. If you want to force the user to change their password immediately, which is generally a good idea, add --- passwd -e $u ,,, after the useradd command. This expires their password, so they need to change it when they next log in. 
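The while read u p n idiom is what makes the one-line-per-user file format work: read assigns the first whitespace-separated word to u, the second to p, and everything left over to n, so real names containing spaces survive intact. A quick illustration with a made-up sample line:

```shell
# How 'read u p n' splits one line of the newusers file: first word
# to u (username), second to p (password), the remainder - spaces and
# all - to n (real name). The sample line is made up for illustration.
read u p n <<EOF
jsmith secret123 John A Smith
EOF
echo "user=$u password=$p name=$n"
```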
To set the Samba passwords, use smbpasswd like this --- echo -e "$p\n$p" | smbpasswd -as $u ,,, The -s option tells smbpasswd to accept the password from standard input instead of prompting for it; the echo command is in that format because smbpasswd still requires the password to be given twice when used like this. So all you need to add the users to both the system and Samba is --- cat newusers | while read u p n do useradd --comment "$n" --password $(pwcrypt $p) --create-home $u passwd -e $u echo -e "$p\n$p" | smbpasswd -as $u done ,,, You must add the users to the system password file before trying to add them to Samba, or smbpasswd will complain. Similarly, when you delete users, run smbpasswd -x username before userdel username. Back to the list ****** Hostname disappeared after upgrade to Fedora 7 Q:: I have a Linux box which I use as a server for testing web development projects (it's in my DMZ, so I use a dedicated machine rather than my main machine). When it was running an older Fedora, Windows machines were able to access the machine by its hostname. All seemed well. However, since I upgraded the machine to Fedora 7 its hostname has been unavailable - pinging webdev (the server name) no longer works, but I can still access it through its IP address. A:: I suspect you're setting the address statically instead of using DHCP. When you let the computer request an address via DHCP, the DHCP server keeps track of the hostname and IP address and, as DHCP servers generally act as local name servers too, any other computer can resolve the hostname to an IP address. You have three choices. The first is to set the IP address by DHCP. I understand you may want a fixed address for this box, but many DHCP servers have an option to map specific MAC addresses or hostnames to given IP addresses, so you get an effectively static address while still working within the DHCP framework. You set the hostname to be sent to the DHCP server in the network-config window. 
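For example, with a DHCP server that supports such mappings - dnsmasq is one - a single configuration line pins a MAC address to a fixed address and hostname (the MAC address and IP shown here are placeholders):

```
dhcp-host=00:11:22:33:44:55,webdev,192.168.1.10
```

With that in place, the box always receives 192.168.1.10 and every machine on the LAN can resolve webdev, while the box itself still just asks for an address via DHCP.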
Your second option is to record the IP address and hostname in the hosts file of every other computer on the network. The file is /etc/hosts on Linux systems and C:\WINDOWS\system32\drivers\etc\hosts on (yes, you guessed it) Windows. The format of the file is one line per IP address containing the address, full hostname and then any aliases, all separated by white space, like this --- 192.168.1.27 webserver.example.com webserver 192.168.1.43 mail.example.com mail ftp.example.com ftp ,,, The third option is to run your own local DNS server. This is nowhere near as complicated as it sounds, as long as you don't try to set up a full-blown Internet DNS server like BIND. Dnsmasq, which I use on my home network, is available from www.thekelleys.org.uk/dnsmasq and is very easy to set up. Its default setup is to use the /etc/hosts file from the computer running it to provide local DNS resolution and pass other requests to the name servers listed in /etc/resolv.conf. Just install it on one computer on your network (not the one in the DMZ) and set all the others to use that computer as their DNS server. Dnsmasq can work as a DHCP server too (SmoothWall Express uses it) and it is a lot more configurable than the built-in DHCP servers of most routers. If you have a computer that is always on, using this to provide DNS and DHCP for your network makes life easier; make sure you disable the DHCP server in the router to avoid conflicts. The dnsmasq configuration file is well documented (see the man page and the comments in the file) with sensible defaults. It may need no configuration at all for you, but if it does, the options are explained clearly. Back to the list ****** Write backup script that only selects one file Q:: I use Simple Linux Backup, which produces daily backups in the following form: Backup.Mon.tar.gz Backup.Tue.tar.gz Backup.Wed.tar.gz ... Of these, the Monday backup is the full one; the others are incremental. 
I run two versions of Simple Linux Backup; one is run as root and backs up certain system files, the other is run as my user and backs up my data. They both produce the same file names - but one set is stored in /mnt/backup/system/ and the other set is stored in /mnt/backup/data/. I am now trying to write a script to copy these files onto DVD. Of the files, it is only the respective Backup.Mon.tar.gz (full backup) files that I want to keep - I intend to run this as a cron job each Monday after the backups have completed. The problem is that I can't just use --- growisofs -Z /dev/dvd -R -J /mnt/backup/* ,,, as I get a load of files I don't want (and it wouldn't fit on one DVD). Nor can I use --- growisofs -Z /dev/dvd -R -J /mnt/backup/system/Backup.Mon.tar.gz /mnt/backup/data/Backup.Mon.tar.gz ,,, because the file names are the same! I suppose I could rename one of the files in my script before writing the DVD, but that just seems like an ugly solution. As far as I can tell growisofs does not support the -o switch to write the files as different file names. Can you offer a more elegant solution? A:: There are a number of ways you could do this, all of which use mkisofs arguments. growisofs passes most of its arguments to mkisofs to create the ISO data; only those arguments related to writing the data to the disc belong to growisofs itself. The one mkisofs argument that is not allowed is the one you mention, -o, because this causes mkisofs to write the ISO data to a file, and the whole point of growisofs is to write data to a disc. One option is to use the -m option to exclude everything but the Monday files. This will exclude every backup where the day part of the name does not start with M. --- growisofs -Z /dev/dvd -R -J -m 'Backup.[^M]??.tar.gz' /mnt/backup ,,, A more flexible method is to use the -graft-points argument. This can take a little getting used to, but it allows you to change the name or path of any file or directory you write to the disc. 
--- growisofs -Z /dev/dvd -R -J -graft-points system.tar.gz=/mnt/backup/system/Backup.Mon.tar.gz data.tar.gz=/mnt/backup/data/Backup.Mon.tar.gz ,,, will save the two files as system.tar.gz and data.tar.gz in the root of the DVD. An improved version of the command includes the date of the Monday backups in both the volume name of the DVD and each of the files, making it easier to see at a glance which DVD is current. --- DATE=$(date --reference /mnt/backup/system/Backup.Mon.tar.gz +%y%m%d) growisofs -Z /dev/dvd -R -J -graft-points -V BACKUP_$DATE system_$DATE.tar.gz=/mnt/backup/system/Backup.Mon.tar.gz data_$DATE.tar.gz=/mnt/backup/data/Backup.Mon.tar.gz ,,, The mkisofs man page has (a lot) more details. Remember also that if your distro uses cdrkit rather than cdrtools (cdrkit's licensing is more acceptable to some distros, particularly those based on Debian), mkisofs is replaced by genisoimage, although there is a symlink for mkisofs to retain compatibility. Back to the list ****** Transferring Thunderbird's Junk Filter settings (training.dat) Q:: If I buy a new computer, is there a way I can transfer Thunderbird's Junk Filter 'intelligence' by copying some files, rather than it having to re-learn from scratch? A:: The junk filter data is in a file called training.dat. This is contained in a randomly named directory in ~/.mozilla-thunderbird; Thunderbird keeps the name of the directory in ~/.mozilla-thunderbird/profiles.ini. Why do you want to transfer only the junk mail filters and not the rest of your data? Even if you use IMAP and keep all your mail on the server, there are still configuration files and filter controls that you would want to keep; otherwise you will have to recreate or reconfigure everything. You only have to transfer the .mozilla-thunderbird directory to your new computer to carry on where you left off. 
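If you do want to pick out training.dat on its own, the profile path can be read from profiles.ini with a line of sed. This is a sketch that builds a throwaway directory tree to stand in for a real ~/.mozilla-thunderbird - the profile name abc123.default is made up:

```shell
# Simulate a Thunderbird config tree in a temp directory so the
# lookup can be followed safely; in real use $tb would be
# ~/.mozilla-thunderbird and the Path= value would be random.
tb=$(mktemp -d)
mkdir -p "$tb/abc123.default"
touch "$tb/abc123.default/training.dat"
printf '[Profile0]\nName=default\nIsRelative=1\nPath=abc123.default\n' \
    > "$tb/profiles.ini"
# Pull the Path= value out of profiles.ini, then build the full path.
profile=$(sed -n 's/^Path=//p' "$tb/profiles.ini")
echo "$tb/$profile/training.dat"
```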
If the two computers are networked, you can do this with scp --- scp -pr user@oldcomputer:.mozilla-thunderbird ~ ,,, Otherwise use sneakernet, with something like a USB flash drive, to transfer the files. In this case, you should tar the directory to preserve the file attributes; copying directly to a FAT filesystem, as used on most flash drives, will remove file ownerships and permissions. --- # on the old computer tar czf /media/usbdrive/thunderbird.tar.gz .mozilla-thunderbird # on the new computer tar xf /media/usbdrive/thunderbird.tar.gz ,,, Whichever method you use, do this before you run Thunderbird on the new system, otherwise it will create a .mozilla-thunderbird directory with new settings. Even if you are using the same user name on both systems, it is quite possible that the new distribution may allocate a different numeric ID to the user. You can fix this by running this in a root terminal --- chown -R myuser: ~myuser/.mozilla-thunderbird ,,, Back to the list ****** User account in Ubuntu being refused sudo access Q:: My main hard disk is partitioned as a dual boot system - Windows XP on one partition and Ubuntu on the other. I then added a second hard disk with three FAT32 partitions for data to be shared between the two systems. To use the second disk in Ubuntu, I have to use System > Admin > Disk Manager at each login to access the FAT32 partitions. Thinking that changing a few file permissions would give me immediate user access to them, I experimented with the group management tools (groupadd, chgrp, usermod, as in chapter 14 of Ubuntu Unleashed by Paul and Andrew Hudson, Sams, ISBN 0-672-32909-3). Unfortunately, the experiment failed for me. I can no longer use sudo - when I try, I get a message saying that my user name is not in the sudoers file and my attempt to use it will be reported to Big Brother. In addition, the items on the Ubuntu desktop System menus whose use requires root privileges (like Disk Manager and Apt-get) have disappeared. 
I can get root privileges by booting into Recovery Mode, but I do not know whether the problem can be overcome by editing /etc/sudoers with visudo. The explanation in Ubuntu Unleashed (p294-5) looks risky to me - I am getting too old for this sort of excitement. Perhaps I should reinstall Ubuntu, but that is a blunt instrument - it would be better to solve the problem with a few deft keystrokes. But what does a new installation do? Does it overwrite existing files and destroy the various configuration files? If it inherits the old config files I will be no further forward. A:: A new installation will overwrite your system configuration files - not only the ones you have inadvertently broken, but also the ones that are working perfectly - so it should be a last resort. The problem appears to be that your user is no longer a member of the admin group, probably due to incautious use of usermod, or that the admin group no longer has sudo rights. To check the latter, check that /etc/sudoers contains --- %admin ALL=(ALL) ALL ,,, and add it with visudo if not. To see whether you are in the admin group, run id or groups in a terminal, as your normal user. If admin is not listed, you need to boot in Recovery Mode and add yourself to the admin group with --- gpasswd -a username admin ,,, Unlike usermod, gpasswd adds groups to your user without affecting your existing groups. To make your FAT partitions available when you boot, you will need to add a line to /etc/fstab for each one. --- /dev/hdb1 /mnt/shared1 vfat uid=john,gid=john,umask=022 0 0 ,,, The first three fields should be clear: the device, mount point and filesystem type. The options field is the key: uid and gid set the user and group that will "own" the mounted filesystem, and umask sets the permissions (remember that FAT has no concept of owners or permissions). 
The umask is subtracted from 666 for files and 777 for directories, so this sets files to 644 and directories to 755 (writable by you, readable by everyone). This assumes that your username and group are the same, which is how Ubuntu usually sets things up. If you changed your primary group when fiddling with usermod and friends - id shows your primary group - change it back with --- usermod --gid groupname username ,,, substituting the appropriate values for the user and group names. Incidentally, the Big Brother reference only means that the attempt to use sudo will be recorded in the system log file for the administrator to read. You won't be forced to spend three months in a camera-infested house with a dozen strangers who become even more strange by the day, being gawped at on live TV. Back to the list ****** Read SMART signals from hard drives in Linux Q:: Would you be able to suggest a couple of programs that read the S.M.A.R.T. signals from hard drives? Keeping an eye on hard drive health is a good thing. I run FreeBSD, so the ability to run natively would be nice, but as you know FreeBSD is able to run most Linux programs directly. A:: The programs that you are looking for are in the smartmontools suite, and the instructions provided here for their operation apply to Linux as well as FreeBSD. The source is available from http://smartmontools.sourceforge.net and compiles on FreeBSD as well as Linux. The two programs are smartctl and smartd. Smartctl will run a number of tests on your drives, for example --- smartctl --all /dev/hda ,,, will show all S.M.A.R.T. information on the first drive (these tools are run with a drive name, not a partition name). Smartd is a daemon that will monitor your drives and report any problems to the system log, and mail you if you give it your address. S.M.A.R.T. - or Self-Monitoring, Analysis and Reporting Technology to give it its full name - can detect problems leading to failures and even report changes in temperature. 
The latter can cause excessive entries in the syslog as smartd reports every temperature fluctuation. Unfortunately, in the first few days of operation, I found this filled my daily logwatch mails with a lot of noise. Adding a suitable line to /etc/smartd.conf fixed this irritation, such as --- /dev/sda -d sat -a -I 194 -I 231 -I 9 -W 5 -m me@mydomain.com ,,, The -I 194 -I 231 -I 9 options stop smartd reporting every change in the temperature attributes, while -W 5 tells it to report only changes of five degrees or more, along with the minimum and maximum temperatures for the day. If changes of five degrees happen often, you have a potential problem, so this is a useful setting. The first part of the line, -d sat -a, specifies that this is a SATA drive and to run all tests. Back to the list ****** Create a separate home partition after installing Linux Q:: I've read several times about the benefits of a separate home partition, but I didn't initially set up my hard disk like that. Is there a way to create a new home partition after the fact or do I need to wipe out everything, reinstall, and restore my home directory to the re-partitioned drive? A:: It is possible, but care is needed. As always, back up first! The danger is of the process being interrupted by a power failure or other software crashing the computer. Working with in-use filesystems should be avoided too, so boot from a live CD distro such as Knoppix for this task. There are three stages: resize your root partition, create a new home partition in the freed space and move your data over. The process is a lot easier if you have plenty of free space; if your drive is nearly full, move some files to DVDs or an external disk. While it is possible to resize a partition from the command line with a combination of cfdisk (or fdisk) and appropriate resizing tools for your filesystem, it is easier with GParted or QTparted (on Ubuntu and Knoppix CDs respectively). The Knoppix live DVD has both. We use GParted here, but they work similarly. 
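Before you start resizing, it is worth checking how much space the root filesystem and the home directories actually use, so you know how far you can safely shrink the root partition. A quick check from a terminal (the paths here are the usual defaults):

```shell
# Show total, used and free space on the root filesystem
df -h /
# Show how much of that is the home directories, which will
# move to the new partition (ignore errors if /home is empty)
du -sh /home || true
```

The difference between the two figures tells you roughly how small the root partition can become once /home has been moved off it.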
Partitions cannot be resized when mounted, so if a mount point shows alongside the partition in GParted, right-click the partition and select Unmount. When resizing a partition, it is normally only possible to move the end point, so start by dragging the end of the root partition to the left until it is the size you want. It is considered good practice not to fill a partition beyond 80 per cent - filling that last 20 per cent can lead to fragmentation - but remember that you are going to remove the contents of /home after resizing, so you can probably pull that slider as far over as you want. Now create a partition in the vacated space and press Apply to start the process. This is the risky part, so it is best to leave the computer undisturbed while it completes. Once you have created the new partition, it is safest to reboot, into the live disc again, to make sure the kernel knows about the new partition layout. Now you need to move the files from the old home directory to the new partition. This must be done as the root user, so open a terminal and type su to become root - or sudo -i if using an Ubuntu disc. If the root partition is on /dev/hda1 and the new partition on /dev/hda2, the commands to make the copy are --- mkdir -p /mnt/{root,home} mount /dev/hda1 /mnt/root mount /dev/hda2 /mnt/home mv /mnt/root/home/* /mnt/home/ ,,, or you can replace the last command with --- rsync -a /mnt/root/home/ /mnt/home/ rm -fr /mnt/root/home/* ,,, which is a little slower but preserves all timestamps and can be interrupted and resumed if necessary. Finally, add an entry for the new partition to /etc/fstab, with /home as its mount point, so that it is mounted automatically when you boot. This assumes you have a relatively small amount of data in the home directory, otherwise you will not be able to shrink the root partition as much as you need and will have a lot of wasted space after moving the files. Back to the list ****** Set custom boot options when remastering ISO Q:: I've tried building my own distro. 
I tried it in my HP DV9308nr laptop, added the BCM43xx firmware to the ISO image I was creating, burned it, booted from it, and everything worked as expected. How do I edit the menu.lst file under /boot/grub/ in the ISO image so that it adds the following entries to the kernel at boot time: --- vga=791 pnpbios=off irqpoll nomsi nomce ,,, This laptop will lock up without these settings. A:: Ubuntu doesn't use Grub for its CDs; it uses isolinux instead. The configuration file for this is isolinux/isolinux.cfg on the CD. The syntax of the file will be reasonably familiar to anyone who has used Lilo. When you have completed the second step of the instructions on page 45, you will have a copy of the CD's contents in the ubuntu-rebuild directory. Now edit ubuntu-rebuild/isolinux/isolinux.cfg and find the first menu entry, which looks like this for Ubuntu 7.10 --- LABEL live menu label ^Start or install Ubuntu kernel /casper/vmlinuz append file=/cdrom/preseed/ubuntu.seed boot=casper initrd=/casper/initrd.gz quiet splash -- ,,, The append line contains the parameters that are passed to the kernel when booting, just like the append line in lilo.conf, so add your options to this line, before the final "--", and save the file. Now continue with the rest of the tutorial and your remastered disc will boot with the options you need. Any more problems? Let us know! Back to the list ****** Wireless adapter configuration Q:: I currently use an Alcatel modem (SpeedTouch 330) given to me by my ISP and I simply can't get it to work with Linux (I'm using Gentoo Linux with a 2.6.9 kernel). I decided to build a wireless home network, so I'm planning to buy a D-Link DSL-G604T wireless ADSL router. My first question: is this hardware fully compatible with Linux or do I need to install any drivers as with the Alcatel modem? I have a notebook, and I want to connect it to the network through a wireless cardbus adapter. 
I want a card supported natively by a kernel module, something that could work straight out of the box with Knoppix, for example. After some reading I found that it should be a card with the Prism 2/2.5/3 chipset, but I'm confused and don't know how to find a manufacturer/vendor of a popular cheap card with that chipset. So my second question is, can you suggest a good adaptor using the Prism chipset? Thank you for your help. A:: The wireless router from D-Link will make the connection to your ISP over the DSL circuit, so you will not need any PPPoE or PPPoA support on the Linux system. You can connect straight into the router using Ethernet, and use DHCP against the router for an internal address. Essentially, everything will be offloaded on to the router, making your Linux system a plain old workstation, rather than a router. As far as PCMCIA wireless adaptors are concerned, you can check out www.linux-wlan.org/ or www.prism54.org/. You'll find that Prism54 identifies devices supporting the 802.11g standard as well as 802.11b for faster connectivity. Prism2 and Prism54 cards are identified by Knoppix at boot time, and will enable you to access the internet easily without having to install a large number of supporting packages. Back to the list ****** Putting Windows at the top of the list with Lilo Q:: Firstly, a big thank you and apology to all who helped me out with my scanner woes some time back. Sorry for not replying; I took some time out to start a family. Family fine, scanner still only works if I boot into Windows first. Anyhow, I need a hand configuring my dual boot setup (XP Service Pack 2 on one drive, Mandrake 10 on the other). I currently use a boot disk to boot Linux, but would like to dispense with that and have a boot menu where XP is at the top as a default boot (for the non-Linux users and gamers in the house) and have Linux second for me. I need this to be as painless as possible for the Windows users (and for me)! 
Any advice would be appreciated, thanks in advance. A:: The easiest way to do this is with Lilo. Grub is good too, but Lilo is a little easier to understand. You can do this the hard way by creating a lilo.conf file by hand, or since you are using Mandrake, you can use the Control Centre to do it for you. Go to the Boot > Boot Loader section. Select the Lilo bootloader and install it on your main hard drive (probably hda). Mandrake will usually detect the presence of Windows. Highlight the Windows entry, select Modify, then set it as the default. Back to the list ****** USB hard drive being given different device names every boot Q:: I have a Seagate 250GB USB hard drive attached to my Dell Inspiron laptop running Ubuntu 7.04. There is no problem with the drive being recognised, but every time I boot up, it appears at a different place in /dev - anything from /dev/sdb to /dev/sdh and all points in between to date. This makes automounting of the various partitions I've defined on the drive awkward, to say the least. Is there any way I can get this drive to always appear as a specific device in /dev so that I can add it into /etc/fstab and save myself the frustration of having to find out where Ubuntu thinks the drive is today and then manually mounting all the partitions? Or is there some way of entering the partitions into /etc/fstab so that, whatever their /dev location, they will always mount at the same point in /media? Ubuntu does mount the partitions on startup, but as most of them are 40GB it's not too helpful having them called '40GB disc-1', '40GB disc-2' etc. I need them to automount in more descriptive locations. A:: It is possible to tell fstab to mount partitions by their unique ID instead of the traditional /dev/sd? node. If you look in /dev/disk/by-id, you will see symbolic links to the various partitions using their IDs. These IDs stay the same and are updated to point to the correct node when udev (the Linux device handler) starts. 
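As a sketch, listing that directory with ls -l /dev/disk/by-id shows link names built from the drive model and partition number, and one of those paths can go straight into the first field of an fstab line. The ID string and mount point below are invented for illustration; yours will differ:

```
/dev/disk/by-id/usb-Seagate_Ext_HDD-0:0-part1  /media/extdisk1  vfat  umask=022  0 0
```

The rest of the line works exactly as for a normal device node.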
You could use these in /etc/fstab instead of the traditional node names, but these are really intended for use with fixed disks. The advantage for fixed disks is that if you add another partition, causing the numbering to change, /etc/fstab will still work, and Ubuntu uses this system by default. When using removable disks, it is best if you avoid putting them in /etc/fstab at all. In that case the Gnome Volume Manager will automount them when they are detected, at boot or any time later. However, this leads to the problem you have, so we use another udev feature to ensure the device names do not change. Udev supports rules that can, among other things, specify the /dev node name for a particular device or filesystem. This persistent naming means that you can also give your devices meaningful names, like /dev/extdisk for your external disk and /dev/camera for, well, I'm sure you get the idea. To use these rules, you need to be able to uniquely identify the device, and the udevinfo command helps here. If your disk is currently /dev/sda, run --- udevinfo --attribute-walk --path /block/sda | less ,,, If you want to query a specific partition, say sda1, use /block/sda/sda1 for the path. Looking through the output you will see lines like --- ATTRS{model}=="model_code" ATTRS{vendor}=="manufacturer" ,,, that can be used to identify the device. Then edit or create the file /etc/udev/rules.d/10-local.rules and add the line --- BUS=="usb", KERNEL=="sd*", ATTRS{model}=="model_code", ATTRS{vendor}=="manufacturer", NAME:="extdisk%n", SYMLINK+="%k" ,,, The items using == are tests; all of them must match for the rule to apply. The first two make sure you are dealing with a USB disk device or partition, the next two are copied directly from the udevinfo output. The next item sets the node name to /dev/extdisk for the disk and /dev/extdiskN for the Nth partition. The final part creates symlinks to the devices using the original names that would have been used without this rule. 
This rule applies to the disk and any partitions on it, but you can also match individual partitions, which is useful for things like cameras, MP3 players or memory cards in your card reader that appear as a single partition. For example --- BUS=="usb", KERNEL=="sd?[0-9]", ATTRS{model}=="model_code", ATTRS{vendor}=="manufacturer", NAME:="camera", SYMLINK+="%k" ,,, In this case only partitions are matched, by the pattern sd?[0-9], which means sd followed by any single character then a single digit, and the partition appears as /dev/camera instead of /dev/camera1. There is plenty of useful information on writing udev rules at www.reactivated.net/udevrules.php. You may still find the Gnome Volume Manager mounts with names like /dev/disk-n. The easiest solution to this is to give the partitions labels when you create them, then the volume manager will use these. To name an ext3 partition when you create it, use --- mke2fs -j -L Volume_Name /dev/sda1 ,,, and to add or change a volume name on an existing filesystem --- tune2fs -L Volume_Name /dev/sda1 ,,, Back to the list ****** How well does Linux support multi-core CPUs? Q:: With the low cost of dual- and quad-core processors, I am interested in how Linux will be able to access and use this technology now and in the near future. I have done some looking, and while some information is available, I was unable to determine how it would benefit me on a Linux desktop computer. Also, is there one particular distro or version of Linux better at implementing the new processors than other versions? A:: Linux has supported multiple processors (or processor cores) for some years: I am writing this on a Core2Duo system (although Kate doesn't really need all the power of both cores to check even my spelling). Most distros support multiple processors out of the box; the important criterion is that the kernel has been built with SMP (Symmetric Multiprocessing) support. 
Some distros enable this in their generic kernel, others have a separate SMP-enabled kernel that is used when the installer detects more than one CPU core. There are a number of easy checks for SMP support. Does the output from --- cat /proc/cpuinfo ,,, show a figure for "cpu cores"? Try running top in a terminal and pressing 1 (that's the number one, not the letter L), which should toggle the CPU usage display between showing an overall figure and individual CPU loads. Running --- zgrep SMP /proc/config.gz ,,, will show CONFIG_SMP=y if SMP support is enabled, providing your kernel has support for /proc/config.gz, otherwise you'll see an error that tells you nothing. SMP support in the kernel means improved multitasking, as programs can be run on whichever processor has the least load, but most individual programs still use only one processor. However, some CPU-intensive programs can split their load between more than one CPU core. For example, the -threads option for ffmpeg will split a video transcoding task across the given number of processors for a substantial reduction in processing time. Software compilation can also benefit, as most programs consist of a lot of smaller files that need to be compiled, and this can be done in parallel. By setting the MAKEOPTS environment variable in your profile to -j3 for a dual-core system, or -j5 for quad core, programs will usually compile much faster. Note that the number used here is usually one more than the number of CPUs (-j2 is used for single processor systems) to ensure the processors are loaded most effectively. If you are going to load the system with intensive tasks like software compilation or video transcoding, try using the nice command (see the Quick Reference box on page 108) to keep your desktop responsive while this is going on. Back to the list ****** Computer crashing at random points during Linux installation Q:: I have a Compaq Deskpro 650MHz with 128MB RAM and a 10GB hard disk. 
I am running Mandrake 10.0 on it, which is quite out of date, so I have been trying to upgrade to - or install from scratch - Mandriva 2007. The computer seems to crash every time, mostly during the install. Sometimes I can install, but when I try to boot the system everything locks up too. I get a lot of text on the screen that makes me think it's something to do with the kernel. I have tried other distros - Fedora, Ubuntu, Slackware and Gentoo - and they all have the same problem. Is there a way to install newer distros (even Mandrake 10.1 doesn't work) and make them work? A:: The fact that you experience problems with various distros, and they don't always appear at the same point, indicates that this may be a hardware problem. The most common of these are faulty memory, overheating and a substandard PSU. Before you try any of these, I recommend you try running the installers in text mode. The graphical installers require a lot of memory, far more than is needed to run the installed distro, because they load everything into a huge ramdisk so it is still available when you change CDs. A text mode install reduces the memory requirements substantially. If that fails, you need to check for the previously mentioned problems. Testing memory is easy: most distro install discs include memtest as an option on the boot menu. Select this and let it run for as long as possible. You need to let it make at least two passes; running overnight is best. Overheating can be caused by failing fans or a build-up of crud (sorry to use such a technical term here) in the fans, heat sinks or vents. Use one of those cans of compressed air to blow everything clear. A failing power supply can also cause random reboots and lockups, but the only way to test it is to try a replacement. If none of that works, make a note of the last dozen or so lines of text on the screen when it fails. While some of these messages come from the kernel, many are from the various programs that are run when a system boots. 
Knowing the content of these messages will help to pinpoint the problem. If the messages are not consistent, you almost certainly have a hardware fault. Back to the list ****** Fedora only able to play non-encrypted DVDs Q:: My work takes me to many locations worldwide and when I have some free time, like many other people, I enjoy watching DVD movies. I am running Fedora 7 on my T60 Thinkpad and installed VLC and all other plugins from the Livna repository, but I am only able to watch non-encrypted DVDs. I do have libdvdcss installed but am still unable to watch the majority of films that come my way. I have heard that the Matshita DVD drive will not play encrypted discs. Is this true? If not, is there a software fix that could cure this problem? A:: Your drive is locked to a specific region and will only play encrypted discs from that region. The DVD Consortium divided the world into six regions and a drive can only play discs of the region it is set to. The regions are numbered in the following way: --- 1 North America (USA and Canada) 2 Europe, Middle East, South Africa and Japan 3 Southeast Asia, Taiwan, Korea 4 Latin America, Australia, New Zealand 5 Former Soviet Union (Russia, Ukraine, etc), rest of Africa, India 6 China ,,, It is possible to change the region of most drives by using something like regionset (http://linvdr.org/projects/regionset), but there is a catch. Most drives only allow you to change the region setting four or five times, then it stays locked to the last set region, which isn't much use if you are travelling from region to region and buying DVDs on your travels. With a laptop, you don't even have the option of swapping the drive for a region-free one. Some drives can have their firmware updated to either allow unlimited region changes or even work with all regions. However, this is not true for all Matshita drives, so you may be out of luck. There is information on many different drives at www.rpc1.org. 
If it is not possible to update the firmware on your drive (some drives use encrypted firmware) you are left with the option of ripping your DVDs to another format on another computer, one with a region-free DVD drive. This would work well, and saves you taking the DVDs on your travels as you can store the ripped files on your hard drive, but it only works with DVDs you have before you travel; you still cannot play DVDs you buy abroad. dvd::rip (www.exit1.org/dvdrip) is a good choice for this. Back to the list ****** Installing OpenOffice.org: Java runtime disappeared Q:: I have installed OpenOffice.org 2.3. While most parts seem to run without any issues, when I try to open my Base files they demand an updated Java runtime, which seems to have disappeared. I have tried downloading and reinstalling Java from Sun but I still can't access my database files - I get a message asking me to locate Java via Tools > Options, but I can't locate any Java installation. A:: OpenOffice.org needs Java for the database and help systems, but the rest of the suite will work without it. The best way to install Java is through your distro's package manager. The download from Sun will work, but keeping as much software as possible inside the package management system reduces dependency and conflict problems later on. Getting OpenOffice.org to work with Java can be less than intuitive. Select Options from the Tools menu and pick OpenOffice.org > Java. Enable Java by ticking the 'Use a Java runtime environment' box then wait for it to scan your system for suitable Java installations. This can take from a few seconds to a minute, depending on the speed of your system. This is Java, so it seems appropriate to make a cup of coffee while you are waiting. Eventually, you should see a list of your Java Runtime Environment (JRE) installations; select one and press OK. You need to quit and restart OpenOffice.org for the change to take effect. 
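If you are not sure whether a JRE is installed at all, or where it lives, a quick check from a terminal helps before you go hunting through dialogs. This is a sketch; the path it prints will vary from distro to distro:

```shell
# Find out whether a 'java' binary is on the PATH, and where the
# real installation lives (resolving any chain of symlinks)
if command -v java >/dev/null 2>&1; then
    JAVA_BIN=$(command -v java)
    echo "java found at: $(readlink -f "$JAVA_BIN")"
else
    JAVA_BIN=""
    echo "no java on the PATH - install it with your package manager"
fi
```

The resolved path is also the one to feed to OpenOffice.org if you end up having to point it at the JRE manually.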
If OpenOffice.org fails to find your JRE, you can click the Add button and give the path to it manually. This should be something like /opt/sun-jdk-1.6.0.03/jre/bin. If you have installed through your distro's package manager (not the OpenOffice.org package manager), you can generally use that to view the contents of a package, which will tell you where it is installed. Back to the list ****** Long filenames truncated when mounting USB stick in Linux Q:: I use a USB stick in a Windows computer and I download files on to the stick from the internet. Then I plug this USB stick into a Linux computer at home and I find that all the files with long names have been renamed using '~'. For example, mylongfile3.mp3 will be renamed mylong~1.mp3. If you could help me with this I would appreciate it, as I haven't got a clue what may be wrong. I primarily use grml.org, a Debian-based distro, but I have found this problem also happens in other distros like Knoppix (Debian-based too). A:: This appears to be a problem with the options used to mount the USB stick's filesystem. The default filesystem on these devices is usually FAT16 or FAT32, neither of which supports long filenames directly, so they use a kludge (yes, I know, it's shocking that a Windows system contains kludges) to map the short names you are seeing to the correct long ones. The vfat filesystem in the Linux kernel handles this, but the msdos filesystem does not. Run the mount command in a terminal and you should see a line like this for your USB stick --- /dev/sda1 on /media/usbstick type msdos (rw) ,,, If the type is shown as msdos, you have found the root of the problem; now you need to make sure your stick is mounted correctly. If you are using an entry in /etc/fstab to mount this, change the filesystem type, the third item on the line, from msdos to either vfat or auto. If you are using your distribution's automounting system, then the filesystem type should be correctly identified. 
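As a sketch, a corrected fstab entry would look something like this - the device name and mount point are examples; the only part that actually fixes the long-filename problem is the vfat in the third field:

```
/dev/sda1  /media/usbstick  vfat  rw,user,noauto  0 0
```

With the user and noauto options, the stick is not mounted at boot but any user can mount it on demand.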
Does this happen with every USB stick you have tried, or only this one? If it is only this one, you could have some filesystem corruption that is causing the mount command to misidentify the filesystem. There are ways to work around it, but the best option would be to reformat the stick to get rid of the corruption (or even replace the stick if that does not fix it). I was unable to reproduce this problem with grml.org, so I suspect you may have a corrupted, or broken, USB stick. The flash memory used in these devices has a limited lifetime; it may simply be worn out. Back to the list ****** Change BIOS settings to boot from USB Q:: I have a Pentium 3, 866MHz laptop with a 20GB hard disk and Mandriva 2005LE booting with Grub. I wish to install Ubuntu 7.04 on an external USB hard disk, making the machine dual boot. I do realise that access to a USB2-enabled HD will be rather slow, since the above machine has USB1.1 ports. My problem is that the laptop's BIOS cannot make a USB HD the first booting device. How do I modify the bootloader to access the above USB HD, or use a floppy to install a bootloader? I find it difficult to find appropriate info on this subject. A:: You can use your existing bootloader to boot from the USB disk. It is best to use the Ubuntu alternate install CD for this purpose, as it gives more control over installation options. Install Ubuntu on the external disk in the usual way, but do not let it write the bootloader to the MBR as it usually does. Instead, have it install Grub to the root partition, which will probably be /dev/sdb1 with the internal disk being /dev/sda. Then boot into Mandriva as normal and mount your Ubuntu partition. Mandriva 2005 still uses /dev/hd* for IDE hard disks, so your USB drive will now be /dev/sda. Open a root terminal and run --- mkdir /mnt/tmp mount /dev/sda1 /mnt/tmp YourFavouriteEditor /boot/grub/menu.lst /mnt/tmp/boot/grub/menu.lst ,,, to load both bootloader configurations into your favourite text editor. 
You now have two choices: the quick and easy method, or the slicker-looking-but-slightly-more-fiddly-to-set-up method. For quick and easy, add this to the end of the Mandriva menu.lst file. --- title Ubuntu menu root (hd1,0) chainloader +1 ,,, This adds an extra menu entry that runs the Ubuntu bootloader. To do everything from the one boot menu, you need to copy the Ubuntu bootloader menu entries to the Mandriva menu. Look for the main title option, the one that appears first in the menu. It will consist of three lines: the title to show on the boot menu, the kernel to load with a number of options, and the initrd to use. Copy these lines to the Mandriva menu.lst and change the title to something appropriate. Then change the device paths to reflect the correct locations. Grub counts from zero, so (hd1,0) is the first partition (0) on the second disk. You can either include the path in each of the kernel and initrd lines or (my personal preference) as a separate root item. Your menu.lst entry will look something like this example (for Ubuntu 7.10): --- title Ubuntu 7.10 root (hd1,0) kernel /boot/vmlinuz-2.6.22-14 root=/dev/sda1 quiet splash initrd /boot/initrd.img-2.6.22-14 ,,, You could also copy the other Ubuntu menu entries in the same way, or leave the Ubuntu menu option as above for the rare occasions you will need anything but the default. If you get a File not found error when selecting this menu, you have probably got the paths wrong. The drive ordering depends on BIOS settings, and even then, Grub cannot boot from USB on all systems. To check the correct path for the kernel, run grub from a root terminal (or you can press C at the Grub menu) to enter the Grub shell. Then run --- find /boot/vmlinuz-2.6.22-14 ,,, The command should show the correct path for the kernel, including the drive numbers. Make sure this matches up with the root command in your menu. 
Back to the list ****** Recreate Ubuntu partitions after installing Gentoo Q:: I have a system with Ubuntu on a 500GB hard disk (/dev/sda). I had been using Gentoo, which I like better, so I created a new partition and installed Gentoo. I used the original boot and swap partitions. Now running GParted, I seem to have three partitions: /dev/sda1 (198.70GB) for /boot, /dev/sda3 (261.26GB) for root and /dev/sda2 (extended, 5.80GB) with swap on extended /dev/sda5. Gentoo is working fine but I lost Ubuntu on /dev/sda1. I believe I didn't format when I repartitioned with GParted, but there wasn't anything critical on the disk so I can reinstall. Can I repartition /dev/sda1, creating another partition for Ubuntu or whatever, and leave a small boot partition? What size should the boot partition be, and can I use the same boot partition for both distros, Gentoo and whatever? Also, can I use the same swap partition for both distros? Finally, will the new partition scheme change the naming scheme in the Gentoo Grub loader? A:: GParted creates a new filesystem on any new partition it creates, so you formatted sda1 without realising it. You can resize sda1, but you'll have to create another primary partition, which will mean you won't be able to create any further partitions, except by shrinking your swap partition. The x86 partition table is limited to four partitions. We get more by making one of them an extended partition and creating logical partitions within that. Resizing sda1 will leave free space outside of the extended partition. You could do this, and let Ubuntu install in the space you free up, and it's perfectly safe to share a swap partition between the two distros. However, sharing /boot is more complex and generally not a good idea, especially as Ubuntu defaults to using no separate boot partition. A separate /boot is something of an anachronism, dating back to limited PC BIOSes that could only handle small disks, so the boot files had to be at the start of the disk. 
Nowadays, this is no longer applicable and I don't use a boot partition on anything. You have a couple of options. The best in the long term, but the most work, is to back up your Gentoo install, using either a second drive or a pile of DVDs, and repartition the disk from scratch. As you're not running Windows, there's no need for a primary partition. By making every partition logical, you make the whole disk an extended partition to give yourself more flexibility. Then restore Gentoo from the backup before installing Ubuntu. The alternative is to shrink sda1 to around 50MB and install Ubuntu in the space this frees up. In this case, Grub's root for Gentoo will still be (hd0,0) but you'll need to change the root parameter passed to the kernel, probably to /dev/sda4. Either way, installing Ubuntu will set Grub to use its configuration: this should pick up your Gentoo installation and add it to the menu. If not, you can restore your Gentoo bootloader and add an entry to the menu to boot Ubuntu. To do this, boot from the Gentoo Live CD and run the following commands: --- mount /dev/sda4 /mnt/gentoo # assuming your Gentoo installation is now on /dev/sda4 mount /dev/sda1 /mnt/gentoo/boot mount --bind /dev /mnt/gentoo/dev chroot /mnt/gentoo /bin/bash grub root (hd0,0) setup (hd0) quit exit ,,, You'll probably recognise most of this from the Gentoo handbook - all you're doing is chrooting into your Gentoo system and running Grub to set it to boot from your Gentoo /boot partition. Once Gentoo is running, you'll need to edit /boot/grub/menu.lst to add an entry to boot Ubuntu (copy it from the existing Ubuntu menu.lst file). Back to the list ****** Distro upgrade recommendations Q:: It's that time of the year again when new versions hit the download mirrors. This is all well and good but can the yearly upgrade be avoided? Each year do I have to spend a day downloading and installing a new version of my distro? 
Are there not distros out there that are constantly updated so all you have to do is run a command and the whole system is upgraded to the latest releases? When I say that, I don't mean every six months or whenever. I mean, when I run the command it updates to the latest release at that time. This is the only thing that annoys me about Linux, the fact I have to reinstall every year just so I can keep up to date. I then have to configure everything again. I'd just like to be able to install and configure once and then upgrade from there. Is this at all possible? A:: It is indeed possible to perform 'rolling upgrades' with some distros. Probably the most complete example of this is Gentoo, which doesn't actually have versions (only the installer discs have versions). The distro is constantly updated as new versions of the various software packages are released, with the result that a machine that was first installed five years ago is as up to date as one installed last week. If you don't have the patience or inclination to learn Gentoo, Debian and its derivatives can be updated to a new release version without reinstalling. If you're running the testing or unstable version of Debian, you'll get new packages as they're released, whereas most distros only release security updates. Even if you don't use a bleeding edge version, when a new version is released, all you need to do is edit /etc/apt/sources.list and change all references to the current distro label - such as feisty if using Ubuntu or etch for Debian proper - to the new label (like gutsy or lenny, respectively). Then run: --- sudo apt-get update && sudo apt-get dist-upgrade && sudo apt-get dist-upgrade ,,, That's not a mistake: you do the dist-upgrade step twice as some packages may not update on the first run, and if they do then the second run will do nothing anyway. 
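If you're comfortable with sed, the sources.list edit is a one-liner. As a sketch (the release names are only examples), you can preview the substitution on a sample line before letting it loose on the real file:

```shell
# Preview the release-name swap on a sample sources.list line
echo "deb http://gb.archive.ubuntu.com/ubuntu feisty main restricted" | sed 's/feisty/gutsy/g'
# Once happy with the result, back up and edit the real file:
# sudo cp /etc/apt/sources.list /etc/apt/sources.list.bak
# sudo sed -i 's/feisty/gutsy/g' /etc/apt/sources.list
```

The last two commands are left commented out for you to run by hand, as they modify a system file.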
If you're using Ubuntu, there's an easier way to update to the latest release, by selecting Administration > Update Manager from the System menu. When a new distro version is available, it will tell you and give you the option to upgrade. Whichever of these methods you choose, the upgrade may still be a lengthy task, but you won't have to restore your settings and software choices afterwards, and the computer should be usable while the upgrade is running. It's also possible to perform an upgrade from the install discs of the likes of Mandriva and SUSE, although people report varying degrees of success with this and conventional wisdom is to back up your data and settings then do a clean install with the RPM-based distros; a separate home partition is a definite benefit here. Back to the list ****** How to dual boot Ubuntu and Windows Vista Q:: I have Vista Home Ultimate installed and would like to dual boot with Ubuntu 7.10. An article on apcmag.com makes it appear straightforward, and up to the point of shrinking the Vista partition this is the case. I now have 62.76MB unallocated space. Rebooting with the Ubuntu CD starts off OK - it loads Linux and starts loading Ubuntu - but after a short time the screen becomes corrupted and then goes blank, with the CD stopping. The only way I can recover the situation is to turn off the machine and turn it on again, removing the CD before it boots. The answer to the problem is, no doubt, very simple. Can you enlighten me? A:: Your problem is not unique: the same happened to me when installing on an oldish Fujitsu Siemens laptop. However, other systems also displayed a blank screen for a while before proceeding with the boot, so you may just need to leave it a minute or two. Either way, this is unconnected with your use of Vista. Note, though, that there's no need to resize your Windows partition: the Ubuntu installer can take care of that - although you should first defragment it in Windows.
If waiting doesn't work for you, you need to use the Alternate CD, which uses a text installer without the Live CD desktop. Although the installer is text based, the installed system still has a full desktop. This is sometimes needed when the Live CD cannot handle your graphics hardware. The text installer also allows a few more choices than the Live CD and generally has a higher success rate. Back to the list ****** Vector Linux in headless mode Q:: I'm setting up an old computer to act as a web server internally on my network, to use for my web design business. I'm planning to try Vector Linux 3.2, which I have on disc, for the OS and Abyss web server. I would like to run it without a monitor or keyboard, accessing it from my main computer over the network. Here's my question: once I've installed the software, how do I set it up to log in automatically on booting up so I don't need to see what's happening? And what user name should I use for logging in? Do I need a user specifically added for the purpose or can I use the same user as my other machines? A:: Web servers will generally start through the scripts in /etc/init.d/ at boot time, so it's unnecessary for you to log in as anyone to run the service. When you install the package providing the web server process, it will create any required users necessary to run the service, so you won't need to do anything. Once installed, web services should start when the system is rebooted, allowing the box to be restarted without any user involvement. Back to the list ****** Need a Linux distribution that recognises PCI modems Q:: I work for the University of Western Ontario in Canada and I have access to a lot of old and broken computers that I rebuild and give away to charities. I've used Windows in the past but it's a bit of a pain because of copyrights and its inability to recognise hardware. 
It would be a lot easier to use a Linux operating system; the problem is, I need a Linux system that's able to recognise PCI modems without having to search the net for drivers. It would be nice if you could tell me which operating system would be the best for using PCI modems, or which PCI modems would be the easiest to use so that I can make these computers internet capable. A:: PCI modems are a sticky topic. Most internal modems are controller-less (also known as winmodems): they are missing the digital signal processor of a normal modem, offloading the work to the computer's CPU instead. Hardware modems, those that have a full controller, generally appear as a serial device so you use them exactly the same as an external modem, with no need for special drivers. These are your best choice, but you need to check the specifications carefully to make sure a particular modem is a true hardware device. Some controller-less modems will work with Linux. You'll find a list of supported models at www.linmodems.org along with a program that will identify your modem's chipset and indicate the correct driver. It's not possible for us to recommend specific makes and models, because manufacturers sometimes change the chipset of a modem without changing the model designation. This doesn't matter to Windows users (apart from the potential drop in performance) because the driver disc supplied with the modem takes care of the differences. Anything with an Intel chipset is likely to be a good bet. They supply drivers from their website at http://developer.intel.com/design/modems/support/drivers.htm. The distro is less important as most of them have excellent hardware detection nowadays. A more important consideration is how well it runs on older hardware. If you find a modem that works, you should get further models from the same supplier, preferably from the same stock to ensure they're all compatible.
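One quick check worth knowing: internal PCI modems normally show up in lspci with a 'Communication controller' class, so you can see what chipset a machine has before hunting for drivers. The device line below is an invented sample for illustration; on a real machine, run the lspci half of the pipeline on its own:

```shell
# Filter lspci output for likely modem devices (sample line piped in for illustration)
echo "02:0a.0 Communication controller: Conexant HSF 56k Data/Fax Modem" | grep -i -E 'modem|communication'
# On a real machine:
# lspci | grep -i -E 'modem|communication'
```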
You may also find eBay a good source for older, and often more compatible, modems as so many people no longer use them. Alternatively, if any of our readers have compatible modems they no longer use, we would be happy to put them in touch with you so they can help with your efforts. Back to the list ****** Transfer home directory to another machine with settings intact Q:: I've installed Mandriva 2008 on my new machine and the network is working so I can see the Home directory on the old machine. I now want to transfer everything to the new machine, including hidden files and config files. I'm wondering about the best way to do this. Should I back up the /home directory and reinstate it on the new machine? Or just copy files over, which may take some time? A:: If networking is up and you have the SSH service running on the old computer, the best way to do this is with rsync. Open a terminal on the new computer, as your normal user, and run:
---
rsync -ax olduser@oldmachine:~/ ~/
,,,
The trailing slashes are important. If the username is the same on both computers, you can omit the olduser@ part. This will copy everything in your home directory, including hidden files, and set all permissions and timestamps correctly. Even if you use different usernames, or the same usernames with different numeric IDs, rsync runs as your local user, so the copied files will end up owned by the correct user after copying. You shouldn't be logged in on the old computer when doing this, as there's the possibility that some files may be changed between the initial directory scan and the copying, which would cause rsync to exit with an error. If you have a desktop running on the old computer, close it down before running rsync. Copying everything verbatim may not be a good idea, because you may overwrite newer config files with older versions.
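If you want to see what those trailing slashes do before copying a whole home directory, a throwaway local test makes it obvious (this assumes rsync is installed; the /tmp paths are just scratch directories):

```shell
# Demonstrate rsync's trailing-slash behaviour with scratch directories
mkdir -p /tmp/demo-src /tmp/demo-dst
touch /tmp/demo-src/.hiddenrc /tmp/demo-src/notes.txt
rsync -ax /tmp/demo-src/ /tmp/demo-dst/
ls -A /tmp/demo-dst
# .hiddenrc and notes.txt land directly in demo-dst; drop the trailing slash
# from the source path and you'd get /tmp/demo-dst/demo-src/ instead
```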
An alternative is to create a directory in your home and copy to that, with:
---
mkdir ~/oldhome
rsync -ax olduser@oldmachine:~/ ~/oldhome
,,,
Then you can use your favourite file manager to copy over the files and directories you need and delete the older config files and the general cruft we all accumulate in our home directories over time. This is one of those jobs that's quicker to do from the command line, but if you prefer to use a GUI, there are alternatives. If you use KDE, the default desktop on Mandriva, open a Konqueror window, select one of the Window > Split View options, use one pane to create and show the oldhome directory and type fish://olduser@oldmachine into the location bar for the other pane. This should display olduser's home directory, from which you can select and copy files and directories. Select View > Show Hidden Files to display all your configuration files, then press Ctrl+A to select everything, followed by F7 to copy it. Midnight Commander is another file manager that can use the FISH protocol to access files on a remote computer. If you regularly need to synchronise directories on two computers, I strongly recommend Unison (www.cis.upenn.edu/~bcpierce/unison), which I use to keep my desktop and laptop in sync, but it's overkill for a one-off copy. Back to the list ****** Automated file copying Q:: I want a folder in my home directory that's automatically updated from the same folder on my USB stick when I insert it. If the above description is poor then all I can compare it to is having a Microsoft Windows briefcase. I'm using KDE on an Ubuntu installation. A:: KDE has a feature that will run an autorun script when a removable device is mounted. Unfortunately, it's aimed at optical media and doesn't (yet) work with USB devices. The good news is that, with a little scripting, you can do this directly from udev. The first step is to set up a udev rule for your device.
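For reference, a rule file along these lines could go in /etc/udev/rules.d/ - the filename and match keys here are assumptions, so check them against your own device (udevinfo will show you the values to match on):

```
# /etc/udev/rules.d/99-synchome.rules (hypothetical name)
# Match the first partition of a USB disk carrying a FAT filesystem
KERNEL=="sd?1", SUBSYSTEMS=="usb", ENV{ID_FS_TYPE}=="vfat"
```

The RUN key described next is then appended to this line.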
This can be specific to one USB key or general enough to match anything with a FAT filesystem (USB keys almost always use this by default). Add this to the end of the rule:
---
RUN="/usr/local/bin/synchome &"
,,,
The trailing & to detach the process running the script is important - udev stops processing events while the rule is being processed and you don't want your file copying to block the whole of udev. Now you just need a script to do the dirty work of copying the files, using rsync, after making sure that certain conditions are met. Save this as /usr/local/bin/synchome (or whatever name you gave the script in the udev rule) and make it executable with chmod +x /usr/local/bin/synchome.
---
#!/bin/bash
MYUSER="foo"
[[ ${ACTION} == "add" ]] || exit
if ! mount | grep -q ${DEVNAME}
then
  MOUNTPOINT="/media/$(basename ${DEVNAME})"
  mkdir -p ${MOUNTPOINT}
  mount ${DEVNAME} ${MOUNTPOINT} -o uid=${MYUSER}
  MOUNTED=1
fi
if [[ -d "${MOUNTPOINT}/myfiles" ]] && [[ -d "/home/${MYUSER}/myfiles" ]]
then
  su - ${MYUSER} -c "rsync -ax ${MOUNTPOINT}/myfiles/ /home/${MYUSER}/myfiles/"
fi
[[ "${MOUNTED}" ]] && umount ${MOUNTPOINT}
,,,
The MYUSER line stores the name of the target user in a variable - an alternative approach would be to get the username from the name of the directory containing the files. Next we check that the rule has been processed because the device has been connected. Udev rules are run on addition and removal of a device, and the ACTION environment variable is set accordingly. KDE can be set up to automatically mount new devices, so the next seven lines check whether this has happened and mount it if not. Then we check for the presence of your special directory, called myfiles here, in both the USB stick and your home directory before running rsync to copy any new files from the stick. There are various other options you could use here, such as --delete to remove any files in ~/myfiles that are not on the stick.
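Before trusting --delete, it's worth seeing what it would remove. rsync's -n (dry run) flag, combined with -v, lists the planned actions without performing them; here's a sketch using scratch directories (assumes rsync is installed, and the paths are invented):

```shell
# Dry-run --delete on scratch directories to preview deletions
mkdir -p /tmp/stick/myfiles /tmp/homedir/myfiles
touch /tmp/stick/myfiles/new.txt /tmp/homedir/myfiles/stale.txt
rsync -axnv --delete /tmp/stick/myfiles/ /tmp/homedir/myfiles/
# the output lists 'deleting stale.txt', but nothing is actually removed
```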
You could also use Unison (www.cis.upenn.edu/~bcpierce/unison) instead of rsync for two-way synchronisation. We use su to run rsync as the user - not only is it safer to run as little as possible as root, but it also avoids any permission problems later when you could find files in your home directory owned by root. Finally, we unmount the USB stick, but only if it was mounted by the script earlier. If KDE automounted it, we should let KDE take care of unmounting it. This is only one example: there are many possible variations. Read the rsync man page for options that may be useful but be wary of anything that could delete files, especially when it is run automatically in the background. Back to the list ****** Choose the right graphics card and CPU for Linux Q:: Finally, after years of saving, I'm going to get a new desktop or laptop computer. The only problem is that I want it to be able to run Linux so I have to get a lot of facts right before taking the jump. Using an old desktop with Debian for the past two years, I didn't previously notice that the following are 'problems' at all. Processor - Should I download the 32- or 64-bit version? AMD64 is quite clear, but is Intel Core 2 Duo 64-bit? I understand from the web that some users are using the 32-bit versions of Linux with them. I'm confused! Graphics - Nvidia or Intel, and which model? Most HP computers that I looked at use the Nvidia Go 6100, Nvidia Go 6150 and Intel X3100 chipsets. Are these graphics supported in Linux? Assuming that I've got a new computer running Debian (KDE) and an old computer running Debian (Fluxbox), how do I see the other computer that's connected to the same network? How do I share files between these two computers since they're logged in as different users? A:: The more choices you have, the more difficult the decision! Many laptops now have 64-bit processors - the Intel Core 2 Duo CPUs are 64-bit while the Core Duos are 32-bit.
It looks like you'll be using this machine for a while, so I'd certainly recommend a 64-bit system. It is true that some people run a 32-bit OS on 64-bit hardware. The main reason is that they want to use closed-source software that's not available in 64-bit - but most 32-bit software will run on a 64-bit system anyway. Even 32-bit browser plugins can be made to run in a 64-bit browser with nspluginwrapper (www.gibix.net/projects/nspluginwrapper), so I'd advise you to use a 64-bit distro on 64-bit hardware unless you have a compelling reason to do otherwise. Both the Nvidia and Intel graphics chipsets work well. Nvidia gives better 3D performance but requires proprietary drivers. The Intel chipset gives 3D acceleration with the standard X.org drivers. The Intel wireless chips 'just work' with Linux in my experience too. With other chipsets, check carefully first: wireless hardware compatibility is one of the main problem areas with laptops nowadays. With a desktop, these aren't major issues because just about everything is interchangeable. Laptops are less flexible and it's worth checking www.linux-laptop.net for compatibility. If both computers are connected to the internet via the same router, the router's admin page may show the connected computers, particularly if it's acting as a DHCP server. Transferring files between the computers can be done by mounting shared drives with NFS or Samba, or by using scp to copy. As you're using KDE, the easiest way is to type:
---
fish://username:password@ipaddress/path/to/directory
,,,
into a Konqueror window's location bar to view the contents of that directory and be able to drop files to copy them. You only need SSH running on the other computer for this to work. Back to the list ****** Edimax ADSL2+ modem not working in Ubuntu Q:: I installed Ubuntu some months ago on a new Compaq Presario that already had Vista installed. However, I couldn't access the internet. It was suggested a router would be the answer.
I tried an Edimax ADSL2+ in wired mode but still got no response. I learned how to check the status of my internet connection at the command line by entering:
---
/sbin/ifconfig -a
,,,
It returned the following:
---
eth0      Link encap:Ethernet  HWaddr 00:1A:92:B5:69:A1
          inet6 addr: fe80::21a:92ff:feb5:69a1/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:38 errors:0 dropped:0 overruns:0 frame:0
          TX packets:340 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:4014 (3.9 KB)  TX bytes:37621 (36.7 KB)
          Interrupt:19 Base address:0x2000
,,,
Please advise me as to what I can do next. A:: The advice to switch from a USB modem to a proper Ethernet modem was good: they work more efficiently, plus almost all of them include a router, so you can connect more than one computer to the internet. The output from ifconfig only covers your connection to the router, which acts as a bridge between your local network and the internet. The connection between the modem and the internet is separate. Unfortunately, it shows that you're not connected to the router - there is no inet addr: field showing your IP address. Make sure you're using DHCP to configure your network by going to System > Administration > Network, selecting your Ethernet interface and choosing Properties. Set Automatic Configuration (DHCP) in here, which should handle everything and enable you to connect to the router's admin page by typing http://192.168.2.1 in your browser. You'll be asked for the login and password, which will be admin and 1234 if you haven't changed them. If you can load the router's admin page, your network is correctly configured. Now you need to set up the internet side of the router. Click the Quick Start link to run the setup wizard in your browser and input the details given by your ISP. If you're unsure about any of these settings, you may need to contact your ISP for clarification. If the wizard fails for any reason, you have two choices.
One is to use a Windows computer to run the wizard (once the router is set up, it will work with any OS); the other is to use the Internet section of the Interface Setup tab. Once you've successfully configured the router, you can confidently connect other computers - up to four at a time - and have them use your internet connection. As long as they're set to use DHCP, which most distros do by default, no further setting up is required. Back to the list ****** Report the used RAM slots in a Linux machine Q:: Is there a program that could report the used RAM slots in a machine? What's in them from the point of view of RAM per chip, bus speed, and so on? Crucial has an ActiveX utility for IE that does the job, but I'm not running Windows, and I don't want to install a browser add-on that has that much access to my hardware. A:: I know of two programs that will display this information: lshw (http://ezix.org/project/wiki/HardwareLiSter) is a terminal program that shows plenty of information about your computer system, including specifics of the memory. This is the sort of information it gives:
---
*-memory
     description: System Memory
     physical id: 40
     slot: System board or motherboard
     size: 4GB
   *-bank:0
        description: DIMM SDRAM Synchronous 800 MHz (1.2 ns)
        product: PartNum0
        vendor: Manufacturer0
        physical id: 0
        serial: SerNum0
        slot: DIMM0
        size: 1GB
        width: 64 bits
        clock: 800MHz (1.2ns)
   *-bank:1
,,,
For a graphical alternative, try HardInfo (http://hardinfo.berlios.de). This shows a different selection of information about your computer. Between the two, they should tell you all you need. Incidentally, the output from these programs can be extremely useful to us when we're trying to answer your questions. Back to the list ****** Send an email when IP address changes Q:: I have access to my PC at work using the corporate VPN. For years the IP address of my work PC hasn't changed, but lately it seems that it changes once or twice a week.
Is there any simple way that my PC (Kubuntu 7.10) could send me an email reporting the new address when it changes? A:: Before you do anything else, check with your employer that this is acceptable to them. Having access to your work network from home is convenient; losing your job over it most certainly is not! I don't know of a specific utility for doing this but it is easy to do with this short script, which you can run via Cron.
---
#!/bin/bash
IPADDRESS=$(/sbin/ifconfig eth0 | sed -n 's/.*inet addr:\([^ ]*\).*/\1/p')
if [[ "${IPADDRESS}" != $(cat ~/.current_ip 2>/dev/null) ]]
then
  echo "Your new IP address is ${IPADDRESS}" | mail -s "IP address change" you@your.mail
  echo ${IPADDRESS} >| ~/.current_ip
fi
,,,
The real business is done in the second line, which uses a regular expression to extract the current IP address from the output of ifconfig. This is compared with the address stored on a previous run; we use ~/.current_ip to store the address here, but any location that is writable by you and unlikely to be touched by anyone else will do. If the address is different, it sends you an email using the mail command and writes the new address into .current_ip. The mail command is the standard program for sending emails from the command line or scripts, but it does require that you have a local SMTP server installed. If mail and its dependencies are not already installed on your computer, it will be easier to use SendEmail, installable from Synaptic. This can use any SMTP server. Replace the mail command above with
---
sendEmail -s smtp.work.com -f you@work.com -t you@home.co.uk -u "IP address change" -q
,,,
The first argument is the address of the mail server at work (you can remove the -q (quiet) option for testing). Back to the list ****** Get packages and dependencies for installing offline Q:: I don't currently have a net connection to my Ubuntu box, but I do have access to a fast connection at college.
Is there a tool I can use on the box that's connected to the net (a Windows PC) to grab dependencies? What I'm looking for is a way to enter the name of the software that I want to install and get back a list of dependencies that I can run in a Windows app to fetch the files and any others that the next one depends on. A:: There are at least two ways to do this. The quick and easy option is to use the 'Generate package download script' option in Synaptic. Mark the packages you want to install, then select this option from the File menu, which will generate a shell script that you can run to download the packages. Then you transfer the packages to your Ubuntu box and either put them in /var/cache/apt/archives or use the 'Add downloaded packages' menu option in Synaptic to install them. The main disadvantage of this method is that the script requires wget, so you need this installed on the computer you use for the downloading. An alternative is to use apt-get from the command line with the --print-uris option. Apt-get will automatically try to install all dependencies, and the --print-uris option outputs the URIs of all the files it needs. 
You can use grep and cut to extract the URIs from the output with
---
apt-get --print-uris --yes install pkgspec | grep ^\' | cut -d\' -f2 >downloads.list
,,,
For example, running this with 'postgrey' instead of the word 'pkgspec' creates a file containing
---
http://security.ubuntu.com/ubuntu/pool/universe/libn/libnet-dns-perl/libnet-dns-perl_0.59-1build1.1_i386.deb
http://gb.archive.ubuntu.com/ubuntu/pool/universe/libb/libberkeleydb-perl/libberkeleydb-perl_0.31-1_i386.deb
http://gb.archive.ubuntu.com/ubuntu/pool/main/libd/libdigest-sha1-perl/libdigest-sha1-perl_2.11-1build1_i386.deb
http://gb.archive.ubuntu.com/ubuntu/pool/main/libd/libdigest-hmac-perl-dfsg/libdigest-hmac-perl_1.01-5_all.deb
http://gb.archive.ubuntu.com/ubuntu/pool/universe/libi/libio-multiplex-perl/libio-multiplex-perl_1.08-3_all.deb
http://gb.archive.ubuntu.com/ubuntu/pool/universe/libn/libnet-cidr-perl/libnet-cidr-perl_0.11-1_all.deb
http://gb.archive.ubuntu.com/ubuntu/pool/universe/libn/libnet-ip-perl/libnet-ip-perl_1.25-2_all.deb
http://gb.archive.ubuntu.com/ubuntu/pool/universe/libn/libnet-server-perl/libnet-server-perl_0.94-1_all.deb
http://gb.archive.ubuntu.com/ubuntu/pool/universe/p/postgrey/postgrey_1.27-4_all.deb
,,,
As you can see, this includes the dependencies as well as the program itself. Copy downloads.list to a USB flash drive to take it to the computer with the faster internet connection. Many FTP programs and download managers will read a list of download URLs from a file, such as
---
wget --input-file downloads.list
,,,
You can give more than one package name as 'pkgspec'. However, you do need to run apt-get update from time to time to keep up to date. If you are using another connection because your home computer is on a slow dial-up, there's no problem, as apt-get update doesn't download much. If you have no internet access at all, you can run apt-get --print-uris update and download the files elsewhere, then copy, unpack and rename the Sources files in /var/lib/apt/lists.
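The grep and cut stages work because each line that apt-get --print-uris emits wraps the URI in single quotes, followed by the filename, size and a checksum. You can watch the pipeline operate on a fabricated sample line:

```shell
# Each --print-uris line looks like: 'URI' filename size checksum
# grep keeps lines starting with a quote; cut pulls out the quoted URI
echo "'http://gb.archive.ubuntu.com/ubuntu/pool/universe/p/postgrey/postgrey_1.27-4_all.deb' postgrey_1.27-4_all.deb 34510 0a1b2c3d" | grep ^\' | cut -d\' -f2
```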
Back to the list ****** Ubuntu settings disappearing after every reboot Q:: I have installed Ubuntu and find the setup very impressive. However, when I start the PC from scratch - having shut down completely - all my settings in Ubuntu have disappeared and I have to reinstall and reset everything. I lose Thunderbird account settings, all updates and even saved documents. This problem does not occur with the SUSE 10.3 installation that I have on the same box. Can you advise please on what to do to avoid this recurring? At present I am not shutting down for fear of losing all. A:: Are you using the same username and home directory for both Ubuntu and SUSE? If so, this problem is caused by the two distros allocating different numeric user IDs for that user. As a result, the SUSE user can write to the directory but the Ubuntu user cannot. While it is possible to change the user ID in one distro to match the other, so that the same user in both distros can write to the directory, this brings its own set of problems. The distros probably have different versions of some programs, which can cause conflicts when saving settings. A newer version of a program can usually read the settings saved by an older version, but the reverse is not always true. The safest way to work with two (or more) distros is to have a separate user directory for each one. You can use the same user name, but you need to change the path to the home directory. For example, you could have the username 'pearse' on both distros, but make the user directories /home/pearse-suse and /home/pearse-ubuntu. To change the home directory in SUSE, start Yast and go to User Management. Select your user and click on the Edit button, go to the Details tab and change the home directory to, say, pearse-suse. Make sure the Move to New Location box is ticked and press Accept to make the changes.
If you do this while logged in as your user, you may find yourself unable to log out, so press Ctrl-Alt-Backspace to kill and restart X, then log in again. It is also possible to do this by logging out of the desktop and editing the passwd file in a root console, by running vipw, changing the home directory for your user and saving the file. Then do
---
mv /home/pearse /home/pearse-suse
,,,
to move the directory. On the Ubuntu half of your system, you can use a root console in the same way or use the System > Administration > Users and Groups menu item to run the user manager. You should open a terminal before you do this, then go into the user manager, select your user and press Properties, go to the Advanced tab and change the home directory. You should also change the user's main group to 'users' to match the settings in SUSE. Ubuntu doesn't have the option to rename the home directory, so go to the terminal you opened earlier (you cannot open one after you've changed the home directory), and run
---
sudo mv /home/pearse /home/pearse-ubuntu
,,,
then log out and back in again. The GUI user management tools on both distros should indicate the numeric UID for the user. Both distros default to 1000 for the first user, but if they don't match, you should change the one for Ubuntu. You will need to do this from a root console, while not logged in to the desktop, with
---
sudo -i
vipw   # make the change
chown -R pearse: ~pearse
,,,
The first line gives you root access; vipw works as above (never edit /etc/passwd directly); and the last line applies the changed values to your home directory and all its contents. Finally, make sure everything in both home directories is owned by the same user by running the following as root:
---
chown -R pearse: /home/pearse-suse /home/pearse-ubuntu
,,,
Of course, this now means that you have two separate home directories, with separate mail folders and other documents split between the two.
While the common username and UID mean you can access both home directories from either distro, it would be easier to make common files available to both, which can be done with symlinks. From a terminal in SUSE, run commands like these
---
ln -s ../pearse-suse/Mail ../pearse-ubuntu/Mail
ln -s ../pearse-suse/documents ../pearse-ubuntu/documents
ln -s ../pearse-suse/photos ../pearse-ubuntu/photos
,,,
to ensure that the same directories (and therefore the same data) are available to both distros. Do not do this for any directories that contain settings information, as an upgrade of a software package on one distro could break things for one or both distros. That's the main reason for keeping the two home directories separate. Back to the list ****** Hard drive not being recognised by Mepis and PCLinuxOS Q:: I have just bought a Compaq Desktop SR5280CF and tried to use a Live CD. I tried Mepis, PCLinuxOS and OpenSUSE; all three are able to boot on the computer, but the hard disk is not detected. Neither Konqueror nor KDiskFree shows hda1 or sda1. This computer comes with Vista and seems to be running OK. Is there anything wrong with this computer or Linux, or is it something I've missed? A:: We had the same problem with a new laptop once, and it was down to the SATA settings in the BIOS. Our computer lets you choose whether or not to use AHCI for the SATA interface (Advanced Host Controller Interface is a standard for SATA communication). We have to disable AHCI on the rare occasions that we need to boot Windows (what was that about Linux's inferior hardware support?) and if we forget to re-enable this, Knoppix can't find the hard disk. Ubuntu and Gentoo see the disk, but others do not. Check your BIOS settings and try changing anything related to SATA protocols. Or you can try a different Live CD - Ubuntu worked fine with our system. Back to the list ****** Getting started with chattr Q:: Does anybody use the chattr program at all?
I was looking at the program's man page recently and it appears to be a very interesting utility. I'm a little surprised that it doesn't seem to get much mention, as some of the attribute settings it can create would, on the face of it, appear to be very useful, such as the immutable attribute, flag 'i'. Would that be useful from a security point of view to protect key system programs from being modified (renamed, deleted, overwritten, etc)? And the 's' flag looks good, which I take to mean 'secure deletion', as it apparently zeros a file's blocks and writes them back to disk when the file is deleted. Would that be a quick solution for people who have posted, inquiring about how to remove a file in total? The program even has an 'undeletion' attribute, flag 'u', though it should be pointed out that it's not implemented as yet. That will be a total gem if it ever happens. In fact chattr seems like a gem of a utility overall. I suppose it could make updating files a bit bothersome, if a person forgot they had set the immutable attribute on a file. But then, it would just be a matter of resetting it as root. I guess a trojan or the like could do the same if it suspected it might be set, but at least it would force the trojan to engage in extra activity and possibly make itself known as a result. The man page suggests that chattr is designed for the ext2 FS, but it does mention ext3, and talks about a journaling flag 'j'. I'm not sure how much that would limit its broader usability. I would think it would just be one of the many concerns associated with any development project. I've been wondering what people think of chattr, and, if they've used it, whether it just caused them too much bother to be bothered. To me, it seems like it's just been hiding down there in /usr/bin, waiting to be put to work. A:: The chattr utility is indeed very useful, as it allows many of the extended filesystem attribute flags for files to be modified easily from the command line. 
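A minimal sketch of the immutable flag in action (it needs root and an ext2/ext3 filesystem; the scratch file is just an example):

```shell
# Create a scratch file and set the immutable attribute on it
f=$(mktemp /var/tmp/chattr-demo.XXXXXX)
if chattr +i "$f" 2>/dev/null; then      # needs root and an ext2/3 filesystem
    lsattr "$f"                          # the 'i' flag now shows in the listing
    rm -f "$f" 2>/dev/null || echo "delete blocked by immutable flag"
    chattr -i "$f"                       # clear the flag so the file can be removed
fi
rm -f "$f"
```

Note that attributes are set with '+' and cleared with '-', so it is chattr +i to protect a file and chattr -i to release it again.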
chattr should work with many filesystems, although it should be noted that ext2 and ext3 are essentially the same, with the latter being wrapped by the jbd layer in the kernel. Using chattr is very useful in situations where you don't want users to delete files from their home directory, or make modifications to them, even when they can run chmod over them. It can also prove handy when multiple administrators are working on a system and you want to stop them making changes that other admins don't want. While anyone with root access can remove chattr settings, it does make them think, and hopefully stops people before they make silly mistakes. It's worth remembering that many filesystems now also have extended ACL capabilities, configurable from the command line with the current 2.6 kernel releases, to allow for very granular file access controls. Back to the list ****** Installed programs in Ubuntu - not in the menus Q:: I've installed Kmoon and KWeather in both Dapper Drake and now in Gutsy, but neither runs. They don't show up in the menus, and when I try to start them from Konsole, I get "command not found". I used to love both these applets. Locate finds the files; but perhaps they are not installed in the correct directories? I'm a beginner and don't know how to evaluate (or correct) the situation if that's what is wrong. A:: Neither of these is a standalone program; they are both applets for KDE's Kicker taskbar. This means you cannot 'run' them, either from a menu item or a shell. You can see exactly what files a package installs using dpkg. For example --- dpkg -L kmoon ,,, shows all the files in the kmoon package. A quick way to find any programs installed by a package is to search for paths containing 'bin/', since programs are generally installed into one of /bin, /sbin, /usr/bin or /usr/sbin, like this: --- dpkg -L kmoon | grep bin/ ,,, In this case it gives no output, because no executable programs are installed. 
To use these programs, right-click on an empty area of the Kicker panel and select 'Add Applet to Panel...'. This opens a window detailing all the Kicker applets installed on your system, from which you can select those that you want to add. If there is no blank space on your panel, click on the drag bar to the left of an existing applet and go to the panel menu from there. Back to the list ****** Find out which fonts OpenOffice.org is using Q:: I often work on documents and spreadsheets from work that use the Microsoft fonts Times New Roman and Arial, which I do not have. I have downloaded the Liberation fonts from Red Hat to use as replacements. Since I installed them in my /usr/share/fonts/truetype folder, the fonts are available in OpenOffice.org. How can I find out which font OpenOffice.org is using instead of the Microsoft font, and change that to the Liberation equivalent? Also, how can I get Firefox to use the Liberation fonts when Microsoft fonts are specified for a web page? I have read that OpenOffice.org creates smaller files than those made by Microsoft Office, but I am finding this far from the case. OpenOffice.org saves a blank .doc file at 65KB and a blank .xls spreadsheet at 95KB. What is going on here? I am using version 2.2 of OpenOffice.org. A:: The tarball from Red Hat contains only the fonts - you'll need a little more to use them in place of the Microsoft fonts. How you do this depends on whether the fonts are installed globally, as you did in /usr/share/fonts, or only for a user, in ~/.fonts. In the former case, add this code to /etc/fonts/local.conf, otherwise add it to ~/.fonts/local.conf. In either case, create the file if it does not exist. 
--- <?xml version="1.0"?> <!DOCTYPE fontconfig SYSTEM "fonts.dtd"> <fontconfig> <match target="pattern"> <test qual="any" name="family"><string>Times New Roman</string></test> <edit name="family" mode="assign"><string>Liberation Serif</string></edit> </match> <match target="pattern"> <test qual="any" name="family"><string>Arial</string></test> <edit name="family" mode="assign"><string>Liberation Sans</string></edit> </match> <match target="pattern"> <test qual="any" name="family"><string>Courier</string></test> <edit name="family" mode="assign"><string>Liberation Mono</string></edit> </match> </fontconfig> ,,, This means any program that tries to load one of the Microsoft fonts will use the Liberation alternative, so all web pages will look much like their authors intended, even when you don't have any Microsoft fonts installed. It also means that you don't need to change any documents you've created in OpenOffice.org, as they can reference the Microsoft fonts and work equally well when you send them to Windows users. You can get more information on this, and the above code, at http://uwstopia.nl/blog/2007/05/free-your-fonts. In our experience, OpenOffice.org saves much smaller files than Word and Excel, but an empty file is not a typical example. Word saves a lot of redundant information in a file, so loading a document into OpenOffice.org and saving it as a .doc file will usually reduce the size, but the greatest reduction comes when you use Open Document files, as these are compressed with zip. Back to the list ****** How to exclude a port from a Linux firewall Q:: I received this answer from my friend to a question I asked. "Have you set up an appropriate Iptables rule? You need something like --- iptables -A INPUT -i eth0 -p tcp --syn -m state --state NEW -j NFQUEUE ,,, If the box is a remote system, you should exclude the SSH port or whatever you use to connect to it." 
I don't get how to "exclude an SSH port", and I can't ask him again, so I would appreciate any help you can give. A:: Without knowing the question you asked your friend, this is difficult to answer with any degree of precision, so here is a more general response on the use of Iptables. The Linux Netfilter software that provides firewalling is built into the kernel, and Iptables is the user program that sets up the firewall rules for it - the one you have given here filters incoming packets on eth0 that are requesting a new TCP connection (--syn). Iptables is very powerful, but also very low-level. This means you can give the firewall specific instructions and it will do exactly what you tell it to, irrespective of whether that was really what you wanted it to do. As a result, using Iptables without some detailed knowledge of it is quite dangerous. You could lock yourself out of a computer, or you could set up rules that you believe protect the system when they actually let all manner of potentially dangerous traffic through. To set up Iptables safely, you need one of two things: either a good book or tutorial on the subject and the time to read and understand it, or a graphical front-end. There are a number of good front-ends available, which all perform basically the same function - they provide an easy interface to tell the software what you want to filter, then generate the Iptables rules. The available packages include Firewall Builder (www.fwbuilder.org), Guarddog (www.simonzone.com/software/guarddog) and Shoreline Firewall (www.shorewall.net). The first is a GTK program that fits in well with a Gnome or Xfce desktop, while Guarddog is a KDE program. They offer similar features, but with a different approach. Shoreline Firewall is a script-based program that is harder to set up the first time, but provides more flexibility. Any of these is capable of protecting your system, so try them and see which you like best. 
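For illustration, here is the kind of minimal rule set such a front-end might generate to block new inbound TCP connections while keeping a remote login reachable - a sketch only, in iptables-restore format, assuming the interface is eth0 and SSH is on its standard port 22:

```
*filter
:INPUT DROP [0:0]
# Always allow loopback and established traffic
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
# Let new SSH connections in before the default DROP policy applies
-A INPUT -i eth0 -p tcp --dport 22 -m state --state NEW -j ACCEPT
COMMIT
```

Load such a file with iptables-restore; the point is that the SSH ACCEPT rule must be in place before anything that drops new connections.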
The comment about the SSH port is because the rule you were given blocks all TCP traffic originating from outside. This is fine if you are not running any sort of server, but if this is a machine you access remotely via SSH, you would also lock yourself out. The advice is to add a rule that allows SSH traffic - that is, traffic to port 22, the SSH port - to pass, so that you can still connect remotely. This is easily done by setting an option in any of the programs mentioned previously. Of course, if this computer is not a remote server, this advice is irrelevant. If you are dealing with a remote machine, running a GUI program may not be possible. However, these programs all generate standard Iptables rule sets, so you can run them on a local box, test the rules to ensure they do what you need, then transfer the rules to the remote computer. Back to the list ****** Uninstall programs compiled from source code Q:: I usually uninstall apps by using apt-get remove in Ubuntu. Recently, I have started compiling programs from source, but then have problems uninstalling them. I tried to use make uninstall, but to do this you always need the Makefile present. So do I always have to save my Makefiles to uninstall apps that I've installed by compiling them? That isn't really practical. A:: You can recreate the Makefile by unpacking the source tarball again and running ./configure in its directory. Note that if you passed any options to ./configure the first time you unpacked the tarball, you must give the same options again. Then you can run make uninstall from inside the source directory. However, there is a better solution: one that integrates self-compiled programs with your package manager, so everything can be uninstalled (or updated) in the same way. Install CheckInstall, and use it instead of make install. 
The installation process then becomes --- ./configure make sudo checkinstall --type=debian --install=yes ,,, As you can see, the call to CheckInstall replaces make install. This runs make install, then builds a Debian package and installs it with dpkg. The result is that the software you've just compiled is not only installed but visible in Synaptic, from where you can uninstall it. There are many more options for CheckInstall - it's not limited to packages that use make install - all of which are described in the documentation, but this is enough to get you going. Back to the list ****** Create a floppy disk to force booting from USB Q:: I want to load Linux on to my USB 2.0 external hard drive and use it to boot my laptop. Trouble is, my laptop is an old IBM Thinkpad, so I think I will have to use a floppy disk to force it to boot from the external drive. I'll need to create this setup on my desktop, because the CD drive doesn't work. How do I set up the floppy disk to force booting from USB? Is booting a full-sized distro rather than a lightweight distro from a USB port any different to set up? A:: Not all external USB hard disk enclosures are bootable: work through the following instructions to find out if yours is. Booting from a floppy and passing the boot process to a hard disk is not that different from doing the same with a pen drive. The easiest way is to install as usual, by booting from the distro CD/DVD and installing to the USB drive. When you get to the bootloader section, have it install to the MBR (Master Boot Record) of the external drive, not the internal one. Recent kernels use the SCSI layer for all hard disks, so your internal disk will be /dev/sda and the external drive /dev/sdb. We tried this with PCLinuxOS; it has a specific option for installing to USB drives. Once you have installed in this way, the distro will boot on any PC that supports booting from USB hard drives. Otherwise, you need to create a floppy disk containing the Grub bootloader. 
You can get a pre-made one from http://tinyurl.com/2f62dt. Download pdlfloppy.img.gz and write it to a floppy disk with --- gzip -dc pdlfloppy.img.gz | dd of=/dev/fd0 ,,, This floppy is set up to boot a Pendrivelinux installation, so edit the configuration file at /mnt/floppy/boot/grub/menu.lst to match the corresponding file on the USB hard drive. For PCLinuxOS, the first menu entry looks like: --- title linux kernel (hd0,0)/boot/vmlinuz BOOT_IMAGE=linux root=/dev/sdb1 acpi=on resume=/dev/sdb5 splash=silent vga=788 initrd (hd0,0)/boot/initrd.img ,,, The references to /dev/sdb are probably correct unless your BIOS makes the USB device the first in the chain (sda) when you boot from it, but the Grub partition labels all refer to the first partition on the first disk, so change these to (hd1,0). Now boot from the floppy and select your new menu entry. If you get a 'file not found' error, press C to reach the Grub console and type --- find /boot/vmlinuz ,,, This will give you the number of the partition containing your kernel, and this is what you need in the kernel line of the menu. Highlight the menu entry and press E to edit, then do the same on the kernel line. Change the path and press B to boot the changed entry. If this works, make the changes permanent in the menu.lst file. Another approach is to add an entry like this to menu.lst: --- title External drive root (hd1,0) chainloader +1 ,,, This simply runs the bootloader on your external drive - useful if the drive has multiple distros. Back to the list ****** Server monitoring Q:: I have three Ubuntu servers all running different services (Apache, MySQL, FTP, etc). These computers do not have very reliable hardware, so I was wondering if there is any open source software out there that can monitor multiple servers. I would prefer to get the output in a web page, so I could access it from my PDA via the internet. Will I need to hand-code it or is there anything out there ready-made? 
A:: There are a number of programs that will do what you want, with varying degrees of sophistication. At the harder end of this range is Nagios (www.nagios.org), but something a little less complicated should be more than adequate for your needs. Monit (www.tildeslash.com/monit) is mainly intended for monitoring programs on the machine running it, though it can watch remote servers too. It's generally a good idea to run it on a different machine from the one you're monitoring; otherwise a problem on the server could also bring down the monitor, leaving you with no warning that anything had happened. Monit can be told which services to test and what to do when they fail, so you don't have to rely on remembering to check a web page to see that something is wrong - Monit can send you an email. Even better, it can execute an external program or restart the service. The latter option is intended for local services, but you could make the restart command --- ssh remote.server /etc/init.d/service restart ,,, provided you've set up key-based SSH on the remote server so that this can run without pausing for a password. Other possible external actions would be to use the xsend script from xmpppy (http://xmpppy.sourceforge.net) to send an instant message to your PDA, alerting you immediately, or to send an email via an email-to-SMS gateway to alert you with a text message. It all depends on how urgently you need to know when a problem arises, and which way is most likely to reach you first. Here is an extract from a working config file --- set mailserver mail.example.com set alert me@example.com set httpd port 2812 and allow admin:monit check host slartibartfast with address 192.168.13.27 if failed icmp type echo count 3 with timeout 3 seconds then alert if failed port 3306 protocol mysql with timeout 15 seconds then alert if failed url http://example.com then alert ,,, The first part covers global settings, including where to send email alerts and enabling the web interface. 
This allows connections from anywhere, but controlled by a login, so you can use your PDA wherever you are. The second block performs three tests on a remote host, and sends an email alert if any of them fails. If you want to have Monit restart services, you will need a separate block for each service, like this --- check host example.com with address 192.168.1.27 if failed port 3306 protocol mysql with timeout 15 seconds then exec "/usr/bin/ssh root@example.com /etc/init.d/mysql restart" ,,, The exec command sends an alert too, so you'll know that the service has failed and has been restarted. Monit also has a web interface, so you can check and reassure yourself that all is well from time to time. Monit can do a lot more than watch servers: it can check CPU load and disk space, or watch for changes to the contents or permissions of files or directories. The example configuration file covers most uses - simply uncomment the entries you want to use and edit them. Back to the list ****** Automounting with Udev Q:: I have a car PC with a USB port on the front of it, and I've loaded a slimmed-down version of PCLinuxOS on to it. What I would like to do is (at init level 3) have Udev automatically mount any USB mass storage device placed into the USB port at /media/removable. I've tried writing the following rule, but it doesn't work: --- SUBSYSTEMS=="usb", ATTRS{product}=="Mass Storage Device", SYMLINK:="removable", RUN+="/bin/mount /dev/removable /media/removable" ,,, Can you see anything wrong with this? I've read through some Udev tutorials, which have led me to the conclusion that this should work, but /dev/removable is never created. A:: The first step is to try the rule without the RUN command, to test whether it is even matching. Only when /dev/removable is being created and you can run the mount command manually should you add it to the rule. 
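As a sketch, the cut-down rule for that first test might look like this (same match keys as in the question; save it as a rules file, replug the device, and see whether /dev/removable appears):

```
# /etc/udev/rules.d/99-removable.rules - matching test only, no RUN command yet
SUBSYSTEMS=="usb", ATTRS{product}=="Mass Storage Device", SYMLINK:="removable"
```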
Udev is smart enough to notice changes to the rule files without a restart, so it's easy enough to keep the file in your text editor and tweak the rule while unplugging and replugging the device. When writing Udev rules, bear in mind that the attributes you match on must come from the same block of output from udevinfo. If you want to match any USB mass storage device (they may not all have the same product attribute), try --- SUBSYSTEMS=="scsi", KERNEL=="sd[a-h]1", SYMLINK:="removable", RUN+="/bin/mount /dev/removable /media/removable" ,,, This will mount the first partition of any USB storage device you connect. You don't need to create the /dev/removable symlink for a rule that mounts a device, though you may have other uses for it, so your rule could be simplified to --- SUBSYSTEMS=="scsi", KERNEL=="sd[a-h]1", RUN+="/bin/mount /dev/%k /media/removable" ,,, because %k contains the kernel's name for this device, such as sda1. If your system uses the SCSI layer for hard disks, this will also match your hard disk when you boot. The solution is to explicitly exclude the hard disk from the rule. Run udevinfo to get the drive's model attribute, then add something like this to the rule: --- ATTRS{model}!="superduper 500G" ,,, Most USB mass storage devices, especially pen drives and memory cards, are set up with a single partition, so this will match those. If you want to connect something like a USB hard drive with multiple partitions, you will need something a little cleverer, like --- SUBSYSTEMS=="scsi", KERNEL=="sd[a-h][0-9]", SYMLINK:="removable%n", RUN+="/usr/bin/pmount /dev/removable%n" ,,, This uses pmount instead of mount, which is more sophisticated and is used by most automounters. One of its advantages is that it only needs the device node as an argument, and it creates the mount point in /media automatically. Its counterpart, pumount, removes the mount point when unmounting, keeping /media clean. 
The %n in the above rule is replaced with the kernel number, so the third partition (sda3) would be mounted at /media/removable3. Back to the list ****** Get USB serial port replicator working under Linux Q:: I would love to junk Windows and run a pure Linux system. I have found equivalent software for everything I do and I know from the Live DVDs that my system will work. However I am studying for a Cisco CCNA qualification and use a serial port replicator connected to a USB port on my system. The serial end then connects to the console port on the Cisco Router via a serial-to-RJ11 crossover cable. Do you know how to get a USB serial port replicator working under Linux, and can you recommend a replacement for the Hyper Terminal and SolarWinds TFTP Server apps? A:: By port replicator, I take it you mean a converter. A port replicator usually connects a port to an output of the same kind, as used on laptop docking stations. There are a variety of USB-to-serial converters out there, most of which are supported by Linux. I have two myself, both bought from eBay and using different chipsets and drivers, but both working perfectly under Linux. The kernel has a number of modules to drive these devices, which should be automatically loaded when you connect them. Run this command as root, or prefix it with sudo if running Ubuntu, before connecting the device --- tail -f /var/log/messages ,,, When you connect the device, you should see something like this: --- usbcore: registered new interface driver usbserial drivers/usb/serial/usb-serial.c: USB Serial Driver core drivers/usb/serial/usb-serial.c: USB Serial support registered for ark3116 ark3116 5-2:1.0: ark3116 converter detected usb 5-2: ark3116 converter now attached to ttyUSB0 usbcore: registered new interface driver ark3116 ,,, Here you can see that this device uses the ark3116 module and has created a serial device at /dev/ttyUSB0. 
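Once the node exists, you can check it and set the line parameters before firing up a terminal program - a sketch only, where /dev/ttyUSB0 and the 9600 8N1 settings (typical for Cisco console ports) are assumptions:

```shell
# Configure the USB serial port for a console session, if the node is present
dev=/dev/ttyUSB0                 # assumed device node, as reported in the log above
if [ -c "$dev" ]; then
    # 9600 baud, 8 data bits, 1 stop bit, no parity (9600 8N1)
    stty -F "$dev" 9600 cs8 -cstopb -parenb
    status="configured $dev"
else
    status="no USB serial device found"
fi
echo "$status"
```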
You can see which USB serial drivers are available on your system with --- ls -1 /lib/modules/$(uname -r)/kernel/drivers/usb/serial ,,, If your device is not recognised, run the lsusb command to get its ID numbers and search for it at http://qbik.ch/usb/devices. It may be that you have an unsupported device (though that is unlikely), or that your kernel was not built with the appropriate modules, in which case you will have to recompile your kernel with the driver enabled. The standard serial terminal program for Linux is Minicom, which should be included with your distro. There are a number of TFTP servers available, including Atftp (ftp://ftp.mamalinux.com/pub/atftp) and NetKit TFTP (ftp://ftp.uk.linux.org/pub/linux/Networking/netkit), although I prefer to use the TFTP server in dnsmasq (www.thekelleys.org.uk/dnsmasq). Back to the list ****** Compiling problems - pkg-config search path Q:: I'm new to Linux and appreciate 'out of the box' solutions. For example, I use Slackware, and installed Filelight after reading the Install file and doing --- ./configure && make && su -c "make install" ,,, because the Install file made sense. Now I have tried to compile a program called NoteCase. The readme.txt file is confusing. It says, "Just unpack the archive contents and start the program. Windows users require GTK toolkit installation." Does "Just unpack the archive contents and start the program" refer to the Windows install or the Linux install? If it refers to the Linux install, then there are no executables to run in the extracted archive, notecase-1.7.4_src.tar.gz. Then there is a compiling option for Ubuntu only. Since the program is for Ubuntu running Gnome, would you consider, when you're writing HotPicks, letting users know if an application is for Gnome or KDE only? I tried running make and got this error message: --- /notecase-1.7.4# make ===> Compiling src/main.cpp Package gnome-vfs-2.0 was not found in the pkg-config search path. 
Perhaps you should add the directory containing 'gnome-vfs-2.0.pc' to the PKG_CONFIG_PATH environment variable ,,, A:: The Readme file for this application is rather confusing. It would appear that the same Readme is used for the source and binary archives, and the one you tried to use is the source one. While the instructions for compilation are for Ubuntu, this is not an exclusively Ubuntu program. Nor does the mention of Ubuntu automatically make it a Gnome program; Ubuntu is quite capable of running KDE programs too, and KDE itself for that matter. In fact, this is a GTK application, although it does require gnome-vfs to be installed. There are two possible reasons for the error you mention: either you do not have gnome-vfs installed, or you do, but pkg-config is not aware of it. In the latter case, setting PKG_CONFIG_PATH to include the correct directory before running make would fix the problem, like this: --- locate gnome-vfs-2.0.pc export PKG_CONFIG_PATH="directory from above command" make ,,, Alternatively, there are binary packages available. The DVD contains RPM and Debian packages, and there is a plain binary tarball on the NoteCase website, which you could unpack to the root directory of your filesystem with --- tar xf notecase-1.7.6.tar.gz -C / ,,, An alternative to downloading the binary package, and one that is useful when a binary tarball is not available but an RPM is, is to use the rpm2targz command to convert an RPM to a tarball. Note that you will have to resolve any dependencies yourself when using this approach, so it is best reserved for those occasions when the source is not available. Back to the list ****** What is the best size for the swap partition? Q:: First I would like to say that I'm new to the Linux world. I've read that the swap file should be twice the size of the machine's RAM but no more than 512MB. But in another guide I've seen "... a 2GB partition hda2 for the swap partition". I'm confused. Am I reading this right? 
Second, on a single machine can you run a Linux image under a Linux image? I know that there are other ways to do this, such as running VMware or VirtualBox, but I would like to know if you can do this. A:: This is one of those questions that will get you a dozen different answers from ten different "experts". The traditional advice was that swap should be twice the machine's physical memory, but that comes from the days when most machines had only 64-128MB of RAM. Some argue that with modern hardware having so much memory, swap is largely redundant. The opposite argument is that modern applications are capable of using that much memory, so swap is as relevant as ever. If you know you're going to be doing a lot of memory-intensive work, add plenty of swap. Losing a gigabyte or two of drive space is nothing compared with the inconvenience of your computer grinding to a halt because you have used up all the memory in a video editing session. Some systems use tmpfs for the /tmp directory, speeding up the system by keeping temporary files in memory. These are usually small, but some programs do put large files in /tmp, and having some disk space to handle an overflow is a good thing. It is also common practice to use the swap partition for suspend-to-disk storage, particularly on laptops. In this case, the swap partition has to be at least as large as your physical RAM. On balance, I would still use one to two times the physical RAM for swap space, especially as hard drive sizes have grown even more quickly than RAM capacity. While it is possible to run a Linux distro within a window on your existing distro - after all, this can be done in Windows - I know of no distros specifically aimed at this. This is probably because there are so many options for running virtual machines that can run any distro, so creating a distro specifically for this task would be rather pointless (but if anyone knows of such a distro/app, please let us know). 
With a choice of VMware, VirtualBox, Qemu and Xen, you already have plenty of options for running Linux within Linux, and more securely, because a virtual machine protects the host from the guest's processes. Back to the list ****** File compatibility across distros Q:: If you have a document in, say, AbiWord, or another program, can it simply be read by someone who has another distribution of Linux? In other words, if soul A writes something in AbiWord on their Debian machine, and sends it to soul B, who has SUSE, or to soul C, who wears a Fedora, or soul D, who has a Yellow Dog, can these all simply be read, say by someone who has OOo, but not AbiWord? Further, if I were to have, say, SUSE and Debian, or another distribution, on the same machine, is it simple enough to transfer that file to the Linux on the other hard drive or partition? This may be a simple question, but I recall reading years ago that information can't necessarily be easily transferred from one Linux distribution to another. It doesn't make sense to me why this would be the case, but... Right now, I'm thinking that I would like to get a machine and put either Mandrake or SUSE on it. Though I'm always reading good things about Debian, too. I realise that the focus of some of these distributions is different, or at least it is my distinct impression that they are directed toward different purposes. Any advice? A:: Every distribution is essentially the same: the Linux kernel with the GNU tools. Debian, Mandrake, Fedora and everything else are pretty much the same thing under the hood, although they have different installers and installation styles. As long as the file format being used is portable across different applications, then the file can be read on any distribution, or even operating system, without much hassle. Sharing files between different distributions is done all the time without any problems. 
Indeed, many people share files between Linux and other platforms, such as BSD, Windows, Solaris and so forth, for remote file access and portability. Back to the list ****** Installing a program - missing Perl module Q:: I have tried to install Evince. After copying the application and compiling and installing the XML2 library, I tried to configure Evince and got the following error message: --- Checking for intltool >= 0.35.0....0.35.5 found checking for perl ..../usr/local/bin/perl checking for XML::Parser configuration error XML::Parser perl module is required for intltool. ,,, I then tried to load Perl from a DVD. Linux reported an I/O error and Windows reported a CRC failure. Identical messages were given when the DVD was inserted in another laptop. I tried downloading and installing the latest version of Perl but got the same error messages. Do you believe that this has any connection with the fact that I have problems reading the DVD? As other sections of the DVD are also unreadable, how do I get a replacement here in Australia? A:: Your DVD has read errors, probably caused by damage in transit. Others in Australia have reported similar problems, and we are trying to resolve this. Meanwhile, we'll send you a replacement disc. The second problem is that you are missing the Perl module XML::Parser. This is not part of the standard Perl package but an extra module, which is why reinstalling Perl didn't help. The first place to look for a Perl module, or anything else, is in your distro's software repositories. It is likely to be there, but if not you can install Perl modules from CPAN (the Comprehensive Perl Archive Network), which you can find at www.cpan.org. In most cases you can install directly from the command line without visiting the website. 
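Before installing anything, it's worth checking whether the module is already present - a quick sketch:

```shell
# Try to load XML::Parser; success means it is already installed
if perl -MXML::Parser -e 1 2>/dev/null; then
    msg="XML::Parser is already installed"
else
    msg="XML::Parser is missing - install it from your distro or CPAN"
fi
echo "$msg"
```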
Providing you are connected to the internet, running this as root will install XML::Parser --- perl -MCPAN -e 'install XML::Parser' ,,, There are other ways of installing modules from CPAN - see the FAQ at www.cpan.org/misc/cpan-faq.html for details. Back to the list ****** Install Xubuntu alternative from CD Q:: I need to install Xubuntu on to a computer with no DVD drive. I have copied the file xubuntu-7.04-alternate-i386.iso to my home directory then burned it to a CD using Xfburn. The CD, however, will not boot on either of my older computers. They do boot from CDs; in fact the Dapper install CD boots and so does the CentOS install CD - but not the Xubuntu one I just burned. I checked the MD5sum on the burned image, and on the file I put into my home directory, and they are the same. A:: There are a number of possible causes for this. Going for the low-hanging fruit first, it could be that you have written the ISO image to the disc as a file instead of as an image. When you look at the content of the CD, do you see several files and directories or just one big file? If it is one file, you have not written it as an image. With Xfburn, you should use the Burn CD Image button. Another possible culprit is the disc. Are you using CD-RW discs? If so, try a new one, or a CD-R. CD-RW discs have a lower reflectivity, which becomes even worse when they have become scratched with use. This can be compounded with older computers, where the drive lasers may be getting weaker and the lenses dirty. This may not be bad enough to stop your drive reading the disc, but it may slow down the process enough for the BIOS to decide there is no bootable CD in the drive and move on to the hard disk. One solution for discs that refuse to boot is to use Smart Boot Manager. Write the file to a floppy disc, boot from that with the CD in the drive, then choose the CD from the Smart Boot Manager menu. 
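To rule out a bad burn (as opposed to a bad download), you can also checksum what is actually on the disc. The ISO does not fill the whole CD, so read back only as many bytes as the image contains. The idiom is sketched here against a scratch file standing in for the drive; on a real system you would read from /dev/cdrom and use your real ISO name:

```shell
# Scratch file standing in for the ISO (and for /dev/cdrom when reading back)
dd if=/dev/urandom of=/tmp/image.iso bs=2048 count=16 2>/dev/null

SIZE=$(stat -c %s /tmp/image.iso)
# Read back exactly SIZE bytes from the "disc" and checksum them
DISC_SUM=$(dd if=/tmp/image.iso bs=2048 count=$((SIZE / 2048)) 2>/dev/null | md5sum | cut -d' ' -f1)
FILE_SUM=$(md5sum /tmp/image.iso | cut -d' ' -f1)

if [ "$DISC_SUM" = "$FILE_SUM" ]; then echo "burn verified"; else echo "burn is bad"; fi
```

ISO images are always a whole number of 2,048-byte sectors, which is why bs=2048 divides evenly.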
Back to the list ****** Broadcom now working with SUSE on Dell M70 laptop Q:: I run OpenSUSE 10.1 on a Dell M70 laptop, but have to use NdisWrapper to get my wireless to work. I have to use modprobe to activate the wireless, which works a treat. I believe my Dell laptop uses the Broadcom wireless chipset. I would like to use OpenSUSE 10.3 but cannot get the wireless to work - the distro comes with the drivers built-in, but I don't know how to activate the wireless to get it working. A:: The Linux kernel now includes native drivers for the Broadcom BCM43xx wireless chipset, but these are reverse-engineered and do not work well with all variants of these chips. I used it with a BCM4306 card and the drivers worked reasonably well, but some of the later chipsets don't work at all (yet). You can see an up-to-date list of currently supported devices at http://linuxwireless.org/en/users/Drivers/b43 (run lspci in a root terminal to see which version of the hardware you are using). If you have a supported chipset, you should be able to configure it in Yast, though you may need to persuade SUSE to load the driver when you boot. If --- lsmod | grep bcm ,,, shows the module as being loaded, you should be OK. Otherwise, do --- modprobe bcm43xx ,,, to load it now, and edit /etc/sysconfig/kernel to add bcm43xx to the list of modules in the MODULES_LOADED_ON_BOOT line to have it load automatically at boot time. If you have one of the unsupported chipsets, you'll need to continue to use NdisWrapper, just as you do with SUSE 10.1. In this case, you may need to make sure that the native driver does not try to grab the device first by adding the line --- blacklist bcm43xx ,,, to /etc/modprobe.d/blacklist. Then set up NdisWrapper as you've done in SUSE 10.1. Back to the list ****** Stop fsck running every 27 boots on Ubuntu Q:: I tried to boot into my Ubuntu Gutsy and it wouldn't go. I got Grub just fine, selected the OS, and it went to the boot splash screen and the loading bar. 
It got only a short way up this bar before the screen went black with a cursor in the top-left of the screen. There it stayed and wouldn't go anywhere else. I suspect that it's something to do with the regular fsck checks that it was perhaps trying to perform, but could not. If this is the case, what can I do? It seems that Ubuntu performs an fsck check every 27 times on booting. I fixed it by booting into recovery mode and letting fsck run, but it's going to happen again isn't it? How can I stop it happening every 27 boots? A:: Boot splash screens are very pretty, but they also hide any error messages that the system wants to show you. Some distros let you remove the splash screen mid-boot by pressing a key, usually Esc or F2, but Ubuntu no longer does this. You can stop the splash screen in the first place by editing the boot options. When the "Press Esc to enter the menu" text appears, do so. Then press E, for edit, highlight the kernel line and press E again. Remove "quiet splash" from the end of the kernel line and press Enter followed by B to boot with the changed setting. You should now see all the boot text and be able to see exactly what is stopping the boot process. The change you made is temporary, but you can edit /boot/grub/menu.lst to make it permanent, or copy the current boot stanza and edit that, to give you options to boot with or without a splash screen. If a regular call to fsck is stopping the boot, it means that there's either a fault on the filesystem that fsck needs your input on before proceeding, or that fsck is taking a while and you haven't waited long enough. Booting without the splash screen will tell you which. Once fsck has completed successfully, you should not have this problem again, unless there is something wrong with your system that is causing filesystem corruption. One way to avoid the automated fsck is to regularly run it manually, so you should never go 27 mounts without a check. 
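Remember that fsck must only be run on an unmounted filesystem, so do it from recovery mode or a live CD. The idiom is demonstrated here on a scratch ext2 image rather than a real partition (substitute your real device, such as /dev/sda1, once you are comfortable with it):

```shell
# Build a small scratch ext2 filesystem in a file (stands in for a real partition)
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=4 2>/dev/null
mke2fs -q -F /tmp/scratch.img

# -f forces a full check even if the filesystem looks clean;
# -p fixes trivial problems without asking
e2fsck -f -p /tmp/scratch.img && echo "filesystem is clean"
```

Practising on an image file like this is a safe way to learn what the tool's output looks like before pointing it at real hardware.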
You can also use tune2fs to set the intervals between checks. --- tune2fs -c 0 /dev/whatever tune2fs -i 2w /dev/whatever ,,, will tell it to ignore the mount count with the first command and to check every two weeks with the second. It is very unwise to disable both of these, otherwise filesystem corruption could build without you knowing until a serious failure occurs. If you want to get really clever, you could add something like this to a daily Cron script: --- #!/bin/sh MOUNTS="$(tune2fs -l /dev/whatever | awk '/^Mount count/ {print $3}')" if [ "$MOUNTS" -gt 20 ] then echo "/dev/whatever has been mounted $MOUNTS times since the last check, run fsck on it now." fi ,,, This will email you when the mount count exceeds 20, giving you the chance to run it manually. The second line looks complicated, but it just runs tune2fs to list the filesystem information and extracts the mount count with awk. Back to the list ****** Canon scanner not working in Linux - will it work in VMware? Q:: I have a Canon professional scanner that will not work with Linux. If I installed Windows in a virtual machine so I can run Windows in Linux, would the scanner be able to run in the VM Windows or does USB control not work in that way? A:: You don't say which virtual machine software you are using, but this is certainly possible in VMware Workstation and with VirtualBox, though the two apps use very different methods. In VMware Workstation, go to VM > Settings > Hardware > USB Controller and tick the box to 'Automatically Connect USB Devices To This Virtual Machine When It Has Focus'. If you then connect (or power up) the scanner while the Windows virtual machine is running and has focus, the scanner will be attached to the Windows guest OS instead of the Linux host. If you don't want to use this auto-connection, or if your scanner is connected and powered up before you boot the Windows virtual machine, you can manually connect it to the virtual machine at any time. 
Go to VM > Removable Devices > USB Devices, where you will see a list of attached devices - select your scanner. If you are using a version of VMware Workstation older than V6, you may need to disconnect the device from Linux first by unloading the module before it is available to VMware. The later versions are able to force the host OS to relinquish control of hotplugged devices. This was needed with mass storage devices, where you had to do --- rmmod usb-storage ,,, before the device could be seen by the guest OS. The procedure with VirtualBox is different: instead of giving it the power to take control of any newly connected USB device, you need to tell it about each device you want the virtual machine to see. Go into the USB section of the settings window for your VM and make sure that 'Enable USB Controller' and 'Enable USB 2.0 (EHCI) Controller' are both ticked. Then click on the Add button to the right of the USB Device Filters list and select your scanner from the list that pops up. Click on OK and start the virtual machine and it should detect your scanner. Instead of adding each device individually, you can leave some or all fields blank to match a range of devices. If you use Gentoo, you will need to install the virtualbox-bin package for USB support, as the app-emulation/virtualbox package - the one that installs from the Gentoo source code - doesn't handle USB on virtual machines. Once you've made the scanner available to your virtual machine, you will need to install drivers. Even if the device is supported in Linux, it now has a direct connection to the virtual machine, so the presence or otherwise of Linux drivers has no effect. The first time you connect the device, Windows should pop up the usual 'found new hardware' message and take you through the driver installation process, so I hope you haven't lost the Windows driver disc that came with the device, as I usually do. 
Back to the list ****** Mandriva booting into text mode after installation Q:: I have just installed Mandriva 2008 'Free' on to my Asus PRO31F Series laptop. It runs Windows Vista Home Premium on 1GB RAM and 100GB of hard drive space. The installer successfully partitioned my hard drive and installed Mandriva. I do not appear to have lost any files in Windows. So far, so good! When the screen comes up requiring a login, all I see is: --- localhost login: johnm (my login name) Password: (I type in my password) [johnm@localhost ~]$ ,,, I apparently have to type in a command - but I have no idea what it should be! A:: This has long been a frustration with Mandriva (and Mandrake before it). During installation, the program failed to identify and automatically configure your graphics hardware. The summary screen, towards the end of the installation, will have indicated this, but the warning is not that obvious, so you were able to proceed without configuring the X Window System environment. As a result, you boot into a console login with no graphical desktop. The good news is that you do not have to waste time re-installing the whole distro in order to fix this. When you see the login prompt, log in as root with the password you set for root during installation, then run --- XFdrake ,,, The command name is case-sensitive, so the first two letters must be in capitals. This runs a text version of the X configuration tool, which you can navigate using the Tab key to move between options, the Enter key to invoke them and the space bar to select choices, such as video card. Once you have set up the correct monitor and video card (the problem is usually caused by the monitor not being auto-detected) use the Test option to make sure your settings work and Quit to save your settings. Now you can start the X desktop by running --- /etc/init.d/dm start ,,, In future, the desktop should load automatically when you boot Mandriva. 
If not, check whether 'Automatically start the graphical interface' is set in the Options section of XFdrake. Once X is running, you can use a graphical version of XFdrake from the Hardware section of the Mandriva Control Centre for any fine tuning you may wish to do. Back to the list ****** Reformat JBOD drive from FAT32 to ext3 Q:: I recently purchased a 1TB USB desktop hard drive from Iomega, which consists of two 500GB drives strung together using JBOD so that it appears as a single 1TB volume. Naturally it came pre-formatted with a single FAT32 filesystem, which is fine except that I want to store files bigger than 4GB and also have proper file permissions. Can I reformat the drive to ext3 (if possible with multiple partitions) without breaking the JBOD configuration, and if so is it then just a case of firing up a standard disk partitioning tool like DiskDrake and proceeding as normal? I haven't risked trying it yet as I don't want to end up with a single 500GB drive or worse! A:: When you connect this drive to your computer, what does dmesg or syslog show? Does it show one drive or two? If it shows as a single drive, the JBOD (Just a Bunch Of Disks) magic is handled by the drive's firmware and the internal configuration is largely irrelevant. If two drives show up - say, sda and sdb - the joining of the disks is handled by software. From the description on the manufacturer's website, it appears that the former setup is used and the JBOD configuration is internal. In that case, you can safely treat the device as if it were a single drive; the number of drives inside the case is of no more relevance than the number of platters in a drive. 1TB drives are still noticeably more expensive per MB than 500GB ones, so this is probably a cost-saving measure - even with the extra cost of the firmware to combine the drives, they would be cheaper to manufacture. Run your favourite disk partitioner - DiskDrake, cfdisk or GParted - on the disk and see what it shows. 
If it shows a single 1TB drive, you should have no problems partitioning it. To get a definitive answer, you would have to contact Iomega using one of the addresses listed at www.iomega.com/support/contact/index.html and give the exact model number of your drive, but I would feel safe partitioning it if it were my drive. Back to the list ****** Live CD changing login prompt in Linux Q:: I have a Dell Inspiron 6000 with PCLinuxOS 2007 installed, and booted a DVD with various Live distros. Afterwards I removed the disk and re-booted. Opening a terminal I expected the usual --- peter@Laxey $ ,,, prompt. I was rather shocked to see a new prompt, which is now --- peter@ubuntu $ ,,, What on Earth is going on here? Has the Live CD somehow altered my installed system and changed my settings? A:: I can see why you would be alarmed. It looks like the Ubuntu Live CD has hijacked your system, but this is not the case. It's all down to the way different distros use DHCP to configure the network, and how your DHCP server works. When a computer broadcasts a DHCP request over the network it can optionally include a hostname, which the DHCP server may use to determine the address to give it. If it doesn't send a hostname, the server may well issue one. What appears to be happening here is that PCLinuxOS is set to not send a hostname with the request, and the DHCP server has not previously sent one back, so the hostname from your settings is used. You've run the Ubuntu Live CD on the same computer and it has sent a hostname of 'ubuntu' with its DHCP request. The DHCP server has remembered this, so the next time it gets a request without a hostname from the same computer, it sends the same hostname you used last time (the DHCP server uses the hardware MAC address of your Ethernet adaptor to determine that this is the same computer). 
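Under the hood, distros that send a hostname do it through their DHCP client configuration. With the common ISC dhclient, for example, the relevant fragment in /etc/dhclient.conf is a single statement (the hostname here is just an example - use your own):

```
send host-name "laxey";
```

Distro configuration tools such as the one described below simply write this sort of line for you.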
The easy solution to this occurrence (it doesn't merit the title of problem) is to set PCLinuxOS to behave in the same way as Ubuntu and send a hostname with its DHCP request. This will prevent the DHCP server trying to 'help' by looking up the last-used hostname because it thinks you forgot to send it this time. To do this with PCLinuxOS, go to the Network section of the control centre, select 'Reconfigure A Network Interface' and go to the DHCP tab. Tick the 'Assign Host Name From DHCP Address' box and type your preferred hostname into the box below this. On the next boot, your computer should be correctly named, and will remain so no matter how many Live CDs you throw at it. Back to the list ****** How to install GRUB on a USB stick Q:: I have a laptop with the default OS installed on hda1. I installed Debian on hda2 but installed Grub on hda2 instead of the MBR. I wanted to take this opportunity to learn how to install Grub on to a USB stick instead of just re-installing Linux, but after much Googling, I am still unable to do it. Is it possible to get Grub installed on to a USB stick, without also installing Linux to it? A:: There are at least three ways to get your Grub setup working. You could modify the existing (Windows?) bootloader to chainload Grub; you could install Grub to the MBR or you could set up Grub on a removable device, like a floppy disk or USB stick. To chainload Grub from Windows NTLDR (New Technology Loader), you need a copy of your Grub boot sector on the Windows drive. Do this while in Debian --- dd if=/dev/hda2 of=lin-boot.img bs=512 count=1 ,,, This creates a file called lin-boot.img (the name is unimportant) that contains the first 512 bytes of the partition containing Grub. Copy this file to the Windows C: drive, either by mounting your Windows partition in Debian or copying the file to a USB stick and then copying from that in Windows. 
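Before rebooting, it is worth sanity-checking the copy: a boot sector is exactly 512 bytes and ends with the signature bytes 0x55 0xAA. The check is sketched here against a scratch file (the real image is the one you just created from /dev/hda2):

```shell
# Scratch stand-in for the boot sector: 510 zero bytes plus the 0x55 0xAA signature
dd if=/dev/zero of=/tmp/lin-boot.img bs=510 count=1 2>/dev/null
printf '\125\252' >> /tmp/lin-boot.img    # octal escapes for 0x55 0xAA

stat -c %s /tmp/lin-boot.img              # size must be exactly 512
od -An -tx1 -j510 /tmp/lin-boot.img       # last two bytes should read 55 aa
```

If your copied image is the wrong size or lacks the signature, NTLDR will not be able to chainload it, so this is a cheap test before you touch boot.ini.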
Now reboot into Windows, copy lin-boot.img into C: if you haven't already done so, and edit C:\boot.ini in Notepad to add this line to the end --- C:\lin-boot.img="Debian GNU/Linux" ,,, Now the Windows bootloader will have two options, with the second passing control to Grub from hda2. The second option, and the one I prefer, is to let Grub handle everything. Run grub as root to open the Grub shell and install it to the MBR of the first disk with --- root (hd0,1) setup (hd0) quit ,,, This installs Grub to the first disk, (hd0), after telling it to look for its files in hda2, which Grub calls (hd0,1) - remember that Grub counts from zero. Now you need to modify /boot/grub/menu.lst (some distros use /boot/grub/grub.conf) to add the Windows menu entry, like so --- title Windows rootnoverify (hd0,0) chainloader +1 ,,, which simply tells Grub to pass control of the boot process to the bootloader found at /dev/hda1, which is where Windows keeps its bootloader (this is why Windows needs the first partition to be marked bootable, because the bootloader is on the partition, not the MBR). The final option is to place Grub on a removable device, such as a floppy disk or USB stick. This will be slower than using Grub files on the hard disk, but it does provide a useful backup should the MBR bootloader become corrupted. To do this, copy the /boot/grub directory from your hard disk to the removable device (it must still be /boot/grub on that device). Now you need to set up Grub on the device. To make it easier to find the correct device number in Grub, first do --- touch /media/sda1/findme ,,, replacing sda1 with the actual mountpoint. Now run grub and do --- find /findme ,,, which will return the Grub designation for your device, say (hd1,0), then do --- root (hd1,0) setup (hd1) quit ,,, Grub is now set up and ready to boot on the USB stick, so try it. It is possible you may get a 'File not found' error when trying to boot Linux from the new boot menu. 
This is caused by the BIOS switching the hard disk and USB stick around when booting from the stick. The cure is to replace all calls to hd0 in menu.lst with hd1, or remove all absolute paths and put root (hd1,1) at the top of the file. Note that USB booting is a bit of a black art - not all BIOSes and USB sticks cooperate, so if it refuses to boot at all, you may need to experiment with BIOS boot settings or try a different stick. If you decide to place Grub on the MBR of the disk and then follow the above steps to put it on a USB device, you will then have to repeat the procedure for the hard disk with the USB device removed, otherwise your hard disk boot will no longer work. Back to the list ****** Keep email folders intact when changing distros Q:: We are a multiple PC, dual-boot family, and I would like to get rid of Windows, but I have a problem for which I have not yet found an answer. I must keep my email folders intact, and I sometimes change Linux distros. This ties me to Windows/Mozilla. I've tried a networked hard drive, but it was very slow. Would using an old PC as a mailserver solve my problem, or do you have a better solution? A:: You have two options here, which may be combined. The first is to always install your distros with a separate /home partition. This means that all of your personal data, not just your email, is preserved when you install a different distro, or a newer version of your current favourite. However, this is a one-computer solution and you have several. By setting up your mail server, each person's mail resides on that server and can be read from any computer using any operating system. Provided you are sensible about security, you can even access it from outside of your local network (mail accounts are password protected, so it is no less secure than reading it from the ISP's server) meaning you have access to all your mail anywhere you can get on the internet. 
You could consider installing a webmail program, which means you can access your email from an internet cafe or hotel using nothing more than a web browser. There are a number of webmail programs to choose from: SquirrelMail (www.squirrelmail.org) is a popular choice, although my favourite is RoundCube (www.roundcube.net). Back to the list ****** Dual-booting Linux with SUSE Q:: I'm guessing that a few readers have tried dual booting SUSE 9.2 and Windows XP. Given the problems with 9.1 (and several other distros, not just SUSE) a few months ago, my question is, does Grub get along with the XP boot loader straight out of the box now? I'm asking because my daughter's boyfriend is coming to stay with us, and he wants me to install SUSE on his laptop for him. The laptop is currently running WinXP, but he's seen my daughter's PC running Linux and wants to begin migrating. Naturally I'm delighted that he's decided to embrace the penguin, but I'd rather not (a) have problems during his stay trying to get a dodgy dual boot to work or (b) make his XP unbootable. (Actually, there might be some pleasure in that, but that's another story...!) If all is well, then SUSE 9.2 will go on. If there are still problems, I could always do a 9.0 install instead, but I'd really rather give him the benefit of the latest versions of everything. A:: When you install SUSE, it will locate the Windows XP file system and automatically add an entry to the Grub configuration so that XP will continue to boot. For the most part, distributions are intelligent enough to spot what else is on the discs and will add the appropriate entries to the boot loader so that everything can keep on working. Back to the list ****** Looking for distro that runs on Apple Mac G4 Q:: As someone who dislikes bloatware, eye-candy, and other such stuff that clogs and hogs a computer's hardware, I have been considering refurbishing an old computer to run Linux with a lightweight WM. 
The trouble is, the computer I hope to use (an old machine from work) is an Apple Mac. I know that Linux, BSD, etc have been ported to many CPUs, including the PowerPC, but I don't know if there is anything special about Apple hardware that allows only Mac OS to run on them. Can Linux run on Apple Mac G4 PCI Graphics hardware with 400 MHz PowerPC and 768 MB RAM? A:: Apple PPC hardware can definitely run Linux - I ran it on a 1GHz iBook G4 for almost three years, until the hardware failed. With a 400MHz CPU, you'll want something lightweight, but that appears to be your objective anyway. There aren't many distros for PowerPC, but all of them are intended for use on Apple hardware. The main choices are: Yellow Dog Linux (www.yellowdoglinux.com) which is derived from Fedora, Debian (www.debian.org) which runs on just about anything, and Ubuntu. Ubuntu no longer officially supports PowerPC but PPC versions are available in the ports directories of its download servers. Your best choice for this would be Xubuntu, which uses the Xfce desktop and is available from http://cdimage.ubuntu.com/xubuntu/ports/releases/gutsy/release. While Xfce is significantly lighter than the likes of KDE and Gnome, it may not fulfil your idea of a truly lightweight WM. However, Ubuntu and Debian in particular have plenty of alternatives available, so you can go as minimalist as you like, whether you want the function of FluxBox (www.fluxbox.org) or IceWM (www.icewm.org) or the true minimalism of Ratpoison (www.nongnu.org/ratpoison) or EvilWM (http://evilwm.sourceforge.net). Back to the list ****** Connect an extra SATA hard drive in Debian Q:: I have a Debian GNU/Linux rig on a Celeron based system. I would like to connect an extra SATA hard disk as an upgrade and need a Linux compatible SATA adaptor in PCI form factor, before I break the bank for a new system mainboard. 
Can you give me guidance on this search, so I can connect the SATA drives with a PCI SATA adaptor, and also tell me which mainboards have a SATA controller compatible with Linux? A:: What counts here is not the individual makes and models of adaptors but the chipsets they use. Manufacturers sometimes switch to a different chipset while keeping the same model name, making decisions based on anything but the chipset somewhat risky. However, most SATA chipsets are supported in Linux, especially those that support the AHCI (Advanced Host Controller Interface) standard. The kernel contains drivers for a number of SATA controllers - you can see the ones that are included with your distro with this command --- modprobe -l | grep sata ,,, I've used a few low-priced SATA controllers that all utilised the Silicon Image chipset, which has been supported in the Linux kernel for years. Equally, motherboard SATA controllers are well supported, especially if you go for a well-known brand. SATA is no longer new technology, to the point where SATA drives now cost less than their PATA equivalents, so you should have no trouble with anything from a common source. You can look up individual controllers or motherboards (or just about any kind of hardware) in the Linux Compatibility database at www.linuxcompatible.org/compatibility.html. Some distros also list known working hardware for their kernels on their websites. There are also PATA to SATA adaptors that fit on the back of a SATA drive and provide a PATA interface (and adaptors that do the opposite, so make sure you get the right type), such as this one from Maplins in the UK: http://tinyurl.com/2t4j3z. Other countries will have similar suppliers that provide similar items. Back to the list ****** How to switch from Gnome to KDE in Ubuntu Q:: I am a first-time Linux user, having used MacOS and Windows for some time. 
I have replaced Windows XP with a special version of Ubuntu that includes both Gnome and KDE, but I don't know how to change from Gnome to KDE. How do I do this? A:: The disc included, and installed, both the Gnome and KDE desktops, along with Xfce too. To switch between them, there is a discreet 'Options' button at the bottom-left of the login screen; click on this and choose 'Select Session' from the menu that appears. This allows you to choose the type of session you run for this and future logins until you change it again. If you are already logged in to the KDE desktop, log out by pressing the Shutdown button and selecting Logout, choose a new session type and log in again. Back to the list ****** Set up Linux box as a remote desktop Q:: Is it possible to use a Linux box to run a remote desktop hosted on my company's server? I often work from home by connecting to their IP address, logging on to Windows, then running an application. As I understand it, all the processing is carried out on the remote (Windows 2003) server, and my desktop is basically just painting a screen image and sending off key presses to be interpreted. This sounds to me like something that should be possible from a Linux machine. I have tried searching for a Wine application that would do the trick, but I can't find anything. Can you help me? I could then junk Windows and start cooking! A:: You do not need Wine for this, because there are native remote desktop clients (and servers) for Linux. You have a choice of at least one command line program and two graphical clients, and it's highly likely that your distro includes at least one of these, so you may even have it installed already. The command line program is rdesktop (http://rdesktop.sourceforge.net) and you run it like so --- rdesktop my.remote.server ,,, giving it either a domain name or IP address to which it should connect. 
You may need to add some options, such as -u followed by the username to connect with, or -s and the name of the application you wish to run. For a more graphical approach, try grdesktop (www.nongnu.org/grdesktop), a GTK front-end to rdesktop. This provides exactly the same options as rdesktop but with a GUI to set and save your preferred settings. If you use the KDE desktop, you probably already have a remote desktop client installed - krdc. This is usually started from the Internet section of the KDE menu and could be labelled 'Krdc' or 'Remote Desktop Client' depending on whether you have KDE set to display program names or descriptions in the menu. Either way, just start this, give the address of the computer to connect with and it should connect. As with the command line client, the graphical clients may need some extra options to make the connection, but one advantage of the graphical programs is that they remember these for the next session. All of this assumes that the server is running the Windows Remote Desktop software, which is most likely as it is included with Windows Server 2003. The alternative is that it uses VNC, in which case you should install TightVNC (www.tightvnc.com) and run that. This is not an issue if you are using KDE as krdc handles remote desktop and VNC connections from the same program. Back to the list ****** Shrinking primary partitions to make room for logical partitions Q:: I have a HP 6710 laptop that already has three primary partitions on it: Vista, HP recovery and another 2GB partition. Am I right in thinking that I can shrink the Vista partition, fill the space with an extended partition, then put three logical partitions inside for root, home and swap? I would install the bootloader to root and use EasyBCD to load this into the Vista boot sector. An alternative solution would be to install to my external 160GB hard drive but I am reluctant to do this, as I want to have access to Linux all the time. 
Should I partition first from Vista (I have Partition Manager) or let the distro do it all? After a year of trials in VirtualBox I've decided on the testing branch of Debian, as it makes you learn all about APT and the rest of the terminal commands. If it gets too difficult I can always change to Mepis. A:: You can do what you ask, and it is a fairly simple process. However, any time you modify filesystems and partitioning, you are taking some risk. If the process should be interrupted, you could suffer data loss. In some ways, this is safer on a laptop, because the battery provides protection against a failure of the power supply, but you must do this with the computer connected to the mains, as a flat battery midway through could be disastrous. You should back up all important data before carrying out any partition resizing operation in any OS. The first step is to boot into Vista and defragment it, as a fragmented filesystem is difficult, sometimes impossible, to resize. Once you have done this, boot from the Debian disc and let it take care of the partitioning. Select the Manual option under Partitioning, select your Windows partition and press Enter. You will see a menu with an option to resize the partition; take that and pick the size you want to make it, then use the Guided Partitioning option to have Debian allocate suitable sizes for the root, swap and home partitions. Unless you know exactly what you are doing, it is generally best to leave it to the people who developed the distro to decide how much space to give each of its components. However, I strongly suggest you take the option to use a separate home partition or you will almost certainly regret not doing so at some time. I'm not familiar with EasyBCD, but the Grub bootloader installed by Debian and most other Linux distros is a good way of handling multiple operating systems and is well supported. 
If you want to use EasyBCD, make sure you tell the Debian installer to install its bootloader into Debian's root partition. Otherwise, let it install to the master boot record of the disk and let Grub handle the choice between operating systems. EasyBCD appears to be tied to Windows, so is not a good choice if you plan to drop Windows at some time, whereas Grub is independent of any operating system. As usual with open source software, the choice is yours. Back to the list ****** Install Fedora software from a DVD Q:: After installing Fedora I tried to install some additional software in RPM format from the same disk. I received the following error message: "Unable to retrieve software info - this could be caused by not having a network connection." That is correct - the machine in question is not connected to any network and will be used as a standalone machine. I then went back a version to Fedora 6 and after a painstaking hour of installation was presented with the same problem. Next I tried Fedora Core 5 with the same result. Oh dear! Does this mean that you can only install software on a Fedora installation from the web? My Mandriva installation will allow me to install any RPM package from anywhere, even my USB memory stick. Is Mandriva the only distro that has got it right? Somewhere in there must be a file where I can redirect the package management software to another source. I have searched but failed to find it - can you please point me in the right direction? A:: Most package managers do allow you to install software with a simple click. Yes, they require superuser permissions to do this, but so does Windows. The difference is that Windows lets a user run with admin privileges all the time, which is one of the ways it contributes to its own insecurity. You describe a common and frustrating problem with a simple fix.
Fedora 7 and previous installations assumed a network connection, and trying to run the software manager without one gave exactly the error you describe. I agree with you that this is wrong, and now it seems so do the Fedora developers, as this is not a problem with Fedora 8. You can fix this with other releases of Fedora by editing the repository files to disable all online sources and add one for the DVD (you need to be root to do this). Load /etc/yum.repos.d/fedora.repo into your favourite text editor, find the section starting [fedora] and comment out the baseurl and mirrorlist lines by placing a hash (#) at the start of each line. Then add a new line reading --- baseurl=file:///media/Fedora%207%20i386%20DVD/ ,,, This creates a new repository at /media/Fedora 7 i386 DVD, where the DVD is mounted. The spaces in the mount point have to be replaced with %20 to be a valid URL. You then have to edit the other .repo files and change any occurrences of enabled=1 to enabled=0. Now the only repository that is enabled is the one for the DVD, and running Add/Remove Software should allow you to install software from the DVD. Of course, you will not have access to any security updates that Fedora may release, so it would be wise to check the Fedora website from time to time for any updates. You could copy these into a directory on your computer after downloading elsewhere and edit fedora-updates.repo to point to this directory in the same way that you pointed fedora.repo to the DVD. Back to the list ****** Set up Linux as a kiosk - just starting up Firefox Q:: I finally got a half-decent machine that runs Ubuntu. My girlfriend won't touch a computer unless it runs Windows XP, so it's dual booting. I read ages ago about kiosk machines that would load Firefox on startup then shut down when Firefox was closed. Can I do this with a virtual machine, and how difficult is it likely to be?
I'd like her to put in her username and password at the GDM screen (made to look XP-ish) and when she logs in it loads up an XP VM to her desktop with nothing else, not even panels, loaded with it. I'd then like the machine to shut down when she tells the XP VM to do so. She won't be keen on the whole 'shut down Windows, then shut down Ubuntu' because she's often in a hurry. A:: This is possible with both VirtualBox and VMware Workstation. To do it with VirtualBox, first create the VM as normal and make sure everything works. Then test that you can run it from the command line with --- VBoxSDL -fullscreen -vm "VM name" ,,, The name is the name shown in the list of VMs in the VirtualBox GUI - there's no need for a path. This should start up your Windows XP virtual machine and return to the command line when you shut down Windows. Now you need to have it do this automatically when the user logs in. The first step, if you haven't already done so, is to create a user from the System > Administration > Users And Groups menu item. As you're running the virtual machine full-screen, there's no need for anything more than the most minimal of window managers underneath - you certainly don't want anything as heavyweight as Gnome running when Windows will want so much of your memory. My favourite for this is EvilWM, which you can install through Synaptic. Then create the file .xsession in the user's home directory, containing this
---
#!/usr/bin/env bash
/usr/bin/evilwm &
sleep 3
exec VBoxSDL -fullscreen -vm "VM name"
,,,
Log out and type the other user's login name into GDM. Before you give the password, click on the menu at the bottom-left to call up the Sessions window. Select 'Xclient Script' and click the Make Default button when asked. Now, whenever that user logs in, the .xsession file will run, starting VirtualBox full-screen as if they've logged into Windows.
When they shut down Windows, VBoxSDL will exit, and the .xsession file will finish, returning you to the GDM login screen. Back to the list ****** ntpdate problem: no servers suitable for synchronisation Q:: I have a small, self-contained network for testing VoIP, and every machine has a static IP in the range 192.168.254.x. I've assigned the machine with address 192.168.254.200 to be the NTP server. NTPD is installed and starts fine, but if I ask another machine to sync with it using ntpdate I get: no servers suitable for synchronisation. This happens on both my Linux servers and on my Mac, so the problem appears to be in the config of the NTP server rather than the clients. I've also got some Cisco IP phones that use Simple NTP rather than full NTP and they pick the time up from the server no problem. NTP configuration seems to be very poorly documented. The ntp.conf file on the server contains the following (and I just want machines on the network to be able to get the time): --- restrict 192.168.254.0 mask 255.255.255.0 nomodify notrap ,,, As I understand it, that will allow any machine with an IP in the range 192.168.254.x to get the time off the server for itself but not to modify the time on the server. I also tried setting the stratum level, but to no avail. A:: The first step to diagnosing this is to run ntpdate with the -d (debug) argument. This causes ntpdate to show details of the communication with the server, but not alter the system clock. I suspect you'll see something like --- 192.168.254.200: Server dropped: strata too high ,,, This is usually caused by the server being too far out of sync with the upstream servers, so it sets an artificially high stratum value to prevent other computers trusting it. In effect, the server is saying, "Here's the time, but I'm not that sure of it", to which the client responds, "OK, I'll leave it, thanks." This probably also accounts for the simpler clients accepting its time.
Leave the server running for a few hours to allow it to bring itself into sync with the upstream servers from pool.ntp.org or whichever servers you've set in ntp.conf. Running ntpq -p 192.168.254.200 will give some useful information, reporting the peers known to the server and their accuracy. You want most of them to have a * or + in the first column and a low value in the st (stratum) column for them to be considered authoritative. The stratum setting in ntp.conf can only be used to increase the stratum level, which won't help here. You're right about the NTP documentation. It's written by those with thorough knowledge of the subject, which is good, but assumes a similar level of understanding among readers, which is not. Back to the list ****** MySQL Administrator crashes with undefined symbol error Q:: I've never been able to get MySQL Administrator to work on a Linux workstation. I've tried different workstations connecting to different servers - I enter server, username and password, click Connect and the window disappears. It's not the server, because I've tried connecting to several, and I can connect to them using MySQLAdmin on Mac OS X, but the Linux version just appears to be permanently broken. When starting it from the terminal it still crashes when I log in and tells me --- /home/andrew/mysql-gui-tools-5.0/mysql-administrator-bin: symbol lookup error: /usr/lib/libbonoboui-2.so.0: undefined symbol: g_type_register_static_simple ,,, I'm using Ubuntu 7.04 'out of the box'. Do I need to use a different version of libbonoboui? A:: Did you install mysql-gui-tools from the MySQL website or install mysql-admin via Synaptic? It seems this is an old problem that only occurs when using the MySQL download. It's still open on the MySQL bug system, where it was reported in June 2007. Most of the reports involve Ubuntu, which was at version 7.04 when this bug was reported, but it also affects 7.10.
The cause appears to be a conflict between libraries included with the download and those installed on your system. This isn't a problem when using Ubuntu's own packages. If possible, uninstall the current version and start again from Synaptic. If you want to use the MySQL download, there are a few suggestions for fixing it, not all of which work for everyone. The most complete is to build the package from source, which explains why the program works perfectly on my Gentoo systems. Less extreme solutions involve removing libraries from the MySQL Administrator install directory, causing it instead to use the system libraries, which are consistent with one another. Before you do anything like this, do not be tempted to delete libraries. You may find you need them later, so rename them instead. The first candidate is /opt/mysql-gui-tools-5.0/lib/libgobject-2.0.so.0 - rename that and try again. If it still fails, try renaming all of the libraries in /opt/mysql-gui-tools-5.0/lib (or rename the directory) forcing the program to use only the system libraries. With the latter, you may get failures from missing libraries, in which case you should replace them one by one until everything works. That way you use only the minimum of the bundled libraries, avoiding version clashes with the system. It has also been reported that creating the empty directory /etc/mysql/conf.d helps with Ubuntu, although we were unable to verify this. This does highlight one of the drawbacks of using packages created for another system, which is why we always recommend installing from your distro's repositories wherever possible - that way someone else will already have dealt with any compatibility matters. Back to the list ****** Get Talk-Talk internet working in Mandriva Q:: As a complete newcomer to Linux, I find the whole idea of open source software and the independence it offers appealing. However I seem to have fallen at the first hurdle. 
Having installed Mandriva Powerpack 2008 as the sole operating system on my old Packard Bell desktop, which used to run XP Pro with no problem, I am unable to set up an internet connection. We have a wireless setup, courtesy of Talk-Talk, which happily handles one desktop, a laptop and a Wii. When I insert the disc to load up the software on the Linux machine it recognises the disc but I can find no way of loading it. Am I missing something obvious, or do I need to import another program to help with the setup? A:: That CD is for Windows only, and it's not even needed there. Such CDs just set up the modem to log on and sometimes install branded versions of software such as Internet Explorer. All configuration can be done via a browser, but you need to use a wired connection for this. Many wireless routers only enable configuration changes via the wired connection, otherwise anyone who could connect to your network wirelessly could make changes. Ensure your wired network is set to use DHCP to obtain an address automatically - you do this in the network section of the Mandriva Control Centre. Plug in the cable, load up Firefox and type http://192.168.1.1 in the address bar. This will get you to the router's administration page, where you will be asked for a login and password - the default is 'admin' for both, but you should have changed this by now. Provided the router is already connecting to the internet, there is no further action needed here, and you should now be able to access the internet using the wired connection. Wireless setup is slightly more complicated, but it would appear that you already have the router correctly configured as it works with your other systems. There are two possibilities here - one is that you haven't set up the encryption system, which is done from the Network & Internet section of the Mandriva Control Centre. The other is that the drivers for your wireless adaptor are not included by default and need to be installed separately. 
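If it turns out you do need a driver, the first step is to identify the wireless chipset. These commands are a rough sketch of how to do that from a terminal - lspci covers built-in and mini-PCI adaptors, lsusb covers USB sticks:
---
lspci | grep -i -E 'network|wireless'
lsusb
,,,
Searching the web for the device name shown, together with 'Mandriva', will usually tell you which driver package you need.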
Unfortunately, you have not included any information about your hardware. The output from lshw would have identified your hardware and the drivers you may need to install for it - very straightforward. Back to the list ****** Samba drive mapping Q:: I can copy a file from a Linux machine running Mandrake 10.0 to a Windows directory in a partition on the same machine, ie: --- cp samba_issue.txt /mnt/windows/ ,,, This works OK. I can read from, but cannot copy any files to, a Windows directory on another machine mounted using Samba, ie: --- cp samba_issue.txt /mnt/pc1_DOWNLOADS/ cp: cannot create regular file '/mnt/pc1_DOWNLOADS/samba_issue.txt': Permission denied ,,, The same PC re-booted in Windows Me can copy to the Windows directory on the other machine. The target Windows machine is a Windows 98SE machine and has the directories shared with no password, as I know from past experience that there are issues that I don't understand with encrypted or plain text passwords not working correctly. The machines are networked together by good old-fashioned Ethernet wires, via a router. The Samba connections were set up via the Mandrake control centre, and I have checked that I don't have the 'read-only' option for the mount checked.
The Samba packages I have installed are:
---
$ rpm -qa | egrep -i samba
samba-client-3.0.2a-3mdk
samba-winbind-3.0.2a-3mdk
samba-common-3.0.2a-3mdk
samba-doc-3.0.2a-3mdk
samba-server-3.0.2a-3mdk
,,,
My "fstab" file looks like:
---
/dev/hda5 / ext3 defaults 1 1
none /dev/pts devpts mode=0620 0 0
/dev/hda7 /home ext3 defaults 1 2
/dev/hdc /mnt/cdrom auto umask=0,user,iocharset=iso8859-15,codepage=850,noauto,ro,exec 0 0
none /mnt/floppy supermount dev=/dev/fd0,fs=ext2:vfat,--,umask=0,iocharset=iso8859-15,sync,codepage=850 0 0
//pc1/DOWNLOADS /mnt/pc1_DOWNLOADS smbfs username=%,defaults 0 0
//pc1/MY\040DOCUMENTS /mnt/pc1_MY-DOCUMENTS smbfs username=% 0 0
//pc1/MY\040MUSIC /mnt/pc1_MY-MUSIC smbfs username=% 0 0
//pc1/RECORDINGS /mnt/pc1_RECORDINGS smbfs username=% 0 0
/dev/hda1 /mnt/windows vfat umask=0,iocharset=iso8859-15,codepage=850 0 0
none /proc proc defaults 0 0
/dev/hda6 swap swap defaults 0 0
,,,
I don't really understand the format of this file and have not edited it by hand. I understand that it controls how drives are mapped on to your machine. My understanding is that it is the entries in one's fstab file that control the mapping of drives on the machine you are running on, and that the /etc/samba/smb.conf file controls how the machine you are on appears as a Samba server on the network. Is this correct? Any help from you to get Samba writing from the Linux Mandrake 10.0 machine to the Windows 98SE machine working would be much appreciated. A:: A common problem with Samba mounts is permissions. Obviously, Windows lacks the facilities of Unix file permissions, so these are masked by Samba to replicate the sort of file Linux may expect to see. The simplest option is to add the entry 'umask=0' to the mount lines in /etc/fstab for the Windows mounts, which will allow any user to modify the files. As a more secure approach, you could create a group that can access Windows files, then use 'umask=007,gid=group', where group is the GID of the group you created.
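For example, assuming you created a group called winusers with GID 1001 (both the name and the number here are only examples), the DOWNLOADS line in /etc/fstab would become:
---
//pc1/DOWNLOADS /mnt/pc1_DOWNLOADS smbfs username=%,umask=007,gid=1001 0 0
,,,
Add your user to that group with 'gpasswd -a yourname winusers' as root, then log out and back in for the new group membership to take effect.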
This will enable users within this group to modify files, without their having to worry about everyone else being able to do it too. 'man 5 fstab' is a good place to start when trying to figure out the fstab file. /etc/samba/smb.conf is purely designed for the smbd and nmbd services - modifying anything in there is not going to change the way your mounts behave. Back to the list ****** Command line mail Q:: In a recent issue there was a mention of a program that could use my ISP's SMTP server to send emails from the CLI. Can you remind me what its name was? A:: Any of the standard mail transfer agents, such as Postfix or Sendmail, can do this, but you probably don't want to install a full-blown MTA for this simple task. The standard mail program will do this too, but it requires a local MTA. If all you want to do is send mail via your ISP's mail server, the best program for the job is ssmtp (http://packages.debian.org/stable/mail/ssmtp). Although this is a Debian program, Fedora, Mandriva and Ubuntu also have it in their repositories, and Gentoo installs it as part of the core system package set. However, this is one of those programs that you must set up before you can use it, by editing /etc/ssmtp/ssmtp.conf as root. The key setting is mailhub, which must be set to the name of your ISP's SMTP server. If you want to send mail via a port other than the standard 25, you need to append this to the address, as in --- mailhub=smtp.myisp.com:587 ,,, Other options you may need to set, depending on your ISP's setup, are UseTLS for secure communication with the server and AuthUser/AuthPass if you are required to log in before sending mail. A simple rule of thumb is that whatever you need to change from the defaults in your normal mail client needs to be set here. Using ssmtp is the same as using sendmail. In fact, it also installs a sendmail program, so any program that expects to send via a local MTA can send via your ISP with ssmtp.
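Putting those options together, a minimal /etc/ssmtp/ssmtp.conf for an ISP that requires an authenticated, TLS-protected login might look like this (the server name and credentials are placeholders for your own):
---
mailhub=smtp.myisp.com:587
UseTLS=YES
AuthUser=me@myisp.com
AuthPass=mypassword
,,,
Leave out the UseTLS and Auth lines if your ISP does not need them.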
Sending mail via ssmtp is done by feeding it the mail on standard input and giving the destination address on the command line.
---
/usr/sbin/ssmtp -t <<EOF
From: My Name <me@myisp.com>
To: Your Name <you@yourisp.net>
Date: $(date -R)
Subject: Just a test

This is a test of ssmtp
EOF
,,,
In this case, the destination address is included in the headers sent to ssmtp - the -t option enables this. Everything between <<EOF and EOF is fed to the program's standard input, so it's much easier than a bunch of echo lines. At other times, you may want to send the output of a program or script by email, in which case you may use --- myprogram | ssmtp me@myisp.com ,,, The program called mail, which needs an MTA to run, provides more options for sending mail, such as including the subject on the command line. Since ssmtp emulates sendmail, you can use this with ssmtp if you want the extra features. This requires no extra setting up - just install the mailx package for your distro and read the man page for the extra options - but for starters try --- myprogram | Mail -s "Output from myprogram" me@myisp.com ,,, Back to the list ****** Writing to NTFS Q:: I administer a PC that dual boots Vista and Slackware. It has a shared partition, formatted NTFS. I can get read-only access under Linux, but I cannot create files. I don't need fancy permissions on the shared partition, as it will hold only one user's files. A:: There are three separate approaches to using NTFS filesystems with Linux. You're currently using the driver included with the kernel, which reliably supports only reading - you can write to an existing file as long as the length is unchanged. Creating files or directories is not possible, nor is any file write that changes the length of the file. The second option is NTFS-3G (www.ntfs-3g.org), a Fuse filesystem. This runs in userspace, but is reliable, reasonably fast and available in most distros' repositories.
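Once NTFS-3G is installed, mounting is a one-liner. This sketch assumes the shared partition is /dev/sda3 and that you have created the mount point /mnt/shared - adjust both to suit your system:
---
mount -t ntfs-3g /dev/sda3 /mnt/shared
,,,
To have it mounted on every boot, use a matching line in /etc/fstab; umask=0 gives all users write access, which suits a single-user shared partition:
---
/dev/sda3 /mnt/shared ntfs-3g defaults,umask=0 0 0
,,,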
The third option is Paragon NTFS for Linux, which we reviewed last year. This is a commercial product that comes with a number of utilities and is available from www.ntfs-linux.com. As always, the choice is yours, but the in-kernel driver is by far the most limited and I would recommend trying NTFS-3G next. Back to the list ****** Three USB broadband modem, Huawei E220, not working in Linux Q:: I have a Three USB broadband modem, the Huawei E220, which automatically sets up on Windows and a Mac but not on Linux. Please let me know if any distro produces drivers for this, so I can use it on Linux. A:: This modem has been supported by the Linux kernel since 2.6.20, so any recent distribution should have support for it. When connected, it should set up the device node /dev/ttyUSB0, which you use as the modem device in whatever dial-up software you choose. This configuration is reported to work for WvDial.
---
[Dialer mobile]
Modem = /dev/ttyUSB0
Baud = 460800
Init2 = AT
Init3 = AT&FE0V1X1&D2&C1S0=0
ISDN = 0
Modem Type = Analog Modem
Phone = *99***1#
Username = username
Password = password
,,,
This modem actually contains two devices: the modem and a read-only USB mass storage device. The latter contains drivers for Windows, saving the expense of distributing a driver CD with the device, but as you're running Linux, you don't need it. The kernel should configure the modem side of the device when it detects it, but some people report a problem with this, with the device appearing as a memory stick instead. There is a program called HuaweiAktBbo that switches between the two modes. You will find it in the vodafone-mobile-connect-card-driver-for-linux package, which you can download from https://forge.vodafonebetavine.net/frs/?group_id=12&release_id=11. Although this is marked as a Vodafone package, other providers use the same hardware and the program works with them all. Running this program after inserting the stick forces it into modem mode.
You can make this happen automatically when you insert the stick with a suitable udev rule. Put this in /etc/udev/rules.d/10-local.rules (create the file if it does not exist) --- SYSFS{idVendor}=="12d1", SYSFS{idProduct}=="1003", RUN+="/usr/sbin/huaweiAktBbo" ,,, The vendor and product IDs may be different for your device; lsusb will show the correct values. Now the modem should appear as a modem at /dev/ttyUSB0 whenever you plug it in. Some people report success at 921600 bits per second (enabling you to use up your monthly allowance even sooner), but get it working at 460800bps before trying any tweaks. Back to the list ****** USB stick inaccessible on the desktop Q:: I am trying to get access to a USB memory stick. Grepping dmesg for USB results in the following
---
USB Universal Host Controller Interface driver v3.0
uhci_hcd 0000:00:1f.2: new USB bus registered, assigned bus number 1
hub 1-0:1.0: USB hub found
,,,
So it appears the USB system is recognised. However, I can't access the stick from the desktop. How do I mount the stick (without being arrested or ending up in casualty)? A:: While dmesg shows information about your USB interface, any mention of the device itself is suspiciously absent. It's often easier to look at the system log for this sort of information. This is usually /var/log/messages or /var/log/current, depending on which system logger is in use (you don't mention which distro you use so it's impossible to say for sure). Run --- tail -f /var/log/messages ,,, as root, before you plug in the USB stick, then watch the output as the stick is recognised.
You should see something like this
---
usb 7-5: new high speed USB device using ehci_hcd and address 16
usb 7-5: configuration #1 chosen from 1 choice
scsi14 : SCSI emulation for USB Mass Storage devices
usb-storage: device found at 16
usb-storage: waiting for device to settle before scanning
scsi 14:0:0:0: Direct-Access Generic USB Flash Disk PMAP PQ: 0 ANSI: 0 CCS
sd 14:0:0:0: [sde] 2007040 512-byte hardware sectors (1028 MB)
sd 14:0:0:0: [sde] Write Protect is off
sd 14:0:0:0: [sde] Mode Sense: 23 00 00 00
sd 14:0:0:0: [sde] Assuming drive cache: write through
sd 14:0:0:0: [sde] 2007040 512-byte hardware sectors
sd 14:0:0:0: [sde] Write Protect is off
sd 14:0:0:0: [sde] Mode Sense: 23 00 00 00
sd 14:0:0:0: [sde] Assuming drive cache: write through
sde: sde1
sd 14:0:0:0: [sde] Attached SCSI removable disk
sd 14:0:0:0: Attached scsi generic sg6 type 0
usb-storage: device scan complete
hald: mounted /dev/gigabyte on behalf of uid 1000
,,,
In this case, the device is recognised as /dev/sde with one partition, which is automounted. If you see no references to usb-storage, make sure the usb-storage module is loaded by examining the output from lsmod: --- sudo lsmod | grep storage ,,, If the module isn't loaded, try loading it with sudo modprobe usb-storage and inserting the device again. If the usb-storage module is loaded, it's very unusual for a device to be unrecognised. Try it on a different computer: Flash memory has a limited lifetime for writes and the FAT table is often the first place to stop working on a FAT formatted device, which could result in this behaviour. Back to the list ****** Switch back to Windows after installing Ubuntu Q:: How do I return to Windows XP after installing Ubuntu on my laptop? A:: Are you asking how to use Windows XP instead of Ubuntu, or how to remove Ubuntu and go back to Windows? The first question is most easily answered.
During installation, Ubuntu will have given you the option to resize your Windows partition and, provided you took this option, moved the Windows data over to install Ubuntu alongside it. It then added a boot menu to choose between the operating systems each time you start up. This menu is hidden with Ubuntu. There's a brief countdown while "Press ESC to enter the menu" is displayed on screen. If you don't press the key, it boots Ubuntu, so press Esc and select Windows from the menu. If you want to remove Ubuntu, use any partitioning tool to delete the Linux partitions and resize the Windows partition to fill the drive. You can do this with Partition Magic on Windows, or use the Ubuntu installation disc. Boot from the disc and run System > Administration > Partition Editor. From here you can delete the Linux partitions. Ubuntu uses two by default; a smallish swap partition and a large one for everything else. Make sure you only remove partitions that are marked as type swap or ext3 - any NTFS or FAT partitions are for Windows. This removes the Ubuntu data from your computer but leaves the Grub bootloader. You then need to boot from your Windows CD to remove this. Select the rescue option and run fixmbr. This will restore the disk's boot code back to the Windows settings and your computer will now boot straight into Windows when you power up. All of this assumes that you took the installation option to resize the Windows partition. If you told the installer to use the whole disk, it will have wiped your Windows installation from the hard drive. In this case, the only way to get Windows back is to reinstall it. Back to the list ****** Use iptables to set up a firewall Q:: I want to set up a firewall but I'm not sure where to start. The default firewalls in the distros I've tried are a bit basic - I need something with more control. I've heard that Iptables is the way to go, but it seems very complicated, with some arcane-looking rules.
Is there something that gives me decent control over what is and isn't allowed, but in a more accessible way? A:: All firewalling takes place inside the Linux kernel, using the netfilter modules. These actually do a lot more than firewalling, handling anything to do with routing, forwarding, blocking and tracking network packets. Iptables is the user space application that controls netfilter, and is usually used in conjunction with a file containing a series of rules that are applied to netfilter. It's possible to write the rules file with a text editor, and many people do, but it requires a decent knowledge of the various options and their consequences. Remember that computers do what you tell them to do, not what you want them to. It's possible to create a set of rules that leaves your computer open to attack while you believe it's locked down. That's where the various firewall front-ends come in; they enable you to specify your needs and create the Iptables rules for you. The rule files they create are then read by Iptables at startup and you can even create rules on one machine and transfer them to another. One popular firewall front-end is Guarddog (www.simonzone.com/software/guarddog). Guarddog works with zones, defined for the local computer and the internet to start with. You group computers or networks in these zones, so the first step may be to create a LAN zone for other computers on your network. If you have only one computer, the local and internet zones will be enough. Once the zones are defined, use the Protocols tab to specify what types of communication you allow to and from other zones. For example, you may want to allow NFS or SMB connections from the LAN so other computers can see your shared directories, but you almost certainly don't want this open to the internet. The protocols are grouped by category and the lower left pane shows a description of the selected protocol.
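For comparison, this is the sort of thing a front-end saves you from writing by hand - a minimal sketch (not a complete, audited ruleset) that drops incoming connections except SSH from a local network, assuming your LAN is 192.168.1.0/24:
---
iptables -P INPUT DROP
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -s 192.168.1.0/24 -p tcp --dport 22 -j ACCEPT
,,,
Multiply that by every service, protocol and zone you care about and you can see why a front-end is worth having.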
By default, everything from other zones to the local machine is turned off, so enable the services you need and click Apply. Now try to use services that you haven't enabled to see whether the results are as expected. The Logging tab controls writing of blocked and rejected packets to the system log. This can be useful when testing a setup but can also fill up the filesystem containing /var/log if overused. In the Advanced section you can disable the firewall, which is a good test if something doesn't work. If it starts working when you disable the firewall, you need to look at the firewall rules. You can also enable DHCP - useful if you're following the LTSP tutorial in this issue - and export a set of rules for use on another machine. The final tab, Port Reference, is useful to see what each port number generally handles. Back to the list ****** Looking for a Linux equivalent of the Windows Device Manager Q:: I work in a mostly Windows environment but am constantly striving to move to Linux; one thing I find quite frustrating is getting good visibility of what devices I have running and their status with regard to device drivers. I hesitate to say it, but Microsoft does a good job with its Device Manager. Is there something in Linux that provides the same sort of visibility? A:: The situation is rather different with Linux, because most drivers are included with the kernel, so there's not the same need to compare what is installed and running with what is available from various websites. As long as you keep your package manager up to date, it will inform you of any updates. There are various programs that will report on the status of your hardware, some generic and some specific to a distro. One of my favourites is lshw (http://ezix.org/project/wiki/HardwareLiSter), which is generally used in a console and gives a detailed listing of everything in the machine, from motherboard and CPU to USB devices.
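A couple of lshw invocations to try (run them as root to see everything):
---
lshw -short
lshw -short -class network
,,,
The first gives a one-line-per-device summary of the whole machine; adding -class network (or disk, display and so on) restricts the output to one type of device.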
The default output is plain text, but it can also generate HTML for viewing in a web browser or open a window where you can click on items to see more information. It has a number of options to limit the information given, such as restricting it to certain types of device or removing sensitive information like serial numbers from the output. A similar program is HardInfo (http://hardinfo.berlios.de) which displays plenty of information about your hardware and software in a GUI. This displays information in a tree view so you can zoom in on the specific details you need. There is a section showing the loaded kernel modules, so you can see which drivers your hardware is using. You may need to run these programs as root, or with sudo, to be able to read everything from your system. The main desktop environments have their own programs: the Gnome Device Manager and KDE's KInfoCentre, which provide similar information. Various distros also have their own variants of these programs: Ubuntu's Device Manager (which is probably closest to the Windows program, although it is a while since I used that), SUSE's Yast and Mandriva's Control Centre all provide hardware information. The SUSE and Mandriva offerings are integrated into their all-encompassing system administration programs, so they also have the option of configuring the hardware where appropriate. Back to the list ****** Compiling errors on Ubuntu - FlightGear and Toribash Q:: I installed Ubuntu and all went well. I was impressed with the user interface and was able to get networking, Nvidia support and printing up and running quite easily. Unfortunately that's as far as I got. I decided to install some software, starting with FlightGear. I unzipped the files and then tried to find the install instructions, or as I would do in Windows, the install Exe file. I eventually found some instructions, obviously written by a programmer, telling me how to compile the program. 
I followed the command line instructions to the letter, but only succeeded in generating errors. Not one to give up too soon, I followed the instructions for installing Toribash. The first part worked, but when I typed toribash_ubuntu7 at the command prompt it gave me an error saying "bash: toribash_ubuntu7: command not found". I would love to be able to dump, or at least sideline, Windows, but if it is this difficult to install a program on Linux, then I fear it will be some years yet. Given the enormous energy, intelligence and dedication the program writers have put in, could one of them not write a simple install.exe for Linux to give those who wish to put their toes in the Linux water a mechanism to do so? A:: One of the biggest challenges when trying a new operating system is "unlearning" the ways you currently do things. Linux is not Windows: many things are done differently, and software installation is probably the most extreme example of this. There are three main ways to install a package in Linux: compiling from source, downloading a package from the program's website (or a DVD) and installing through the distro's package manager. The middle option is the closest to the Windows approach, although it does not use executable files - instead the file is a package containing all you need, and is loaded with a package manager. If there is a Deb file available, install it with
---
sudo dpkg --install someprogram.deb
,,,
This works, but it suffers some of the same limitations as the Windows method. You have to revisit the website to see if updates are available, there may be conflicts with other installed software and you have no idea about the integrity of the package you have just downloaded. All of these are avoided by using the distro's package manager and repositories. A repository is a collection of packages that have been built and tested for your distro, and verified to be free of any known security vulnerabilities.
Packages are digitally signed, and verified by the package manager, to ensure you get only "clean" software. Not only is this the best way to install software, it is also the most convenient and includes almost everything you could want. Simply start Synaptic (in the case of Ubuntu), press the Search button to find what you are looking for, pick what you want to install and click on Apply. The package manager takes care of sourcing the packages it needs, including any dependencies (packages required by your package), downloading and installing them. It will also keep you informed of any updates as they become available. Some of the graphical package managers, such as SUSE's Yast, will also install from a package you downloaded or found on a DVD, but Synaptic does not currently do this, hence the need for the dpkg command given before. If you want to compile programs from source, you will need the build-essential package - install this from Synaptic. The Toribash error arose because Linux will only look for commands in a list of specific directories, which as a security measure excludes the current directory. To run a command located in the current directory, prefix its name with ./, as in ./toribash_ubuntu7. Back to the list ****** Repartition drive, keeping /home on a new, separate partition Q:: I run Ubuntu 7.10 and thought I would repartition my hard disk, and keep /home on a new separate partition, but without the encryption. However a number of issues arose and before I do the deed I would appreciate some advice if possible. It is not clear from any of the books I have read exactly how to ensure that the operating system knows where the new /home partition is. The instructions in the article apply to the situation where the encryption process has been completed. I thought that the answer might lie with fstab so I had a look at fstab (I've attached a copy) but found that the existing Linux partitions have a UUID entry, which is frankly incomprehensible.
The fstab entries for the two Linux partitions on the drive also seem to be commented out.
---
# /dev/sda3
UUID=ff773431-fb57-48b4-bb55-01da6902c372 / ext3 defaults,errors=remount-ro 0 1
,,,
If I run GParted from the System/Administration menu I cannot change the sizes of the Linux partitions - I think this is because they are mounted and it is more dangerous to edit mounted partitions - however I have downloaded a Live CD version of GParted that boots the PC and allows any of the partitions to be edited. A:: You are on the right lines with trying to add the new /home to /etc/fstab. The entry would normally be something like
---
/dev/sda5 /home ext3 defaults 0 2
,,,
but, as you have discovered, Ubuntu uses UUIDs instead of partition numbers. The commented lines before them are simply to show you which partitions they applied to at the time of installation. A UUID is a unique identifier applied to a filesystem when it is created, one that doesn't change for the life of the filesystem. If you were to shrink /dev/sda2 and add another partition between it and the current sda3, that would change to sda4 and your fstab would no longer work, but the UUID stays the same, so the Ubuntu-style fstab would still work. You have a number of choices here when you add your new home filesystem. You can do it the way you know and use the traditional /dev/xxx method, knowing that you will need to edit fstab if you move partitions. Or you could do it the Ubuntu way by using the vol_id command to get the UUID of your new partition.
---
$ sudo vol_id /dev/sda5
ID_FS_USAGE=filesystem
ID_FS_TYPE=reiserfs
ID_FS_VERSION=3.6
ID_FS_UUID=e242a0ee-f07e-45f2-a104-c8603ccfbe04
ID_FS_UUID_ENC=e242a0ee-f07e-45f2-a104-c8603ccfbe04
ID_FS_LABEL=
ID_FS_LABEL_ENC=
ID_FS_LABEL_SAFE=
,,,
Here you can see the UUID of the filesystem and you can paste it into fstab.
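Pasted into /etc/fstab, a new home entry using that UUID would look something like this (the mount point and mount options here are typical choices for a home partition, not taken from your file):
---
UUID=e242a0ee-f07e-45f2-a104-c8603ccfbe04 /home reiserfs defaults 0 2
,,,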
There is a third option, and vol_id's output gives a clue - the filesystem label, which is the method preferred by Red Hat/Fedora. This has the advantages of UUID in terms of not changing when partitions are added, but is also more readable. All you need to do is give your partitions labels with
---
e2label /dev/sda5 HOME
,,,
then edit /etc/fstab to contain
---
LABEL=HOME /home ext3 defaults 0 2
,,,
You can change the label of an existing ext3 filesystem with e2label without disturbing the contents, so you can label your root partition and amend fstab. If you are using a filesystem other than ext3, they all have their own labelling tools; you can even label your swap partition with "mkswap -L ...". You are correct about GParted not working on mounted partitions, but you don't need a separate Live CD; you can boot from the Ubuntu install disk and run it from there. Back to the list ****** Run Sherman's Aquarium as a screensaver in Linux Mint Q:: I am running Gnome on Mint 4.0, which is based on Ubuntu 7.10. Is it possible to use Sherman's Aquarium as a screensaver? I've installed it and it runs as an applet on the Gnome panel and I can manually start a larger version from the command line, but it doesn't appear as a screensaver in the screensaver list. It looks as though the list is generated with some XML config files, but I don't want to start fiddling without an idea of what I'm doing. A:: Sherman's Aquarium works with XScreenSaver but not Gnome-screensaver. XScreenSaver is not installed by default with Mint Linux, so the first step is to install it. Installing Sherman's Aquarium in the usual way (via the Synaptic package manager) installs the program but does not add it to the list of screensavers used by XScreenSaver, possibly because it is considered a hack. To add it to the list, you need to edit the file .xscreensaver in your home directory.
If this file does not exist, run xscreensaver-demo; this will create the file with the default settings. Now edit the file and find the line that says programs:. Add a line after this that contains
---
"Sherman's aquarium" shermans -root \n\
,,,
Then run xscreensaver-demo to configure the screensaver. Finally, you need to make sure XScreenSaver runs when you start a session, and disable Gnome-screensaver. Go to System > Preferences > Sessions and press Add, type in a suitable name and description and set the command to
---
xscreensaver -nosplash
,,,
Disable the Gnome-screensaver, log out and in again and you should see a very fishy desktop when the screen blanks. Back to the list ****** Sendmail server load spikes Q:: I seem to be having some serious problems with Sendmail. Earlier this evening Sendmail seemed to be causing my server load to spike up in the 4.0 or greater area. When I ran the top command there were several instances of Sendmail going and they were the top processes running. It seemed to be affecting the server for an hour or so. I wish I'd taken a snapshot of the top command at the time. My first question is, what do I look for in my mail log file? What sort of things should raise flags? One issue I have is that I get a lot of spam. Lots of mail is addressed to users that don't exist on my server, from hosts logged like this:
---
configured.unknown.al.charter.com [192.168.10.20] (may be forged)
,,,
Also, around the time of the problem I found records like this with a strange URL in them:
---
Nov 12 16:16:13 server1 sendmail[18756]: NOQUEUE:otherdomain.com.br [192.168.10.30] did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA
,,,
My second question: when I check my running processes, what sort of Sendmail commands should be running? Is there anything odd about this?
---
20334 ? S 0:00 sendmail: server 27.domain.com [192.168.10.40] child wait
20336 ? S 0:00 sendmail: ./iAE2ECt20336 27.domain.com[192.168.10.40]:DATA
,,,
Any thoughts you have on troubleshooting the maillog file would be greatly appreciated. Thanks in advance. A:: I don't believe the server load you are experiencing is due to spam coming into your system (successful or failed), unless it's a targeted attack. A CPU load of 4.0 is really quite high, and a mail server, unless under attack or extremely busy, should not use so many resources. Going through your mail log by hand can be painstaking even if you don't receive very much traffic. Logwatch at www.logwatch.org is able to go through your mail logs and give you summaries of what is happening. You should be able to tell from here if you have one serious culprit causing you trouble. For a more detailed analysis you could also install Anteater from anteater.drzoom.ch. Although it can be tricky to set up, it can give you some very readable reports. The entries you've pointed out are more than likely spam but again, so much mail is these days. I'd highly recommend setting up some level of spam filtering on your system. At the very least subscribe to some RBLs: Spamhaus is a good option. Your second log excerpt is a connection to your server on the SMTP port that was not used to send a message. This could be a port scan or, if there are many of them, a Denial of Service attempt (although not a terribly effective one). The last entry you mention is just Sendmail processing a message. Back to the list ****** Get Maxon BP3-EXT modem working in Linux Q:: I have used Microsoft for about 25 years and I must confess Linux is vastly different and it really takes some coming to grips with. I still consider myself on "L" plates. I am using a Maxon Broadband Wireless Modem Model BP3-EXT to access the internet using Windows XP on a desktop PC. My ISP, Bigpond in Australia, has informed me that it does not support Linux operating systems. Can I connect to the internet?
A:: The answer to your question is "yes", but I suspect you also want to know how. The modem you have is a USB serial modem. Despite it using wireless to connect to the internet, it appears to the computer as a normal dial-up modem. Getting it to work in Linux involves two fairly simple steps. The first is to load up the driver, and make sure it is loaded every time you boot. The other step is setting up the dialler software to connect. The modem needs the USB serial driver, which is included with all distros, but it needs to be configured to work with your modem and for that you need its vendor and product ID codes. You can find these by running lsusb in a root terminal or by examining the output from dmesg (which prints out kernel messages). Open a root terminal, plug in the modem then run
---
dmesg | tail -n 20
,,,
to see the last twenty lines of kernel messages and look for something like this
---
usb 2-4.4: new full speed USB device using ehci_hcd and address 8
usb 2-4.4: new device found, idVendor=16d8, idProduct=6280
,,,
Now, using the values from dmesg (which may not be the same as given here), load the driver:
---
modprobe -v usbserial vendor=0x16d8 product=0x6280
,,,
Running dmesg again should show
---
usbcore: registered new interface driver usbserial
drivers/usb/serial/usb-serial.c: USB Serial support registered for generic
usbserial_generic 2-4.4:1.0: generic converter detected
usb 2-4.4: generic converter now attached to ttyUSB0
usbserial_generic 2-4.4:1.1: generic converter detected
usb 2-4.4: generic converter now attached to ttyUSB1
usbserial_generic 2-4.4:1.2: generic converter detected
usb 2-4.4: generic converter now attached to ttyUSB2
,,,
which means you have successfully loaded the drivers and enabled the modem. You can make this happen on boot by adding usbserial to /etc/modules.preload and the following line to /etc/modprobe.conf:
---
options usbserial vendor=0x16d8 product=0x6280
,,,
Now switch from root back to your normal user.
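Before you leave the root terminal, you can double-check that the driver and its options are in place (a quick sanity check using the standard module tools):
---
lsmod | grep usbserial
modprobe -c | grep usbserial
,,,
The first confirms the module is loaded now; the second shows the configuration, including the options line, that will be applied when it loads at boot.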
As you can see from the dmesg output, the modem appears as three devices. The one you want is /dev/ttyUSB1, so fire up Kppp and set up a new connection using /dev/ttyUSB1 as the modem device, "*99#" as the number to dial and the login details given by your ISP. When setting up the modem, set Flow Control to Hardware and turn off "Wait for dial tone before dialling". Back to the list ****** Find a card reader compatible with Linux Q:: I am thinking of buying a Canon S5IS camera that uses SD memory cards. However I am told that I need a "card reader". My computer has lots of free USB 2.0 slots, but I don't know what reader is compatible with Linux. (I'm running OpenSUSE 10.3.) A:: A far more difficult question to answer would be "which card readers do not work with Linux?". Some people have reported problems with the 99-in-one type card readers. This is usually caused by them running a home-built kernel stripped of all the superfluous options. These devices often need the SCSI_MULTI_LUN option enabled, which will be the case with any default distro kernel, because each card port on the reader is seen as a separate device. If you only want to use it with one type of card, the single-format readers that look like a USB pen drive are more convenient; you just plug the card into the slot and the reader directly into the USB port. You may not need a card reader; Gphoto2 is able to download pictures from most cameras via their USB cable. Your camera is not in their list of supported devices, but if it is a newish camera, this should change soon. If you use KDE, type camera:/ in the Konqueror location bar to see a list of connected cameras and the photos on them. Having said that, I find a card reader faster and more convenient, especially as it means I can put a card in my laptop and carry on shooting with another.
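If you do build your own kernels and want to check whether this option is enabled, you can usually inspect the config file for the running kernel (the file location varies between distros):
---
grep SCSI_MULTI_LUN /boot/config-$(uname -r)
,,,
A stock distro kernel should report CONFIG_SCSI_MULTI_LUN=y.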
Back to the list ****** Internet connection failure using IPv6 Q:: I have tried several distros including Fedora, Mandriva and SUSE, and I always have the same two problems. The first problem is that Firefox rarely manages to connect to the internet, giving a 'timed out' message, which I also get if I try to update the system, while Konqueror has no problems. The second problem is that after anything from a few minutes to an hour or two, the screen will freeze and not respond to the mouse pointer, which is still able to move. I am using an Nvidia graphics card but had tried unsuccessfully several times to install the proprietary driver for it. I tried Ubuntu Feisty after reading that it had a simple method of installing the Nvidia driver. This worked very easily and now my screen no longer freezes. I have also found that Firefox works on Feisty too, and so do updates. Then I decided to try Ubuntu Gutsy on another partition. I found that the Nvidia driver would not install on Gutsy as the internet connection could not be made, even after trying a different download server, and updates would not work either. My problem with Firefox also returned, although Konqueror still worked with no problems. Is this a problem with my hardware or settings, and why does Ubuntu Feisty work? A:: Your internet connection problem is almost certainly caused by your browser trying to talk to your modem or router using IPv6. If the connection does not work with IPv6, the system is supposed to fall back to using the older IPv4. However, some routers do not do this and get confused when dealing with a client that talks IPv6 and an upstream connection (your ISP) that does not. There are three ways to deal with this: fix the router, disable IPv6 at system level or disable it in the browser. It would be worth checking the router manufacturer's website to see if there is a firmware upgrade available. The problem may completely disappear after doing this. 
If not, you can tell your computer not to use IPv6 so Firefox, or any other program, doesn't even try to communicate with the router in this way. You disable IPv6 by adding or editing these two lines in your modules configuration file. This is /etc/modprobe.d/aliases in Ubuntu, but can be /etc/modprobe.conf or /etc/modules.d/aliases in other distros.
---
alias net-pf-10 off
alias ipv6 off
,,,
These should replace any existing lines that refer to IPv6. The third option, which should only be used when neither of the others is possible, is to disable IPv6 in Firefox, which won't help with anything else. Type about:config into Firefox's location bar, then IPv6 in the Filter box. If network.dns.disableIPv6 is set to false, right-click on it and pick Toggle from the menu to change it to true. Most distros now have the Nvidia drivers in their repositories, but installing them is fairly simple if you download the package from www.nvidia.com. Press Ctrl+Alt+F1 to switch from the desktop to a console, log in as root (or run sudo -i on Ubuntu) then execute these commands
---
init 3
sh NVIDIA-Linux-XXXX-pkg1.run
nvidia-xconfig
init 5
,,,
Debian and Ubuntu users should replace the first and last commands, which turn the X server off and on, with
---
/etc/init.d/dm stop
/etc/init.d/dm start
,,,
The Nvidia installer bails out if X is running, hence the need to switch to a console and disable X. Part of the installation process may involve compiling a module for your kernel; if this fails, make sure the build-essential package is installed. Nvidia-xconfig modifies your X configuration file to use the new driver. Back to the list ****** HomePlug and play Q:: I have a Devolo dLAN HomePlug 85Mbps Ethernet Starter Kit. Devolo provides a software utility to enable encryption between each device, but I don't know how to install the provided software. A:: This is a source package: you need to compile it, but first there are a couple of requirements you should install from Synaptic.
Dlanconfig depends on libpcap, which is installed by default in Ubuntu, but to compile against it you also need the libpcap-dev package. You also need the build-essential package, which includes the compiler and other tools needed to build software from source. Once these are installed through Synaptic, open a terminal and change to the directory containing the file you downloaded from www.devolo.com, then run
---
tar xf dLAN-linux-package-XX.tar.gz
cd dLAN-linux-package-XX
./configure
make cfgtool
sudo make install-cfgtool
,,,
The first two lines unpack the archive and change to the directory containing its files. The next two build the config tool from source and the last one installs the files to your system. If you decide to remove the software at any time, repeat the process, but make the last command
---
sudo make uninstall
,,,
Now you can start the program with
---
sudo dlanconfig eth0
,,,
and change your passwords. You will need the devices' security codes, which cannot be read when they are plugged in, so make a note of these first. Run dlanconfig and take the option to change the remote password, which also changes the one for the local device. If you change the local device first, you will not be able to connect to the remote one to change that, because the passwords will no longer match. If you have the USB version of the dLAN devices, you need to change the last two lines of the build process to
---
make usbdriver
sudo make install-usbdriver
,,,
to install that, or simply
---
make
make install
,,,
to install both. Building the USB driver requires the kernel sources, so use Synaptic to install the linux-source package first. Back to the list ****** Use an iPod with Linux Q:: I've just bought an iPod nano and seem to be having some problems with it using Linux. Every time I try to put music on it, it goes wrong. Banshee and Gtkpod fill all the space up on my iPod without putting anything on it and it doesn't mount in Amarok.
This is very frustrating, as I have to put all my songs on my iPod using the family computer, which runs Windows. My computer is running Ubuntu 7.10. A:: There are several options for transferring music between a Linux system and an iPod. Banshee and Amarok both include the functions within a media player, and Gtkpod is a dedicated program for the job. There are ways to integrate it with other software, such as the fusepod Fuse filesystem to mount an iPod and the Kpod KIO slave for KDE. Although it's possible to mount the iPod as a USB mass storage device, simply copying files to the device does nothing but fill it up. You have to use a suitable tool to manage the music on Apple's devices. The simplest is probably Gtkpod. Start the program and go to Edit > Edit Repository/iPod Options. Set the mount point to wherever your iPod is automounted and select the correct model number. You can find the model number on the device, or on the first line of the file iPod_Control/Device/SysInfo. Now you can add your music collection to Gtkpod and copy tracks to the iPod by dragging them from the local window on to the iPod's entry in the left-hand pane. You must then click Save Changes to write the files to the iPod and, importantly, update the iPod's database. The procedure is similar with Amarok. First, tell it about your device in the main configuration window, then go to the devices tab in the main window and click on the iPod icon above it to set the correct model. Now, you can drag tracks or entire playlists to your iPod. They will appear in the Transfer pane at the bottom-left of the window, until you press the Transfer button. This step is necessary to copy the tracks and update the database. It's likely to be the lack of database update that's causing your problems, and this can be caused by setting the wrong iPod model. Or you may prefer Banshee, which can also copy to iPods. This is the easiest of the lot to set up.
Just tell it where your music files live, plug in the iPod and drag tracks from the library. To copy the files and update the iPod's database, right-click on the iPod's name in the left-hand pane and select Synchronise iPod and Save Manual Changes to copy over the files you dropped on the device. Alternatively, use the Synchronise Library option to copy your collection to the iPod. To access the device directly, there's a Fuse filesystem for iPods available from http://fusepod.sourceforge.net and a KDE KIO slave from http://sourceforge.net/projects/kpod. With the latter, you type ipod:/ in Konqueror's location bar to access the iPod directly, and copy files by dropping them on the Transfer folder. Then go into the Utilities folder and run Synchronise to write the changes. Back to the list ****** Linux Mint installation: part of hard drive inaccessible Q:: I installed Linux Mint and, being new to Linux, followed the instructions about partitioning. However I was installing on a laptop with Windows already taking up two partitions therefore when I partitioned I found that (only being allowed four partitions or something) I am unable to access a large chunk of my hard drive. I dedicated 10GB to / and 512MB to swap, but the partition editor in Mint won't let me do anything. A:: A PC hard disk is limited to four partitions, one of those wonderful limits from the days when people still thought it was fine to say "no one will ever need more than..." The solution is a kludge, but one that works very well, to make one of those four primary partitions an extended partition, which can act as a container for so-called logical partitions. In this way, it is possible to have a lot more partitions - as many as you have space for, in fact. Your problem is that you have already used up your four primary partitions and so have nowhere to create an extended partition. 
The solution is to remove one of your existing partitions (the one next to the unused space); then all of that space can be used for logical partitions. If your partitions are in the order you gave them - Windows, Windows, Linux root, Linux swap and then the unused space - you only have to remove the swap partition to be able to make use of all of the space, and you can do this while the system is running. Open a terminal and run
---
sudo swapoff -a
,,,
to disable your swap, then run the partition editor as you have done before. Delete the swap partition and you should now find that you are able to add extra partitions. You don't have to worry about the primary/extended/logical nuances; just tell GParted to create a partition. If it asks whether you want to create a primary or logical partition, answer logical and it will take care of creating an extended partition to hold it. The first thing to do is create a new swap partition to replace the one you just deleted, which will be called sda5 - logical partitions are always numbered from five, no matter how many primary partitions you have. After you have finished editing the partitions, you will need to enable your new swap partition. Go back to your terminal and run
---
sudo mkswap /dev/sda5
sudo gedit /etc/fstab
,,,
Find the line that refers to swap and replace the UUID=xxxx part with /dev/sda5 (or whatever your new swap partition is called). When you reboot, you will still have a swap partition and another 30GB of disk space to play with. Back to the list ****** Panels and taskbars missing in Xubuntu (Xfce) Q:: I am new to Linux, having loaded Xubuntu. Since loading, this has been upgraded by the inbuilt facility and the Grub screen now tells me that I am loading 'ubuntu 7.10, kernel 2.6.22-14-generic'. My problem is that I have lost the panels, or taskbars, from the desktop and can therefore no longer reach and activate the various applications.
This occurred once before, but at that point I was still using the Xfce desktop and so a right-click brought up the Applications menu and I was able to change settings. This time I had changed to a Gnome desktop and the right-click only enables me to bring another icon on to the desktop or a few other unhelpful alternatives. It would seem to me that if I could reactivate the Xfce desktop from a terminal I could get a useful computer back. A:: There are two separate questions here: how to switch back to Xfce and how to restore the Gnome panel. The first is the simplest to answer: at the login screen, click on 'Session' below the login/password box and select Xfce as your desktop. You will now log into the Xfce desktop you had before, after having the option to make this the new default. Gnome is not installed with Xubuntu and is not included in an update either, so you must have installed Gnome yourself at some time. While it is possible to delete panels, Gnome will not normally allow you to delete the last one - ie, there must always be one panel running. To check that gnome-panel is running, and restart it if necessary, run gnome-panel from a terminal. This should give you an error dialog about a panel already running. You can kill this panel process, forcing Gnome to start a new one, with
---
killall gnome-panel
,,,
Your panel may now reappear - if it doesn't, the panel is either hidden, invisible or of a very small size. To change the properties of your panel, even when you cannot see it, run gconf-editor in your terminal. Navigate to Apps > Panel > toplevels > panel_0. From here you can change many of the settings for the panel, even if you cannot see it. Click on an item in the Name column to see a description; click in the Value column to change it. Once you have got your panel back, you may find that it is empty, in which case you will need to reinstate the applets you need.
Right-click on the place in the panel you want an applet to appear and select "Add to panel..." To add the main application menus, choose Menu Bar (not Main Menu) from the bottom of the list. Back to the list ****** Add an entry for Fedora 8 to Grub Q:: Under my vanilla Fedora Core 5 installation, the new Fedora 8 installation shows up as two partitions: /dev/hda7 and 8 under both fdisk and /dev, with none of the /dev/VolGroupXX/LogVolXX stuff I've been finding described on the net. I'd like to add an entry for Fedora 8 to my grub.conf file so I can boot it. I have been using Fedora Core 5 happily for a while, and it contains all my customisations and data. A:: Fedora Core 5 and Fedora 8 use a similar LVM setup, and this is the source of your problem. Both distros name the first volume group VolGroup00. LVM uses names to distinguish between volume groups, and it cannot cope with two groups having the same name. As a result, it ignores the second one, which is why you cannot see its contents. The solution is to rename one of the groups, but you cannot rename the Core 5 group, because that is in use, and you cannot rename the other one, because the system cannot see it until the name is changed. The best way around this is to boot from a recent Live CD distro like Knoppix to give unfettered access to the volume groups. When Knoppix boots, open a terminal and run su (no password needed) to become root. Later versions of the LVM tools can identify different volume groups with the same name. Run vgdisplay and you'll see two groups both called VolGroup00, each with a different UUID. You can probably tell which volume group belongs to which disk from the sizes, so rename the Fedora 8 group with
---
vgrename xxxxxxxxxx Fedora8
,,,
replacing xxxxxxxxxx with the UUID from vgdisplay. You could now reboot into Core 5 and see both sets of logical volumes, although you will need to edit /etc/fstab on the Fedora 8 root filesystem to match the new name.
You would still have a problem if you added another Fedora disk later, so it is best to rename both volumes now. If you do this, you will need to make some changes to boot with the changed name. First edit the boot menu with
---
mount /dev/hda7
joe /media/hda7/grub/menu.lst
,,,
to mount the boot partition and load the menu into an editor. Change the VolGroup00 reference and press Ctrl+K X to save and exit. Now mount the root filesystem and edit /etc/fstab with
---
vgchange -a y
mkdir /media/root
mount /dev/FC5/LogVol00 /media/root
joe /media/root/etc/fstab
,,,
The first line activates the renamed volume groups. Change the two instances of VolGroup00 to FC5, or whatever you named the volume group. Press Ctrl+K X to save and reboot. Of course, if you were feeling particularly lazy, you could reinstall Fedora 8 and set a different volume group name during installation. Now that you have non-conflicting volume groups, you can add an entry to Core 5's boot menu to pass control to the Fedora 8 menu.
---
title Fedora 8
root (hd1,4)
chainloader +1
,,,
will pass control to the bootloader on /dev/hdb5. I would also recommend setting up a separate home partition, so that future upgrades will be possible without losing your personal data and settings. Back to the list ****** Debian without internet Q:: I am new to Linux and am currently learning the basics using a Debian-based system, but have problems installing new programs. I do not have internet access at home, but use the computers at the local public library (which run on a Windows system) and use a memory stick to transfer data to my computer. I have tried installing Linux programs I have downloaded, using the package manager, but this doesn't seem to work. I also have the same problem when trying to install programs from DVDs. I have looked through the books on Linux, but cannot find anything to help. Could you possibly tell me how to proceed to install a program saved on memory stick or from a DVD?
A:: Debian's dpkg package manager, which is the low-level program behind graphical managers like Synaptic, is able to install directly from package files. To do this, use the following command from a terminal --- sudo dpkg --install somepackage-1.2.3.deb ,,, if you are using Ubuntu or one of its derivatives, or --- su dpkg --install somepackage-1.2.3.deb ,,, for any other Debian-based distro. It is possible to install several packages at once, either by giving all their names on the command line or by passing the name of the directory containing them, as in --- sudo dpkg --install --recursive /media/usbstick ,,, However, you still need to know which files to download, so there is another option. Run Synaptic, mark the packages you wish to install and use the File > Generate Package Download Script menu item. This creates a shell script to download the files you need. Although you cannot use this script directly on most Windows computers, you can copy and paste the URLs from the file into your download software and put the downloaded files on your USB stick. Then plug it back into your home computer, run Synaptic and select the File > Add Downloaded Packages menu item, go to the directory containing the files you downloaded, click Open and Synaptic will install them for you. Updating the lists of available programs is a little more tricky, but possible. Look in /etc/apt/sources.list and you will see a line like this for each source --- deb http://gb.archive.ubuntu.com/ubuntu/ gutsy main restricted ,,, Using this example, browse to http://gb.archive.ubuntu.com/ubuntu/dists/gutsy and go into the main and restricted directories. In each of those you will find a binary-i386 directory; download the Packages.bz2 and Release files in there and save them using the full path with each / changed to a _, as in: --- gb.archive.ubuntu.com_ubuntu_dists_gutsy_main_binary-i386_Packages.bz2 ,,, Copy each of these to your USB stick.
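The renaming is mechanical, so if you have several sources it can be scripted. A minimal sketch using a gutsy/main URL as an example (the sed expression simply drops the http:// scheme and turns every / in the remaining path into a _):

```shell
# Turn a repository file URL into the filename apt expects to find
# in /var/lib/apt/lists/: strip 'http://', replace each / with _.
url="http://gb.archive.ubuntu.com/ubuntu/dists/gutsy/main/binary-i386/Packages.bz2"
listname=$(echo "$url" | sed -e 's|^http://||' -e 's|/|_|g')
echo "$listname"
```

Use mv to rename each downloaded file to the name this prints before copying it onto the stick.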
Take it home and copy all the files into /var/lib/apt/lists/, as root, and unpack the .bz2 files. It is easiest to do this from a terminal with --- cd /var/lib/apt/lists/ sudo cp /media/usbstick/*.bz2 . sudo bunzip2 *.bz2 ,,, Now fire up Synaptic and it should have all the latest versions available. It's a bit of a fiddle, but the system was really designed for use with an internet connection. Back to the list ****** Memory testing Q:: Some distros, when they boot up, have a menu list with install options that sometimes include a hardware check utility like the Memtest app. Is it possible to have that code and the program boot from a hard disk partition, and how would one go about it? A:: This is indeed possible, and often very simple. Some distros have a package for Memtest86 or Memtest86+, a fork of the original project. In such cases, installing from the package manager will usually add a bootloader menu entry and everything is done for you. If you want to install manually, go to either www.memtest86.com or www.memtest.org, depending on which variant you want to try, and download the pre-compiled version. In the case of Memtest86, this is marked as "installable from Windows and DOS" but it can also be installed from Linux. Unpack the archive and copy the .bin file to /boot - you will need to be root to do this. Then, also as root, edit /boot/grub/menu.lst (some distros use /boot/grub/grub.conf) and add one or both of the following sections, depending on which variant you installed. --- title memtest86 kernel /boot/memtest.bin title memtest86+ kernel /boot/memtest86+-2.01.bin ,,, If you have a separate /boot partition, you can omit the /boot part from these, for example --- kernel /memtest86+-2.01.bin ,,, If your distro uses Lilo instead of Grub, add either of these sections to /etc/lilo.conf --- image=/boot/memtest.bin label=memtest86 image=/boot/memtest86+-2.01.bin label=memtest86+ ,,, Don't forget to run /sbin/lilo after changing lilo.conf! 
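If you would like to rehearse the manual Grub steps before touching the real /boot, you can run them against a scratch directory first. A sketch (fakeboot and the 2.01 version number are stand-ins - substitute /boot and the file you actually downloaded, as root, when doing it for real):

```shell
# Rehearsal in a scratch directory: copy the binary into a stand-in
# /boot and append the Grub stanza, then check that it landed.
mkdir -p fakeboot/grub
touch memtest86+-2.01.bin            # stand-in for the downloaded binary
cp memtest86+-2.01.bin fakeboot/
printf 'title memtest86+\nkernel /boot/memtest86+-2.01.bin\n' >> fakeboot/grub/menu.lst
grep -c 'memtest86+' fakeboot/grub/menu.lst
```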
Back to the list ****** Emailing forms from a Linux webserver Q:: First off, let me give you a clear warning. I'm a new convert from the Microsoft camp, so you'll have to go easy on me. Well, new convert isn't entirely accurate: I still use mainly Windows 2000 servers but I'm trying to get as much moved to Linux as possible. It's not been an easy ride but it has been deeply rewarding, both in terms of bettering myself and financially. On my Windows servers with IIS I can get my forms to send me emails right out of the box. My pages are very simple HTML but it's just the form that's getting me. I don't mind using something other than FrontPage if I need to. Is there a Linux equivalent to FrontPage forms? A:: Welcome to Linux. I'm sure you'll find that as your confidence and ability grow you will gain even more from the world of Open Source! Most Linux administrators and developers will tell you to stay away from FrontPage's extensions as they're so proprietary. They won't work on anything but IIS unless you find some unofficial way of supporting them. There is a version of the FrontPage Extensions available for Linux, but it's not made or supported by Microsoft, and even then it doesn't work on every platform and every version of Apache. You'll be much better off with native code anyway. There are two very popular options when it comes to passing data from form to email under Linux with Apache. One is formmail.pl (www.scriptarchive.com/formmail.html) and the other is formmail.php (www.dtheatre.com/scripts/formmail.php). formmail.pl is probably the more widely used and requires Perl. Though its reputation has been slightly tarnished by security issues, it's still a good option - just make sure you get the newest version. The newer formmail.php does not need to run as a CGI script as it's PHP and will work just fine from your HTML document root. Both come with examples so integration should be pretty easy.
Back to the list ****** How to hot-swap SATA drives in Linux Q:: I have several 500GB SATA hard drives with all my movies on them. Instead of putting them on a server and running them across the wire, I have chosen to have a removable tray in my media computer. The only problem is that I must shut down the computer to change hard drives. I want to be able to hot-swap disks, but I'm not aware of a way to do that under Linux, as the drive tables are loaded when the kernel is brought up. A:: What you are looking for is hotplugging on SATA, which is dependent on the hardware in two areas. The drive caddy system you use must be hot-swappable; most are, but check before you buy. Most caddies have a lock, although some use a sliding catch rather than a key; this is often necessary because it not only holds the drive in place but also controls the power to it. Unlocking powers the drive down so it is not still spinning when you physically yank it out. Secondly, your SATA controller must handle hot-swapping. It must be able to recognise when a drive has been disconnected or connected and communicate this information to the operating system. Provided that happens, the OS should handle hot-swapped SATA drives much the same as it does USB or FireWire drives. Identifying suitable controllers is not so easy. I've had complete success with Intel ICH8 controllers running in AHCI mode, which seems to be the most important factor. If your SATA controllers are AHCI compatible (there is often a BIOS option to enable or disable this if they are), you should be OK, but search Google for your particular controller(s) first. Watch the system log with --- tail -f /var/log/messages ,,, while pulling and replacing the drives. You should see various messages relating to the disappearance and reappearance of the drive. If this ends in success you are ready to use them, although there is one more factor you may need to consider.
If you want the drives to be automounted and your automount system uses pmount to do the mounting (pmount allows mounting by a normal user without an entry in /etc/fstab), you may need to edit /etc/pmount.allow. If the drives are seen as non-removable, which SATA hard disks usually are, pmount will refuse to mount them unless you add the device name to /etc/pmount.allow, for example: --- echo '/dev/sdb1' >>/etc/pmount.allow echo '/dev/sdc[123]' >>/etc/pmount.allow echo '/dev/sdd*' >>/etc/pmount.allow ,,, The first allows one particular partition to be mounted by pmount, the second example permits three specific partitions on a drive, while the third lets through every partition on a drive. Note the use of single quotes to stop the shell interpreting the wildcards. Back to the list ****** Ubuntu locking up running games Q:: I have Feisty Fawn up and running reasonably well on an old PC. I had no problem going through most of the programs except when I tried the games. The only one that gave me grief was a card game that I play a lot on my XP that uses two banks of four cards. Every time I tried to play, it would freeze the computer at some stage, always. I thought perhaps it would be OK if I reloaded Feisty, but I can't try any of the Live disc programs, and I can't seem to get out of Ubuntu. How do I get back to a clean hard drive to start again? I am lost when it comes to uninstalling; I don't seem to have this problem with Windows. I have not yet been able to connect to the internet so I have lots of files that need a connection before they can be used; I don't want to keep swapping my broadband between the two computers. I must be doing something wrong somewhere, but I feel I must persevere. A:: Reinstalling the whole operating system to deal (sorry) with one card game is not the solution. All you'll do is spend an hour getting back exactly where you are now.
The answer is to search Google or the Ubuntu forums, or ask on the Linux Format forums at www.linuxformat.com, giving the details of the game involved. If this game was included with Ubuntu, the fix is most likely quite simple. It sounds like your computer is set to boot from the hard drive before the CD/DVD, so you need to explicitly tell it to boot from the Live CD. Most computer BIOSes have a boot menu - press a key at boot time and it asks you which drive you want to boot from. If not, go into the BIOS when you power on the computer and change the boot order so that the CD/DVD drive is before the hard drive. A third alternative is to use Smart Boot Manager. Download it and copy it to a floppy by opening a terminal and running --- sudo dd if=sbootmgr.dsk of=/dev/fd0 ,,, with a floppy disk in the drive (dd is used here because a shell redirection with > would be performed by your unprivileged shell, not by sudo). Now reboot and choose 'CDROM' from the Smart Boot Manager menu. If you're using a DVD, don't worry - you still use CDROM. You don't say what type of broadband you have, but if your modem has an Ethernet port you should be able to connect it directly to the computer with a standard network cable, then set Ubuntu to obtain an address automatically (this is the default, so you shouldn't need to change anything). To use your internet connection with both computers, you need a router. Either a router that connects to your cable modem or, if you have ADSL, a combined modem/router that connects to your phone line; each computer is then plugged into the router. These are very cheap nowadays and make connecting computers to the internet and each other extremely simple. Back to the list ****** External drive booting Q:: I am new to Linux and am exploring all the distros. I have XP dual booted with Ubuntu and I then added a second drive installing SUSE 10.3 on to that. The Ubuntu install created a boot menu including Windows, and when I installed SUSE it installed its own boot menu listing both Windows and Ubuntu.
When selecting Ubuntu from the SUSE menu I am returned to the Ubuntu menu, which works as before. I now want to boot into a 250GB USB drive holding PCLinuxOS. When I tried a direct install of PCLinuxOS, it installed its own boot menu which deleted the SUSE menu and only listed PCLinuxOS and Windows. Is there an easy way to add OSes on the external drive to the SUSE boot menu or quickly create a new one to include all OSes, so as to boot any selected partition, internal or external? I am not very good with the command line, having just migrated from Windows. I understand partition labels and numbering, having read various Grub tutorials, but don't want to trash my existing boot system unnecessarily. My second drive is recognised by the BIOS. A:: Normally a distro installs its bootloader to the Master Boot Record (MBR) of the first hard disk. When the computer boots, it sees the bootloader here and passes control to it. The problem with this is that each installation overwrites the previous one, as you have found out. Once you have a working boot menu that you like, you can prevent any other distro from overwriting it by taking the option to install that distro's bootloader to the root partition, which may be hidden in some advanced section of the installer. This means that the distro's bootloader is not written to the MBR and the original bootloader is untouched. Instead, it goes to the start of the main system partition used by that particular distribution. Now you can edit the original distro's boot menu to add an option that passes control to your new distro. First you need to know where the root partition is, which you should have seen during installation. If you are installing to the first partition of an external drive, it is probably /dev/sdb1. Linux designates hard drives as sd (or sometimes hd) followed by a letter - 'a' for the first drive and so on, and a number for the partition, starting at one.
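Grub counts both drives and partitions from zero, so the translation from a Linux device name can even be scripted. A throwaway sketch (it assumes sdXN or hdXN names with a single-digit partition number, which covers most simple setups):

```shell
# Convert e.g. /dev/sdb1 to Grub's (hd1,0): the drive letter becomes a
# number counting from zero, and the partition number is decremented.
dev=/dev/sdb1
letter=$(echo "$dev" | sed 's|.*/[sh]d\(.\)[0-9]*$|\1|')
drive=$(( $(printf '%d' "'$letter") - 97 ))   # 'a' is ASCII 97
part=$(( ${dev##*[a-z]} - 1 ))
echo "(hd$drive,$part)"
```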
Just to confuse you, Grub uses a different scheme and labels drives (hdx,y) where x is the number of the drive and y the number of the partition on that drive, both counting from zero. So the first partition on the second drive is /dev/sdb1 in Linux terms and (hd1,0) to Grub. Now you understand that, boot from the distro that owns your main bootloader, SUSE in this case, and edit it to add an entry for the new distro. You can edit the bootloader in Yast or by editing the file /boot/grub/menu.lst directly (you have to do this as root) in a terminal with --- su nano /boot/grub/menu.lst ,,, Scroll down the text and you will see the existing menu entries, then add one for your new distro with --- title My New Distro root (hd1,0) chainloader +1 ,,, The first line is the text for the menu, the second tells it the location of the partition containing the new distro's bootloader and the final line passes control to that bootloader. Select this and you will see the new distro's boot menu. If Grub reports error 21 or 22 you have incorrectly specified the partition in the root line. You can press E while your new menu entry is highlighted, select the root line and press E again to edit it. Change the root line, press Enter to accept the change and B to try booting it again. You cannot do any damage when experimenting like this. Once you find the correct value, edit the menu.lst file as above to make the change permanent. Back to the list ****** Best way to share files - Samba or SSH? Q:: I am running Samba on my Linux desktop so that my Windows laptop can access files on the desktop. If I need to access the files on my Linux desktop from another Linux laptop, do I need to run Samba? I tried TightVNC the other day and was able to see the other person's desktop. Can I use it to transfer files like in Samba? I then came across something called KDE Remote Desktop Connection. Is this something like VNC? Which is the preferred method, VNC or Remote Desktop?
SSH is something else that I've read about but not tried out yet. Wikipedia states "Secure Shell or SSH is a network protocol that allows data to be exchanged using a secure channel between two computers." Does this mean that it is something like Samba that allows a connection between my laptop and desktop or only between Linux computers? Or is it a totally different beast? A:: Samba is a server to allow other computers to access files using the Windows SMB and CIFS protocols. Although it was originally intended to make files on a non-Windows computer available on a Windows network, it has grown beyond that. As it is a server, you do not need it running on a computer that is used to access files from a Windows system - you only need the client side of the software, with no server running. The client programs are usually installed by default and are often included in a separate package to avoid the need to install the whole of Samba just to access files from a Windows network. You can also use Samba to share files between Linux computers. There is a more native method called NFS (Network File System) but if your network contains a mixture of Windows and other operating systems, it is often simpler to stick with Samba for everything. TightVNC, a variant of the original VNC suite, is completely different, allowing remote access to the graphical desktop of another computer. It does not include file transfer facilities because you are doing everything on the remote computer, but using your local keyboard, mouse and monitor. KDE's Remote Desktop Connection is a front-end to both VNC and the Windows Remote Desktop Protocol. It can connect to computers using either method, determining the best protocol to use on each connection. SSH is also different, providing a way of logging in to a command shell on a remote computer using an encrypted connection, making it secure for administrative tasks over an insecure connection, like the internet. 
SSH also provides file transfer facilities, through the use of the command-line scp and sftp programs. The latter can also be used with graphical file managers. Type sftp://user@domain/path/to/directory into Konqueror's location bar to display the contents of the directory from the remote machine (provided you have SSH login access of course). There is also an SSH program for Windows called PuTTY (www.chiark.greenend.org.uk/~sgtatham/putty). Back to the list ****** Install programs not in Mandriva repositories Q:: I finally took the plunge and installed Mandriva 2008 Powerpack on my Toshiba Laptop (Satellite M50 PSM53A). Everything works great that I have noticed so far, except that I think I may have a software modem; I'll have to work around this, I guess. I thought I'd install GCstar and Genius. After several frustrating hours in the Software Management utility, the instruction manuals and the Mandriva website, I came to the conclusion that the Mandriva people only want you to use the programs that they make available. The same seems to be true for the other distros. Please tell me I'm wrong! If not, why is this so? How can I tell whether certain software is suitable for my particular distro? It also seems to matter whether I have the Gnome or KDE desktop - how do I tell which desktop the software is for? If I were to install the Gnome desktop on Mandriva, would I be able to swap between Gnome and KDE like you can with Fedora? A:: The various distros work hard to make installing and using software as straightforward as possible. To this end, they provide huge repositories that contain almost everything you could need. These packages are tested to make sure they are compatible with that distro and each other. As a result, installing from the distro's package manager is usually very simple, with all the details of dependency resolution, package downloading and software configuration hidden from sight.
So it is true that the distro makers would prefer you to stick with their repositories, but this does not make installing software from elsewhere impossible. Some package managers are able to install from individual package files, although this often requires a trip to the command line. With Mandriva, you use the urpmi command from a root terminal --- urpmi some-package.rpm ,,, This only works with packages in the correct format for the distro: RPMs for Mandriva, Fedora and SUSE; Debs for Debian and the Ubuntus. It does have the advantage that the installed packages are included in the distro's database, so it can track what you have installed. However, the two you mention are only available on the DVD as source code and need to be compiled before you can install them. In the case of GCstar, there is a file called INSTALL.txt on the DVD, which explains how to install it. For Genius, and the majority of programs, there is an INSTALL file in the tarball that explains how to install it. The general procedure is --- tar xf /media/dvd/Hot_Picks/Genius/genius-1.0.2.tar.gz cd genius-1.0.2 less INSTALL ./configure make su -c "make install" ,,, These commands unpack the archive, switch to the directory holding the contents of the archive, display the INSTALL file, configure the build process for your system, compile the programs and install them respectively. Installing the compiled software requires administrator privileges, hence the use of su. Ubuntu users should replace the last line with --- sudo make install ,,, You may need to install GCC, the compiler software, before you can do this. The ./configure command will check if this, and any other required software, is available and warn you if not. Source code distribution is the most distribution-independent method, so it should work on any distro and with any hardware (but it does require a bit of extra effort).
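You can check whether any downloaded tarball follows this INSTALL/configure convention before unpacking it, by listing its contents with tar tzf. A self-contained demonstration (toyapp-0.1 is a made-up archive built on the spot, just so the listing has something to show):

```shell
# Build a toy source archive, then list it and look for an INSTALL
# file - the same 'tar tzf | grep' check works on any real tarball.
mkdir -p toyapp-0.1
echo 'run ./configure && make' > toyapp-0.1/INSTALL
tar czf toyapp-0.1.tar.gz toyapp-0.1
tar tzf toyapp-0.1.tar.gz | grep INSTALL
```

If the grep prints a path such as toyapp-0.1/INSTALL, the archive follows the usual convention and the general procedure above should apply.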
As far as running KDE and Gnome software is concerned, as long as you have the correct libraries installed, and most distros cover this, you can run KDE software on a Gnome desktop and vice versa. You can install Gnome on Mandriva and choose between it and KDE when you log in, but you don't need to switch desktops to run software for the other one. The only real problem with running KDE applications on Gnome is that they look out of place. The same is true with Gnome programs on KDE, but there is a KDE module to apply the current KDE theme to Gnome/GTK programs too. The GTK-Qt Theme Engine is available from http://gtk-qt.ecs.soton.ac.uk. Back to the list ****** Foresight Linux: 'Bad IPL' error message Q:: I have a cheapo Time AMD 64 laptop with Windows XP and usually Mandriva dual-booted. I occasionally try other distros on it, sometimes alongside Mandriva and sometimes replacing it for a while. This week I tried to load Foresight Linux, using the existing Linux partitions and choosing the default option for the bootloader. On restarting the machine, however, I just get the error message 'Bad IPL', then 'Press any key to reboot' - and then nothing happens. Google tells me that IPL means 'Initial Program Load', which doesn't mean much to me. I did try reloading Mandriva but still no result - same error. I'd like to try to salvage my Docs/ Pics/ and music, but more important for me is my email and news, which is in a RISC OS partition on Win XP. I'm tempted to try Insert but I'm not sure this would do what I need. A:: The Initial Program Loader is the first stage of the bootloader. This is the code that loads the rest of the bootloader, which then presents you with the boot menu. The IPL has to fit in the Master Boot Record (MBR) of the hard disk, where it has all of 446 bytes available - that's less than half the space it took you to describe the problem, so you must forgive it for not including more descriptive error messages.
It would appear that your bootloader code is corrupt and the error you are seeing in fact comes from the BIOS. You can reinstall the bootloader by booting the disc in rescue mode. Type linux rescue at the boot prompt. This will mount your Foresight installation at /mnt/sysimage, from where you can reinstall the bootloader to the Master Boot Record with the following command --- cat /mnt/sysimage/boot/extlinux/mbr.bin >/dev/sda ,,, This assumes you have it installed on the first (or only) hard disk. Now exit the rescue shell by typing exit or, if you are lazy like me, pressing Ctrl+D, and it should reboot correctly. Your other option is to switch from using Extlinux to Grub. Foresight installs both but only configures the bootloader you choose, so Grub has only a template menu file. If you are familiar with the syntax of the Grub menu, or you want to read up on it at www.gnu.org/software/grub/manual, you can edit the file and install it by booting the rescue disc and entering the installation with chroot, like this --- chroot /mnt/sysimage nano /boot/grub/grub.conf # edit the file and press Ctrl-X to save and exit grub-install /dev/sda ,,, Press Ctrl+D twice, once to exit the chroot and again to exit the shell, and the computer will reboot, this time to a Grub menu. If you are not comfortable editing the Grub menu, you could simply reinstall Foresight but this time select Grub instead of the default Extlinux bootloader. I don't normally advocate reinstalling a system as a means of addressing particular problems, but in this case you haven't used the system (because you are unable to start it) so you have nothing to lose. Most Live CD/DVD distributions will let you mount your Windows partition in order to back up your data. Back to the list ****** Looking for a suitable wireless access point for Linux Q:: Following some recent advice I bit the bullet and bought a Dell laptop, the Inspiron 1525. It's happily running Ubuntu, and it's the first wireless device I have.
So I need an access point. I already have a network in place, with a LAMP setup that's also a firewall and router to the internet. But I don't know where to start with the wireless. I googled and read product reviews, and I see a lot of devices that are an access point, switch and router or just an access point, but I can't figure out where that sits in my network. Is it possible to use an access point/switch device, like (for example) the Linksys WRT54GL, to extend my wired network to wireless via the switch function? I know it's a router, but I want it to be a switch, so that the wireless is in the same 192.168.2.0/24 segment. If I install it as a router, I'd have to go through double NAT to the internet, which makes it impossible to connect to a remote desktop at work from home. Would it work through a concrete floor so that the wireless extends to downstairs? A:: You need a plain wireless access point. This connects to your existing wired network at your router or switch, and extends it into the wireless realm. While it would be possible to use an all-in-one router and access point (and even modem) for this task, you would need to disable the bits you don't need, making it a more complex setup than using a plain access point. The access point handles the wireless connection and encryption, while everything else uses your existing wired setup. One thing to watch out for is that wireless access points generally have a built-in DHCP server. If you have an existing DHCP server in your router, disable the access point's DHCP, as having two independent DHCP servers on a network is asking for conflicts. All the access points I've used have a web interface (which you have to access over the wired network) where you can turn off DHCP. Range is a difficult topic: anything thicker than air between the access point and laptop will reduce your range to an extent.
Also, most omnidirectional antennae are only omnidirectional in the horizontal plane with limited vertical coverage; a higher gain antenna increases this effect. A patch antenna is a directional device that allows you to adjust horizontal and vertical coverage, although some experimentation is required to find the best position. Since you may need to replace your antenna to improve coverage, make sure the access point you choose has a removable antenna. Most do, but there are a few with fixed antennae. Back to the list ****** LILO printing number nine (9) all over the screen Q:: I'm sure you remember one of the usual bugs in the boot stage that fails to load the kernel; this usually happens when you upgrade the kernel but neglect to rerun Lilo. Grub is another story of course. Well, when the kernel is not found, we see 9s all over the screen. But why 9s? Why not 6s or some other number? Since the loader is pointing to the wrong sector, shouldn't the screen present random data even as raw numeric values? A:: The nines are not random: they are a Lilo error code. As Lilo loads itself it writes the word LILO to the screen, one letter after the successful completion of each stage. If it fails at any of these stages, it outputs a two-digit hexadecimal code to indicate the error. Error code 99 means "invalid second stage index sector"; in other words, it cannot find the location on your filesystem that contains the rest of the Lilo code. The error message is repeated, which is why your display fills with nines. Lilo does not use filesystem code to locate files; instead, the physical block address of the code it needs is written with the bootloader code. That's why you need to rerun Lilo after making any changes, be it editing the menu or installing a new kernel. Otherwise, the bootloader looks for its code in the wrong place, sees that this is not the code it needs and outputs the error code 99.
The screen does not show random data because, while Lilo is simple, it is not stupid, and it realises that this is the wrong location. The options, as ever, are to rerun Lilo every time you upgrade your kernel or change the menu, or to switch to Grub, which, while it still has terse error codes, does at least make them a little more comprehensible. Back to the list ****** Choosing a serial modem for Linux Q:: I've been having a look at Linux as a replacement for Windows XP and have been very impressed with the availability of the range of software; however, I can't get access to the internet. I have an external 56k modem and the few Linux distros I've tried won't recognise the modem. As a newbie to Linux I'm really in the dark on this issue and find the available support info confusing or lacking. Can you help point me in the right direction? A:: Is this a serial or USB modem? If it's a plain, old-fashioned serial modem, then things couldn't be simpler. The first serial port is /dev/ttyS0 (the equivalent of COM1 with Windows), and you just need to set up your dialler program to use this. Which dialler program you use depends on which distro and, more importantly, which desktop you use. With KDE you should use KPPP, while in Gnome you should go to System > Administration > Network and choose the Point-to-Point or Modem option. If the modem is a USB device, you may or may not be in luck. USB modems are a little like internal modems, in that some just work and are supported by the kernel; others require specific drivers that may only be available for Windows; while many fall in between and can be made to work with a little effort. Use the lsusb -v command to find out the details of your modem, then use Google, or any other search engine, to find information on this device and Linux. This should tell you whether it should 'just work', require a driver, or is a lost cause.
A quick test is to run --- tail -f /var/log/messages ,,, in a root terminal (prefix the command with sudo if using an Ubuntu variant) before you connect the modem, then watch the messages as you plug it in. If the make and model are recognised, things are looking good. If the device /dev/ttyUSB0 appears, you are really in luck and can use this as the modem device in the dialler program, just as for a serial modem. Otherwise, some web searching will be required. Back to the list ****** Using SD cards on a netbook as a memory expansion Q:: I am quite keen to buy a laptop like the Asus Eee PC, but I wonder about the use of SD cards for memory expansion. On the Panasonic home page there is a formatting tool available for SD cards. Apparently Windows does not do it properly. Is that an issue with Linux systems as well? And is it a general problem or just related to the use of SD cards in cameras? A:: To clarify things first of all, the Eee can use an SD card as storage space, but not as memory (unless you use it as swap of course, but that would be very slow). Memory is expanded by replacing the SO-DIMM memory stick inside the computer. As for using SD cards for storage, they generally come preformatted with a standard FAT filesystem, but my preference is to format them in whichever device I intend to write to them in, such as my camera or PDA. I have never come across a problem reading such a card in Linux. If you want to use an SD card as an extra drive in an Eee PC, you would be best off reformatting it using a Linux filesystem. Ext2 is probably best for this, as the lack of a journal reduces the number of writes to the card as well as increasing the space available for storage. The Eee can use SDHC (Secure Digital High Capacity) cards; I've used an 8GB card in mine.
Beware though that these are different from normal SD cards, even though they look the same, so while you can use one in your Eee, you would need an SDHC-compatible card reader (and most are not) to transfer data from your desktop computer using one. Back to the list ****** How to survive a Slashdotting Q:: I wrote an article a few months back that got mentioned on Slashdot, and, because I hosted the site myself, my humble server got pounded by thousands of visitors in just a few hours. Back then I used Apache 1.3 with MySQL 3.23. I've now got another article ready to upload that I think will also be a big hit, but this time I want to be prepared - what can I do to ensure maximum throughput for my server? Since the first article, I have upgraded to Apache 2 and MySQL 4 (both compiled by hand), plus PHP 4.3, and am now using a dual 2.0GHz box with 1GB of RAM. The rest of the system is a pretty basic CentOS (the freebie Red Hat Enterprise) install. If possible I would rather not upgrade the hardware further! A:: Slashdot can be difficult to prepare for without some sort of testing. Much of the advice I'll give depends on the type of content you're hosting. Obviously the more static you make your page the more hits your server will be able to handle. You may even want to create a separate low-bandwidth version for the Slashdot crowd. Mounting your filesystem (especially if it's ext3) with the noatime option will minimise disk overhead, as your system will not be updating the 'last accessed' time for the page every time it's opened: --- /dev/sda5 /var/www/html ext3 defaults,noatime 1 1 ,,, In Apache itself it's well worth turning keepalives off. That will reduce the number of simultaneously open connections but will introduce some latency into the page loading, especially if there are many images on your page. This is a tradeoff you will have to test, but usually turning keepalives off is beneficial. You mentioned that you have a custom-compiled version of Apache. 
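The keepalive change mentioned above is a one-line edit; a fragment assuming the stock Apache 2.x configuration file:

```apache
# httpd.conf (Apache 2.x) - one connection per request under heavy load
KeepAlive Off
# If you keep them on, at least bound how long an idle connection is held:
# KeepAliveTimeout 2
```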
Be sure to turn your MaxClients variable up quite a bit. By default it's hard-coded to 256 in the Apache source and you'll probably need something substantially higher. Still in Apache, you could try a module like mod_gzip. This is only really useful if the bottleneck is your bandwidth (as opposed to your CPU or system). It will compress outgoing data at a cost in CPU utilisation. In the kernel, you could modify the net.ipv4.tcp_keepalive_time and net.ipv4.tcp_fin_timeout settings to something more suitable. I've had good results with setting fin_timeout to 30 seconds and keepalive to 20 minutes. You can modify these using the sysctl command. You could consider using bdflush, especially if you're using a 2.4 kernel. There are many options here to optimise your memory (and page file) usage. Given the specs of the server you've mentioned I doubt you're using IDE hard drives, but if you are, make sure that DMA is turned on. Check your IDE drive's performance by running: --- hdparm -Tt /dev/hda ,,, An acceptable speed is about 400MB/sec for cached reads and 20-30MB/sec for disk reads. I've seen a similar configuration on similar hardware managing 200,000 unique connections per hour, serving around 300KB each time, so with a little planning you should be successful. Good luck! Back to the list ****** Linux freezes on Sony Vaio VGN-N385N laptop Q:: I have a Sony Vaio VGN-N385N laptop and can run any KDE distro up to 3.5.8 or any Gnome distro with no problems at all. The problem is that any KDE 3.5.9/4.0.3 or any new Gnome distro just won't run. I install them on my hard drive with no problems but the minute they start the boot process they freeze. The Grub message comes up and counts down, it goes into the page with the distro's graphics and the coloured bar flashing horizontally across the screen - and then nothing! This happens with Ubuntu, Kubuntu, Mandriva Spring and now Foresight. Is there something different with the bootloader that I need to change? 
A:: This is failing far too early in the boot process for KDE or Gnome to be involved. Similarly, the bootloader is not a factor: once it has passed control to the kernel, which happens before the boot splash screen comes up, it takes no further part in the proceedings. I suspect that your problem is caused by a later version of some hardware-related system software, either the kernel or something like HAL or udev, not getting on with your hardware. However, this is only a suspicion, and you need to find out what is breaking during boot. The pretty splash screens that most distros use do a good job of hiding all the scary text output from a standard boot process, but that text almost certainly shows the source of your problem. Some distros have some sort of 'safe mode' boot option that disables the splash screen, while others allow you to remove the splash screen during boot with a keypress, usually Esc or F2. If you can do this, you can usually see the point of failure, which often means you are 90% of the way towards fixing it. If there is no option to disable the splash screen, you can do it from the Grub menu. Press E (for edit) while the default menu option is highlighted, move the highlight over the line beginning with 'kernel' and press E again. The kernel line contains a number of options; the only ones you need to touch relate to the splash screen. For example, on Ubuntu it looks like this: --- kernel /boot/vmlinuz-2.6.22-14-generic root=UUID=xxxx ro quiet splash ,,, Remove the quiet and splash options, press Enter to stop editing and then press B to boot with the changed options. Some distros also have a separate quiet option on a separate line; you can disable this by highlighting it and pressing D. You should now see a lot of text flash past, most of which is unimportant. What matters is the last few lines before things stop. Search Google for the last error message to find a solution, or post to the LXF forums at www.linuxformat.com/forums/. 
Before you do that, there are a couple of boot options worth trying, as they solve more hardware-related boot problems than all the others put together. After removing quiet and splash from the kernel line, add noapic acpi=off. Note the different spellings: despite their similarity, apic and acpi are two very different options. If you can boot with these, try with each one individually. When you find the best set of options, you can edit the menu file at /boot/grub/menu.lst to make the change permanent (you will need to be root to do this). Back to the list ****** Best Linux distro support for Asus P5KPL motherboard Q:: I am running OpenSUSE 10.2, but the OS doesn't go too well with the Asus P5KPL motherboard of my new machine. Which Linux distros get on best with it, and will I be able to use Planner on it, too? A:: Mainstream manufacturers like Asus are generally well supported, because of the large number of people using them. However, it can take time for support for new hardware to make it into the kernel, and then for a distro to release a version with that kernel. Most distros release twice a year, and with the testing cycle that precedes release it can be nine months before new hardware is supported in your distro's latest release. The Asus P5KPL contains nothing particularly esoteric, so it should be supported by most current distros. The key word here is current: OpenSUSE 10.2 was released in December 2006, so support for anything less than 18 months old is likely to be problematic. Installing any more recent distro is likely to solve all your hardware compatibility problems. Some distros have a hardware compatibility section on their website, where you can check before downloading and installing. For example, OpenSUSE has one at http://en.opensuse.org/HCL and Mandriva's is at www.mandriva.com/hardware. Many distros provide Live CD versions, which are a good way of making sure everything works before installation. 
Planner should work with any distro that includes the Gnome libraries. It is included in the OpenSUSE repositories, so you can install it from Yast in the usual way. Back to the list ****** Video mode recovery Q:: I installed/upgraded from Ubuntu 7.10 to 8.04 and made a mistake with my screen display. It's now widescreen and unreadable. I can go to recovery mode but that doesn't help, so I need a line in the root window to restore to, say, VESA 800x600 mode. A:: There is indeed a one-line solution: go into a terminal and run --- sudo dpkg-reconfigure xserver-xorg ,,, This will run the configuration utility for the graphics system, where you can choose the correct settings this time. Not only does it help you when things break, it also means you can experiment with your settings secure in the knowledge that if you mess things up, rescue is at hand - an ideal situation for those of us who practise provocative maintenance. Back to the list ****** Retrieving Grub Q:: At present I dual boot between XP Pro on drive C and PCLinuxOS on drive D. It's time for a reinstall of XP, but I have PCLinuxOS set up pretty well the way I want it and don't want to start all over again with it. I also don't want to lose the dual booting. Is it possible to perform a complete reinstall of XP without losing the dual booting? Or is there a way that I can restore the dual boot after XP installation? A:: Installing Windows will get rid of whichever bootloader you use for Linux (Grub in the case of PCLinuxOS). This is why, when setting up a dual boot machine, it is always best to install Windows first. However, it's a simple task to restore your boot setup, because Windows will only overwrite the bootloader code in the disk's Master Boot Record (MBR); it will not touch your menu configuration. To restore your dual boot, you need to run Grub, which you can do from almost any Live CD distro. 
Boot from the CD (or DVD), open a terminal and log in as root by running su (or sudo bash if you use an Ubuntu CD). You can run the automatic install script as --- grub-install /dev/sda ,,, although this does not always work well with multiple drive systems. The alternative is to install it manually, which requires only two commands anyway. You need to identify your drives and partitions, because Grub uses a different labelling scheme: the first hard drive is (hd0), and the first partition on that drive is (hd0,0). Note that Grub counts from zero. You must identify two locations: the partition holding the /boot/grub directory, which will be (hd1,0) if it is the first partition of the second drive, and where you want the bootloader code installed, which is usually (hd0) if you want to put it in the MBR of the first drive. Once you know where everything goes, run grub to enter the Grub shell, then run --- root (hd1,0) setup (hd0) quit ,,, The first command identifies the location of the Grub files, the partition containing the /boot/grub directory; the second command writes the initial bootloader code to the MBR; the meaning of the third command will be left as a mystery. Provided this all ran with no errors, a reboot should show your original boot menu in all its glory. Alternatively, you could save yourself a lot of grief and remove XP entirely. Back to the list ****** Control a remote machine Q:: I have a PC in my father's home that I do not want to leave on 24 hours a day. So I would like to be able to turn it on and shut it down from my house. That computer is running Windows XP, while I am using Slackware 12.0 here. Is this possible? A:: All you need to turn the computer off is to run some sort of remote desktop software; then you can log in and shut down from the Start menu just as you would if you were sitting in front of the computer. If the computer is running XP Home, VNC (Virtual Network Computing) is a good choice. 
TightVNC (www.tightvnc.com) is an implementation of this aimed at slower internet links. Install the server component on the Windows computer and set it to run automatically. You will also need to forward ports 5800 and 5900 in your router and firewall. If you are using KDE, you can use KRDC to connect to VNC as well as RDP desktops; otherwise install TightVNC on your Slackware box and use that to connect to the Windows desktop. Turning the computer on uses a completely different technology called Wake-on-LAN. When the computer is turned off but still connected to power, it listens on its network interface for a 'magic packet' - a specific sequence of bytes. When it receives this, it turns itself on. This requires support for Wake-on-LAN in the motherboard's BIOS. Most recent BIOSes support it, but it is often disabled by default, so you'll need to find the option in your BIOS setup menus to turn it on. If you have an onboard NIC, that's all you have to do, but if you are using a PCI network card you will need to use the supplied cable to connect its Wake-on-LAN header to the one on the motherboard. Wake-on-LAN uses port 9, so forward that from your router to the broadcast address for your network. This will only work if you have a separate modem/router that is always powered up. It must also connect to the computer via Ethernet, as Wake-on-LAN only works with Ethernet adaptors. Finally, you need the hardware address of the Ethernet adaptor on the Windows computer, which you can get by running ipconfig /all in a DOS box. With this information, you can run the wakeonlan script from http://gsd.di.uminho.pt/jpo/software/wakeonlan with --- wakeonlan -i [ipaddress of server] [MAC address] ,,, For example --- wakeonlan -i 123.124.125.126 00:0C:29:55:B0:C1 ,,, The IP address you use with wakeonlan or TightVNC must be your external-facing address, not the internal LAN address of the individual computer. 
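For the curious, the magic packet itself is nothing exotic: six 0xFF bytes followed by the target's MAC address repeated 16 times, 102 bytes in all. A sketch in shell that builds the packet as a hex string; the MAC is the example one used above, and the broadcast address in the comment is an assumption about your LAN:

```shell
# Build a Wake-on-LAN magic packet by hand:
# six 0xFF bytes, then the target MAC repeated 16 times.
mac="00:0C:29:55:B0:C1"                 # example MAC from the answer above
machex=$(echo "$mac" | tr -d ':')       # strip colons: 12 hex digits
packet="ffffffffffff"                   # six 0xFF bytes, written as hex
for i in $(seq 16); do packet="$packet$machex"; done
# To actually send it, convert the hex to raw bytes and fire it at UDP
# port 9 on your LAN's broadcast address (192.168.1.255 is an example):
#   echo -n "$packet" | xxd -r -p | nc -u -w1 192.168.1.255 9
echo "${#packet} hex digits"            # 204 hex digits = 102 bytes
```

This is exactly what the wakeonlan script does for you, so in practice the script is the easier route; the sketch just shows there is no magic in the magic packet.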
Since most ISPs use dynamic addressing, you need to use one of the dynamic DNS services (search the web for them) to map a fixed domain name to the dynamic address, unless you are lucky enough to have an ISP that offers static addresses. Wakeonlan should work with your dynamic DNS hostname instead of the IP address; if it does not, ping or dig the hostname to get an IP address. Back to the list ****** Add repositories to Eee PCs: 'couldn't stat source package list' Q:: I have only been using Linux for a week or two but need to tweak a number of Eee PCs. I followed a tutorial to add a repository but it didn't seem to work. Did I do something wrong? If I browse to the websites, the directory structure does not seem to fit the lines in sources.list. I added this line to sources.list: --- deb http://xnv4.xandros.com/xs2.0/upkg-srv2 etch main contrib non-free ,,, then I got several messages like the following after doing sudo apt-get update: --- Failed to fetch http://update.eeepc.asus.com/p701/dists/p701/Release.gpg Could not resolve 'update.eeepc.asus.com' Failed to fetch http://xnv4.xandros.com/xs2.0/upkg-srv2/dists/etch/Release.gpg Could not resolve 'xnv4.xandros.com' Reading package lists... Done W:Couldn't stat source package list http://update.eeepc.asus.com p701/main Packages (/var/lib/apt/lists/update.eeepc.asus.com_p701_en_dists_p701_main_binary-i386_Packages) - stat (2 No such file or directory) W:Couldn't stat source package list http://xnv4.xandros.com etch/main Packages (/var/lib/apt/lists/xnv4.xandros.com_xs2.0_upkg-srv2_dists_etch_main_binary-i386_Packages) - stat (2 No such file or directory) W:You may want to run apt-get update to correct these problems E: Some index files failed to download, they may have been ignored, or old ones used instead. ,,, A:: The errors you saw relate to all the repositories in sources.list, not just the Xandros one you added. 
It appears that your Eee is unable to connect to any of the repositories, which means that either you have no internet connection or your firewall is blocking access to asus.com and xandros.com. Are you able to browse those sites from the Eee? If so, this could be a proxy setting somewhere. If your network requires you to set a proxy server in the web browser, you need to set it up in /etc/apt/apt.conf by adding this line (obviously using the correct address and port for your proxy server) --- Acquire::http::Proxy "http://proxy.server.address:port"; ,,, You can also use the following syntax; both do the same job, so use whichever you prefer. --- Acquire { HTTP { Proxy "http://proxy.server.address:port"; }; }; ,,, It may also be a good idea to set the http_proxy environment variable, which you will need to do if you ever want to download files with Curl or Wget. While you may not be aware that you are using them, a number of programs make use of these tools for file downloading. You can set the variable by adding this line to /etc/profile --- export http_proxy="http://proxy.server.address:port" ,,, Back to the list ****** How to use Magic SysRq keys to reboot frozen machines Q:: I've been trying to understand the Magic SysRq keys. I foolishly assumed it would be easy and that I would be able to do it without any problems. All it involved was pressing four keys at once. Still, I failed miserably and didn't get it to work. I'm using a Dell XPS M1719 laptop and whenever I press Fn+Alt+SysRq+B the screenshot dialog box pops up (SysRq shares a button with Print Screen). I looked at the keyboard preferences but couldn't find a keyboard model for the XPS. Can you let me know how to fix this? It's making me a bit depressed not being able to follow the simplest instructions in the world. A:: SysRq and Print Screen are often the same key, even on desktop keyboards, so you may not need the Fn key. 
On most laptop keyboards, features that need the Fn key are highlighted in a different colour. If this is not the case on your keyboard, try the standard three keys of Alt, SysRq and the command you want to use. Speaking of commands, B (reBoot) is probably not the best key to use for testing. S (Sync) is harmless, and you can see the results if you switch to a virtual console with Ctrl+Alt+F1. Press Alt+SysRq+S and you should see --- SysRq : Emergency Sync ,,, printed to the console, and written to the system log. If nothing happens, the most likely explanation is that your kernel has been built without the necessary options, but first check whether the file /proc/sysrq-trigger exists. If it does, you do have Magic Key support in your kernel. That file is an alternative interface to the same functions: you can send the commands by writing to this file, which is useful for remote sessions --- echo s >/proc/sysrq-trigger ,,, If the file does not exist, your kernel certainly does not have the option CONFIG_MAGIC_SYSRQ set. This is more likely than your keyboard being unable to send the correct command; after all, the same key provides a common function in Windows. In this case, the only reasonable option is to recompile the kernel yourself. This is not a difficult task (see the Quick Reference box on p111 for instructions). The other possibility is that your kernel has Magic Key support but it is disabled on your system. Run, as root, sysctl kernel.sysrq. If this returns a value of zero, edit /etc/sysctl.conf to change this for the next boot, and change it immediately with --- sysctl kernel.sysrq=1 ,,, Back to the list ****** Separate OS, separate drive Q:: I would like to install Ubuntu on a separate SATA drive from the current SATA drive that has Windows XP. After doing some research on this, the installation process seems fairly straightforward. My one big question has been with the installation of Grub. 
Is it possible to install Grub on the Ubuntu drive only and keep the XP drive untouched? A:: Yes it is possible, but not necessarily the best option. If you install Grub into the master boot record of the Windows disk, it will not touch the partition containing your Windows install. The Ubuntu installer will take care of this and create a boot menu with options to use Windows or Ubuntu. The only drawback of this is that it will fail to boot if you remove the Ubuntu disk, which is easily remedied by running fixmbr from the Windows rescue CD. If you want to keep the bootloaders separate, you have a couple of options. You could install to the Ubuntu drive's MBR and use your BIOS's boot menu to choose which drive to boot. Most motherboards now pop up a menu if you hold down a key while they boot; see the initial BIOS screen or the manual to find out which key. This has the advantage that it doesn't touch your Windows disk at all, but you have to be quick to hit the menu key at the right time. The alternative is to modify the Windows bootloader to add an option to pass control to Grub on the other disk. Install Ubuntu by booting from the disc and running the installer, telling it to use the second disk in the partitioning window. When the 'Ready To Install' window is displayed, press the Advanced button and set the device for bootloader installation to /dev/sdb, the second disk. You also need to take this step if you want to use the BIOS boot menu to choose between disks. Now let the install run but do not reboot at the end (if you do, you will not have an option to use Ubuntu and will need to boot from the install disc again for the next steps). 
Open a terminal (Applications > Accessories > Terminal), switch to the root user and mount your Windows filesystem with --- sudo -i mkdir /mnt/windows mount /dev/sda1 /mnt/windows ,,, Then create a file on there containing the bootloader code from your Ubuntu install with --- dd if=/dev/sdb of=/mnt/windows/ubuntu.img bs=512 count=1 ,,, This creates a file called ubuntu.img (the name is unimportant) that contains the first 512 bytes of the second disk, with the Ubuntu bootloader. Now reboot into Windows and edit C:\boot.ini in something like Notepad to add this line to the end --- C:\ubuntu.img="Ubuntu" ,,, You could edit this file in Ubuntu, but Windows uses different line endings from Linux and ntldr can be a bit fussy, so play safe and subject yourself to Notepad. Reboot again and the Windows bootloader should show you a menu with options to boot Windows or Linux. Back to the list ****** eth0 device not appearing on Dell e1505 with Broadcom Ethernet Q:: I am running Ubuntu 8.04 on a Dell e1505 with a Broadcom Ethernet controller. A while ago (I may have been running 7.04 or 7.10 around that time) I installed Windows and updated my BIOS. After that, I noticed the eth0 was gone and I reverted to the old BIOS version, but the problem remained. I formatted the entire hard drive and reinstalled Ubuntu. My eth0 interface did not appear when I ran ifconfig. I ran lspci, and the Ethernet controller was not listed (Network Controller was still listed). /etc/network/interfaces lists the loopback interface, but that's it. I tried manually changing the file to include eth0, and then ran ifup -a, but to no avail. I reset the BIOS configuration to factory settings. Wireless works fine with Ndiswrapper. When I plug a cable into the Ethernet port, the little lights show orange/red, but not green. What is the problem? Is it BIOS related, is it driver related? What do I do? A:: If the interface does not show up in lspci, it is almost certainly not there. 
Even if the device were not recognised, which would be surprising for a Broadcom device, lspci would show the manufacturer and product ID numbers. Do you have any unrecognised devices in the lspci output? If so, search the web for those PCI ID numbers. Ifconfig shows only configured devices; although adding -a shows all interfaces, it is still limited to devices for which a driver is available and loaded. If there is no sign of the device, there are two likely causes: that you have had some sort of hardware failure or that it is disabled in the BIOS. The lack of cable detection also points to a non-working Ethernet port. A hardware failure coinciding with a BIOS update is only likely if you are a follower of the teachings of Murphy, so I would start by looking at the BIOS options. Updating the BIOS often wipes the settings, resetting everything to the default. If the default of this BIOS update is to disable the wired Ethernet and leave only the wireless active, you have found the cause. Check the BIOS options and experiment with anything relating to the network controller. I would also see if there is a further BIOS update available, because the one you installed may well be the cause of the problem. If your Ethernet controller is truly broken, and the laptop is out of warranty, the easiest option would be to use a USB or PC Card Ethernet controller. Several of these devices are supported in recent Linux kernels, so find out what is available and check for support before you buy, or find a helpful dealer who will let you plug the device into your machine before buying it. lsusb should show the device, and ifconfig -a will show the network interface if the driver is loaded, even if the network is not configured or even connected. Back to the list ****** Sharing partitions between different flavours of Ubuntu Q:: I finally built up a computer and installed Ubuntu. 
I divided my 750GB SATA drive into several partitions: /boot, swap, /, /usr, /local, /var, /home as per the recommendations in Practical Guide to Ubuntu Linux. I'm interested in installing an additional real-time kernel-type Linux such as 64 Studio or Ubuntu Studio for some audio and video projects. Can any of the existing partitions be used by the additional install? Do any packages used in both installs have to be installed into each Linux install? I would expect that /home data would at least be shared. How about /usr, /local and /var? Perhaps it is just my ignorance, but it seems that the differences could potentially be confined to /boot and a configuration file. A:: The only two partitions that it is really safe to share are /home and swap. Swap data is temporary anyway and not expected to survive between reboots, so a common swap partition makes a lot of sense. Keeping a separate home partition is a good idea, because your data survives reinstallation. However, because of differences in program versions, and possible conflicts of user IDs, it's not a good idea to share a home directory between two distros, so you're better off using one home partition but a different home directory on that partition for each distro. You can either use a different username with each distro, or use the same name but a different directory. The convention of using /home/username as the home directory is just that: a default setting, not a requirement. If your username is bryan, you could have home directories of /home/bryan-ubuntu, /home/bryan-studio and so on. Each distro installation is a separate entity: you cannot share installed program and library files between two. Some distros modify programs to suit their own needs, and it is very rare for them to update versions at exactly the same time. 
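Whichever naming scheme you choose, each distro's /etc/fstab then carries an identical line mounting the shared partition at /home; a sketch, with /dev/sda7 standing in for your actual home partition:

```
# /etc/fstab - same line in each distro's fstab; only the home
# directories inside the partition differ per distro
# (/home/bryan-ubuntu, /home/bryan-studio and so on)
/dev/sda7  /home  ext3  defaults  0  2
```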
You could share /boot in theory, but it can be a lot of work to set up and maintain, and a separate /boot partition is not really necessary with modern hardware. With the number of partitions per distro that you are using, you are sure to exceed the partition limits of the disk before long. You have a couple of options here. The simplest is to have a single root partition for each distro, plus common swap and home partitions. Each distro is then a self-contained entity within its own partition. A more flexible option, especially if you want to run multiple distros, is to use the Logical Volume Manager (LVM). This would entail having a small /boot partition for each distro plus a large partition given over to LVM. This would then contain logical 'partitions' for the various distros, as well as /home and swap. The advantage of this approach is flexibility: volumes can be created, resized and removed on the fly, which is useful when experimenting. Many distros have an option to use LVM during installation. There is another option when experimenting with different distros: virtualisation. You can install VirtualBox on Ubuntu and create virtual machines within that for any distros you would like to experiment with. Only when one convinces you that you want to use it long term do you need to worry about partitioning the disk to install it. Back to the list ****** Mozilla is slow to resolve domains Q:: Now that my broadband connection is working under Linux (SUSE 9.0 and a Netgear DG834 modem/router), I have noticed that when loading web pages (using Mozilla 1.7) a message usually appears at the bottom of the Mozilla screen saying 'Resolving Host XXXXXXXX', where XXXXXXXX is the web address being loaded. This causes a 5 to 10 second delay in the page appearing. Using Mozilla or IE6 under XP this does not occur, and pages load almost instantly. Do you know the reason for this, and can I speed up the display of web pages in any way? 
A:: Mozilla will use IPv6 for DNS lookups, which you probably don't need, and this will slow down web access while it times out trying to access hosts. Disable it in /etc/modules.conf by modifying the 'net-pf-10' line, which loads the ipv6 support module, and replacing 'ipv6' with 'off'. This stops IPv6 support loading and will allow Mozilla to use IPv4 alone to access the DNS services, for faster lookups. Back to the list ****** Nautilus can't access new partition Q:: My mum has just got a new digital camera and I have made a partition on /dev/hda2 for all the photos from it. I made it using GParted and everything worked, but when I tried to open the partition, I received an error from Nautilus: --- libhal-storage.c 1401 : info: called libhal_free_dbus_error but dbuserror was not set. process 5222: applications must not close shared connections - see dbus_connection_close() docs. This is a bug in the application. error: device /dev/hda2 is not removable error: could not execute pmount ,,, This is the second partition on the disk, and I have tried doing it multiple times both on Ubuntu 8.04 and 64 Studio but I just cannot access the partition. On top of that, I can't access the Ubuntu partition on hda1 either. It just gives the same error. PS My girlfriend thinks Linux is boring. Please give me good reason to tell her otherwise! A:: There are two messages here; the first two lines are usually caused by your not being in the correct group for automounting, plugdev. You can check which groups you are in by running groups from a terminal. If plugdev is not listed, add yourself to the group with --- sudo gpasswd -a username plugdev ,,, You need to run this for your own username and your mum's. If you are already in plugdev, it is likely that this is only a warning and it is the second error that's causing the mount to fail. 
Most systems now use Pmount to mount removable drives, because it allows devices to be mounted by a user without giving them root access. Because of the security issues involved with this, it is restricted to removable devices. If a device does not report itself as removable, such as a hard disk, Pmount will refuse to mount it. You can override this behaviour by adding the device name to /etc/pmount.allow. Create the file if it does not exist and add /dev/hda2 to it. You can add as many devices as you need, one per line. However, if this is a fixed disk, why are you using automounting in the first place? It may be easier to simply mount the partition somewhere in your directory tree. Create a directory to use as a mount point, say /mnt/photos, and add a line to /etc/fstab like --- /dev/hda2 /mnt/photos ext3 defaults 0 0 ,,, then make sure it is owned, and therefore both readable and writable, by your user with --- chown username: /mnt/photos ,,, PS You're 14; I'm sure you can find other ways of impressing your girlfriend. Sheesh! Kids these days have no imagination! Back to the list ****** Does lsmod list modules built into the kernel? Q:: When I'm compiling my own kernel I can build modules into the kernel or compile them separately to load when required. If I run lsmod in a terminal, will it list the modules that are built into the kernel, or will it show just the modules that have been loaded separately? A:: lsmod lists only the modules that are currently loaded into your kernel. To see a list of all modules available for the current kernel, run modprobe -l. Each of these commands works with loadable modules only, so if you want to know what is compiled into your kernel, you need to inspect the kernel config file, either in /boot/config-<version> or /usr/src/linux/.config or, if your kernel has this option enabled, in /proc/config.gz for the running kernel. Look for lines ending in =y, although this also includes options that enable features, not just modules. 
Use one of these two lines: --- grep '=y$' /usr/src/linux/.config zgrep '=y$' /proc/config.gz ,,, These automate the process of searching for configuration options in the kernel. Back to the list ****** Disabling IPv6 in Ubuntu Q:: I want to remove IPv6 support. I set about doing /etc/modprobe.d/aliases in the terminal as root, but I got a terse message telling me: 'permission denied'. Can you tell me why that is? I have Ubuntu Studio 7.10 and I need the other things working as well. This has been a nightmare for me trying to disable IPv6. My modem is a D-Link ADSL DSL-G624T, my ISP is Virgin. I went to the Ubuntu website and followed their instructions to disable IPv6. As root I put in the terminal, --- gksudo gedit/etc/modprobe.d/aliases ,,, Nothing happened (I believe a list is supposed to come up on a web browser or something). I am a Linux newbie and I really would like an answer to this terrible nightmare. Others out there have the same problem as me. Can't the Ubuntu people make an update for this? I think IPv6 is not even being used yet, and will not be used for some time to come. A:: /etc/modprobe.d/aliases is a data file, not a program, so you cannot run it: you need to edit it. This is what the command you found on the Ubuntu website does, but you mistyped it. The correct command is --- gksudo gedit /etc/modprobe.d/aliases ,,, note the space after gedit. This does not open a browser; it runs the Gedit text editor, loading the aliases file ready for you to modify. The gksudo program is used to run the program as root, because only the root user can modify system files like this. In fact, plain old sudo will do the job just as well. Once it's loaded, add the lines --- alias net-pf-10 off alias ipv6 off ,,, reboot and you should be able to use the net. It is true that IPv6 is not presently in common use, but ISPs will have to begin the switch over fairly soon.
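After rebooting, you can confirm that the change took effect by checking the loaded module list. A sketch of that check, run here against a canned sample of lsmod output so it is self-contained; on your own machine, pipe the real lsmod through the same filter:

```shell
# Sketch: look for ipv6 in a module listing. The here-variable is a
# made-up sample of `lsmod` output; substitute the real command on
# your system.
sample="Module                  Size  Used by
snd_hda_intel          12345  2
ipv6                  262144  10"
if echo "$sample" | awk 'NR>1 {print $1}' | grep -qx ipv6; then
  echo "ipv6 still loaded"
else
  echo "ipv6 not loaded"
fi
```

If the module still shows up after a reboot, the aliases file was not saved or something else is loading ipv6 explicitly.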
The best solution is to fix the broken part of the system, which is not your distro for having IPv6 capability, nor your ISP for not using it yet. The fault lies with your router for not handling the fallback correctly. Many manufacturers have released firmware updates for their routers, which fix this problem without your having to hack at the operating system setup. Back to the list ****** Control remote security cameras Q:: I have seen security cameras for sale that come with Windows software to detect movement, record video and send alerts via email or SMS. Can these cameras be used with Linux? What sort of software is available to record, are the other services available in Linux, and what type of camera should I use? A:: You'll be pleased to hear that you can do all of this with Linux. There are plenty of programs that will record a camera feed, including that Swiss Army Knife of video, MPlayer, but there are also dedicated programs to provide security features, like motion detection, alerts, handling multiple cameras and so on. The most complete of these is almost certainly ZoneMinder (www.zoneminder.com). There are various types of camera you can use, the most common being a standard security camera that provides composite video output. This connects to the composite input connector found on many TV and frame grabber cards. ZoneMinder works with these and with USB cameras, usually webcams. Any camera supported by the Video4Linux framework will do. You can also use IP cameras that connect directly to your network, either wired or wireless, but these are a lot more expensive. ZoneMinder can handle any number of cameras, but the principles are the same whether you use one or a dozen. Once ZoneMinder is installed, you need to modify the Apache configuration. Open a terminal and run --- sudo ln -s /etc/zm/apache.conf /etc/apache2/sites-enabled/010-zm sudo /etc/init.d/apache2 restart ,,, to include the provided configuration and restart Apache to use it.
This is for Ubuntu 8.04; it may be different for other distros. Now open Firefox and browse to http://localhost/zm and click on Add New Monitor - each monitor is associated with a camera. Give the monitor a name in the General tab. The type should be local for a wired camera or webcam, and set the function to Modect (Motion Detection). Pick the camera's device name and number in the Source tab, usually /dev/video and 0 for the first camera. You can leave the other tabs as they are for now and press Save. If the function and source items show green in the main browser display, your camera is working; click on its name to view its output. Whenever ZoneMinder detects changes between frames from the camera, it begins recording. These 'events' are listed in the window that opens when you click on the camera name, and you can view them as a movie or stills. Of course, you don't want any type of movement to trigger an event - if there's a tree in your camera's field of view, a windy day will fill your hard disk. ZoneMinder supports zones within the field of view that can be prioritised or ignored. A lot more is possible with this software; read the documentation in the ZoneMinder wiki (www.zoneminder.com/wiki.html) to see just how much. Back to the list ****** Encoding DVDs into small files for watching on Asus Eee netbook Q:: I am enjoying my Eee a great deal. One thing I would like to do with my Eee is use it to watch my DVD movies when travelling. I know I could attach an external DVD drive, but this defeats the purpose of an ultra-portable system, so I would prefer to copy my DVDs to one or more SD cards. Do I need to use a 4GB SD card for each DVD or is it possible to reduce them in size? A:: The normal way to do this is to copy the main film from the DVD to a video file, which you can copy on to an SD card and watch with the Eee's video player, SMPlayer. The only disadvantage of this is that you lose any DVD extras like angles, subtitles and alternate audio tracks.
While it is possible to rip a DVD on the Eee, with the addition of an external DVD drive, transcoding video like this uses a lot of CPU horsepower and memory and would take quite a while on the Eee. This task is best done on a more powerful desktop or laptop system. There are a number of programs available for ripping DVDs to video files, one of the best being DVD Rip, which is almost certainly in your distro's repositories. The program presents a lot of options, most of which can be left at the defaults. The main decision for you to make is the format and bitrate of the video file you create. The format decision is fairly cut and dried - go for Xvid, as it gives good-quality, small files without requiring a lot of CPU power for playback. Bitrate determines the size and quality of the file you produce. The lower the bitrate, the smaller the file and the worse the quality. DVD Rip lets you specify the bitrate, and shows you the size of file it will produce, or you can tell it how much space you want to use and let it work out the bitrate for you. While DVD Rip has a convenient point-and-click interface, this becomes less convenient when you want to rip a number of DVDs. This is when the command line comes into its own. Mencoder is part of the MPlayer package, and can encode anything that MPlayer can play, including DVDs. Mencoder has a huge number of options, making its man page one of the longest around. An easier approach is to use AcidRip (http://untrepid.com/acidrip), which lets you choose settings from a GUI and then calls Mencoder. Once you have everything set up as you like in AcidRip, click on the Export button in the Queue panel to generate a shell script, acidrip.sh, that calls Mencoder with the correct options. You can reuse this script for each DVD you wish to rip, just changing the track number and output filename for each one. The track number is given after dvd:// on each call to Mencoder.
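One way to make the exported script reusable is to swap the hard-coded track number and output filename for positional parameters. A sketch of that substitution, applied to a single invented Mencoder line (the options shown are placeholders, not AcidRip's real export; run the same seds over your actual acidrip.sh):

```shell
# Sketch: parameterise a hard-coded line from an exported script.
# The input line is made up for illustration only.
line='mencoder dvd://3 -ovc xvid -oac mp3lame -o film.avi'
# Replace the track number with $1 and the trailing output file with $2
echo "$line" | sed -e 's|dvd://[0-9]*|dvd://$1|' -e 's|-o [^ ]*$|-o $2|'
```

This prints the line with dvd://$1 and -o $2 in place of the fixed values, which is exactly the edit described for acidrip.sh.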
You could even edit the script to take the track number and output filename as arguments. Replace each instance of the track number (once for each pass, and Mencoder works better with two passes) with $1 and the output filename with $2. Then you can run it with --- sh acidrip.sh 1 mydvd.avi ,,, $1 is replaced by the first argument after the script name and $2 by the second, so this rips the first track to mydvd.avi in the current directory. Now you can copy the files to your SD card ready for viewing anywhere. Back to the list ****** Configure mobile email on the Eee netbook Q:: I have an Eee PC 901 Linux version on order and I'm hoping that a mobile broadband account will soon enable me to enjoy a much more comfortable mobile email experience. But how will I configure mobile email on my Eee? I have my bob@bob.net email address which everyone knows, but I don't want to migrate everyone to my new generic bob3876@googlemail account. I want to keep my bob.net account. Why would I want to give up my personal email address for a generic Gmail/Yahoo AOL account? Right now, my mobile phone downloads messages but leaves a copy on the server, so I can pick them up again when I'm back home. I need to keep that facility, but it would be nicer to have the Eee PC with a synchronised copy of all the messages in my email application on my home PC. I can't end up having some messages on the Eee and others on the home PC. Can I use IMAP to achieve this? Equally, when the Eee isn't available, and I'm using web-based email from another PC, can I still see all my emails? Can I have home, web and Eee PC mail clients synchronised? Equally, can I ensure that mails sent from any location appear to the recipient as coming from bob@bob.net? Am I setting my goals too high? A:: IMAP will indeed do what you want.
Unlike POP, which was designed as a method of retrieving mails from your ISP's mailbox to your computer, IMAP is designed to work with mails in the remote mailbox, although it can keep local copies too for when you are offline. With IMAP, the concept of leaving mail on the server no longer applies, because it is left there wherever you read it from. The only time mail is deleted from the IMAP server is when you delete it on a connected computer. Similarly, the concept of having all mail clients synchronised doesn't really exist with IMAP, because all are using the IMAP mailbox as the same data store. The main disadvantage of using IMAP with an ISP mail account is, as mentioned in Andy's article, that you may be limited in the amount of storage you have, plus you have the hassle of changing email addresses when you switch ISPs. You can avoid the latter by using your own domain name, which is cheap enough these days. One way to remove any storage limitations for IMAP is to run your own IMAP server, assuming you have an always-on internet connection at home and your home PC is always on. In brief, you should install Fetchmail and Procmail to pull email from your ISP using POP3, and configure Fetchmail by creating a file called .fetchmailrc: --- set daemon 300 poll mail.myisp.com with proto POP3 user 'myispuser' there with password 'mypass' is 'myuser' here options keep mda '/usr/bin/procmail -d %T' ,,, Configure Procmail to deliver the mail by putting this in ~/.procmailrc: --- MAILDIR=/var/spool/mail DEFAULT=$MAILDIR/$LOGNAME/ ,,, and set Fetchmail to run when your desktop starts, using ~/.kde/Autostart, the Gnome session manager or whatever applies to your desktop. This will download mail from your ISP and store it locally (the keep option leaves mail on the server until this is working properly). Now you need an IMAP server, like Dovecot.
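One practical point worth knowing: Fetchmail checks the permissions on its run control file and will refuse to use one that other users can read, since it contains your mail password. A sketch of creating the file with tight permissions, using a scratch file in place of the real ~/.fetchmailrc:

```shell
# Sketch: write the rc file and lock it down to the owner only.
# A temporary file stands in for ~/.fetchmailrc here; the poll
# details below are the example values from the text.
rc=$(mktemp)
cat > "$rc" <<'EOF'
set daemon 300
poll mail.myisp.com with proto POP3
EOF
chmod 600 "$rc"       # fetchmail rejects a world-readable rc file
stat -c '%a' "$rc"
rm -f "$rc"
```

The stat call is just a quick way to confirm the mode is 600 before pointing fetchmail at the real file.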
Install it and edit /etc/dovecot.conf to change the lines --- #listen= [::] #mail_location = ,,, to --- listen = * mail_location = /var/spool/mail/%u ,,, Set Dovecot to start when you boot, using your distro's services manager, and test it by setting your mail client to make an IMAP connection to localhost. Once it works, you will need some way to connect to your machine from outside, so register a hostname at dyndns.org to get round your ISP's dynamic addressing. Finally, set your router to pass through ports 143 and 993 (143 is plain IMAP, 993 is secure IMAP). Back to the list ****** How to format a Maxtor SATA hard drive in Linux Q:: I am presently building my own computer and have installed Debian. Whatever I have now done, I can no longer log in. Could you please tell me how to format the Maxtor SATA hard drive? I wish to start again and install two partitions - one for Vista and one for Linux. I have tried using command instructions but these seem very cryptic. I wish to fully reformat and reinstall both operating systems. I have tried different shell commands but with no success. A:: Both Vista and Ubuntu have options to completely reformat the hard drive before installation. You should install Vista (or any Windows variant) first, because the Ubuntu installer will see the Windows system and add a suitable boot menu, whereas Vista will simply try to overwrite any other OS it finds. You could install Windows and take the option to use the whole disk. Then run the Ubuntu installer and let its guided partitioner shrink your Windows partition to create those needed for Linux. However, this involves an unnecessary resizing of the Windows filesystem. A better method is to boot from the Ubuntu install disc using the 'Try Ubuntu Without Making Any Change To Your Computer' option, and run System > Administration > Partition Editor and delete all your partitions, then create one partition at the start of the disk for Windows.
Make it whatever size you want to give to Windows and leave the rest of the drive empty - do not try to create the Linux partitions. Click on Apply and wait for the partition editor to do its stuff. Now reboot, swap discs and install Windows into the partition you created, which Windows should see as C:. Once the Windows installation is complete and working, reboot from the Ubuntu disk, choose the 'Try Ubuntu Without Making Any Change To Your Computer' option again and then run Install from its desktop icon. Let the partitioner install into the unused space you left, which it should do by default, although you may want to tell it to create a separate partition for /home. When the Ubuntu installer finishes, you should be able to reboot and see a menu offering you a choice between Windows and Linux. While all of this is possible using the command line, and faster when you know what you're doing, all the GUI tools you need to do it are on the installation disc, so there should be no need to learn the commands you find so cryptic. They aren't really cryptic, just unfamiliar if you haven't used them before, but that's why we have friendly GUIs to do the same job. Back to the list ****** The relationship between the MTA, Sendmail, Procmail etc. Q:: I am just trying to work out the whole Linux mail server scenario. I understand that the mail transfer agent (MTA) is a program such as Sendmail or Postfix that actually does the SMTP sending, receiving and so on. I also understand the obvious mail user agent examples such as Evolution. I am trying to understand exactly what a mail delivery agent like Procmail does. Do I need to have one installed on my system? A:: When someone sends a mail to your server that is addressed to you, the MTA handles accepting the connection from the remote server and receiving the mail. It also handles the opposite transaction: when you send a mail through it, it finds the next server in the chain and passes the mail to that. 
Once the MTA has received the mail, it has to deliver it, usually into your mailbox. Most MTAs can do this themselves, especially if it is a simple case of adding the mail to a local user's mailbox, but it is more common to use a separate mail delivery agent (MDA) to take care of this. So an MDA is not absolutely necessary, but it is usually desirable, for a number of reasons. An MDA like Procmail adds a lot more options - mails can be processed before delivery, for example, to strip annoyingly huge mailing-list footers from them, or filtered into separate mailboxes. Procmail can also handle things like vacation messages (do those after you've filtered for mailing lists, or you will get yourself kicked off a few lists when you go on holiday). All this can be done before the mail is delivered to the user. Another reason for using something like Procmail is that, in addition to any global rules in /etc/procmailrc, each user can set up processing or filtering rules in .procmailrc in their home directory. On a multi-user system, this gives each user individual control over their mail delivery. If they want copies of their mail forwarded to their Gmail account while they are away, a simple rule will cover it. It is also possible to have the MTA pass the mail to a program other than an MDA, like a virus scanner or spam filter. These then pass the message to the MDA for delivery (or not) after doing their job. So the mail could go from MTA to spam filter to virus scanner to MDA and then into the user's mailbox. Collecting or reading your mail via POP3 or IMAP requires another program, which reads the mail from the mailbox where it was placed by the MDA. Back to the list ****** Ubuntu, Dell Vostro 1500, Snapscan E50 scanner: 'invalid argument' from SANE Q:: I have installed Ubuntu 8.04 on my Dell Vostro 1500. 
With my old Snapscan E50 scanner connected, XSane can see my scanner but shows the following: --- Error: "Failed to open device 'snapscan: libusb:003:006': Invalid argument." ,,, But if I plug the scanner into my PC running Windows XP for a moment and then plug it back into my Dell, XSane kicks in and works beautifully. I can reboot the laptop, or turn it off and on again, with no problem. However, if I switch off the power to the scanner, the problem appears again. I have to plug the USB cable into the Windows computer and back into the Dell to be able to use the scanner on the laptop. What is missing from my Ubuntu machine that it can't start the scanner? A:: When a device works only after having been used in Windows, you can be sure it requires a firmware upload. Many devices, particularly communication devices, do not have all of their operating code built in. Some - maybe most - of the code is contained in a firmware file that is uploaded by the driver when the device is first used, and the device itself may contain only enough code to permit it to receive and use the firmware upload. The firmware then stays in place until the device is switched off, so when you switch from Windows to Linux without power cycling the scanner, it will still work. Why do manufacturers do it this way? Because it is cheaper and more convenient for them. They can have different firmware versions to comply with the laws of various countries without having to produce specific versions of the hardware. This is why it is particularly commonplace on wireless devices, because countries have different restrictions on power, channels and so on. It also makes updates easy, as they only have to replace the driver. This whole process is transparent to Windows users as it is all handled by the driver, and a driver update automatically installs the latest firmware. The process is almost as automatic on Linux - you just have to know about the need for a firmware file in the first place.
The details are on http://snapscan.sourceforge.net but all you need is the file called snape50.bin, which will be installed on your Windows partition. Copy this to somewhere on your Linux setup (/lib/firmware is the standard location for such files), and edit /etc/sane.d/snapscan.conf to include the line --- firmware /lib/firmware/snape50.bin ,,, You'll need to use sudo to do this as root and the path you put in snapscan.conf must match the location of the file. Now that Sane's snapscan driver knows about the firmware file, it will upload it for you, removing another dependency on Windows. If you ever install an updated driver on Windows, you may want to copy the new firmware file over, although if the old one works for you, why mess with it? Back to the list ****** Get D-Link router working with OpenSUSE 10.3 Q:: How do I get my D-Link router (DSL-2640B) and D-Link adaptor (DWL-111) to work with Linux (OpenSUSE 10.3). I don't know if the attached Windows info helps. Thing is, if I change these settings, it may not function under Windows anymore. Also, is it the router or the adaptor that needs to be compatible with Linux? A:: The router connects to the internet, so the settings for that should not be touched, and it does so regardless of any operating system your computer may be running. The adaptor connects the computer to the router and needs a suitable driver to work with Linux. Unfortunately, model numbers are not enough to determine which is the correct driver for your particular hardware, as manufacturers have a habit of changing the chips used inside a device without altering the model number. I was caught out with a D-Link USB adaptor like this. I bought one because it was supported by the Linux prism drivers, only to find that they had just switched to an unsupported chipset. There is a way out if it turns out that there's no Linux driver for your chip, but it's possible that there is a native solution. 
Your card is quite likely to be based on the Ralink RT73 chipset. OpenSUSE 10.3 includes a driver for this but does not install it by default. Go to the Software Management section of Yast and install rt2x00-kmp-default. Now plug in your network adaptor and go into the Hardware Information section of Yast. If your device is now listed, you can go to the Network Card section and set it up. If it isn't listed, unplug the adaptor, open a terminal and run --- su tail -f /var/log/messages ,,, Plug in the adaptor and watch the output in the terminal. You should see some details whizz past (press Ctrl+C to stop the output). Run --- modprobe rt73usb ,,, and if you get no feedback, you have the right driver. Go into the Network Card section of Yast and set up your network connection. You may need to specify the driver module (rt73usb) for the device here. If all of the above fails, there is an option to use the Windows drivers for the adaptor, using a package called NdisWrapper. This emulates the Windows network driver interface, so that Windows drivers can be used in Linux; install it with Yast. Dig out the driver CD that came with the device and locate the driver .inf file, then in the terminal window you should still have open, run --- ndiswrapper -i /path/to/driver.inf modprobe ndiswrapper ,,, The first command should install the driver where NdisWrapper can find it; the second loads the NdisWrapper module and should give no output if all is well. Now create the interface in Yast using the Add button, set the Device Type to Wireless and the Module Name to NdisWrapper, leaving everything else on the default settings, and follow the usual prompts to set it up. Back to the list ****** How to install Acrobat Reader on Linux Q:: I've just installed Mandrake 10.1 CE and have also just downloaded Adobe Reader for Linux from the Adobe website for a bit of practice at installing something (I'm a newbie).
I opened a terminal window and logged in as root (using su) and unzipped the file, etc. I then ran the install file (./INSTALL), and selected the default installation directory (/usr/local/Acrobat5). The installer began to copy the files but I was then hit with the following error: --- Installing platform dependent files ... Done ./INSTALL: line 219: ed: command not found ERROR installing /usr/local/Acrobat5/bin/acroread ,,, I have repeated the procedure a number of times, and have downloaded the software from the Adobe site on several occasions. Can you shed some light on this one? Thanks. A:: It looks like you're missing the 'ed' utility, which will be available on the installation media or via the FTP services provided by Mandrake. Of course, one could also use 'xpdf' or OpenOffice.org to view PDFs without having to resort to Acrobat. Back to the list ****** Installing an ISO over USB Q:: I wish to use Catux-USB. My problem is how can I install the ISO image on to a USB pen drive? The CD writing software I'm using (Nero) only gives me the option to install to the CD-W drive and not to the pen drive. I have access to another version of Linux (Morphix) which I could use. A:: The ISO image inside the Catux distribution is for Catux itself to use. You don't need to burn the software to a disc, just copy all the files in the extracted 'catuxusb-0.1a-128' directory on to your USB drive. Back to the list ****** Partition planning Q:: I am a beginner with Linux. I've divided the hard drive on my notebook into five partitions: two 50GB ext3 partitions (for / and /home), two NTFS partitions of 50GB and 30GB each and 1GB for /boot. I want to know how I can reconfigure my distro (I'm running Ubuntu 8.04) to stop using the home partition on hda3 and to move it to hda1 (/root) without re-installing. I noticed that on the /root partition there's also a /home folder. Are these files duplicates of the users' profile in the /home partition (hda3 partition)? 
I have another question: is there a script to install the Realtek RTL8187B wireless card under Linux, in general? I googled around and found out that I need to use Windows driver files during installation. Do you know any native Linux driver for this wireless card? A:: The home directory on the root partition is where your separate home partition is mounted. Unlike Windows, where each drive or filesystem appears as an independent drive letter, Linux mounts all devices within the original filesystem. The home directory on your root partition is empty as its only function is to provide a means of accessing the files on the home partition. So the files are not duplicates; they are the same files. You can copy the files from the separate partition to the root partition, but this is a bad idea. We get a lot more mails from people wanting to know how to perform the opposite manoeuvre. By keeping your home directory separate, you can reinstall or upgrade the operating system at a later date without affecting your personal data and settings. If you do want to make this move, boot from a Live CD or DVD distro, such as Knoppix. This will mount your partitions separately, as /media/hda1 and /media/hda3. You can now move the files from /media/hda3 to /media/hda1/home (which you will see is indeed empty) but you should think very carefully about the consequences of such a move - almost no one will recommend this as a good idea. What you could do is alter your partitioning setup, as you're wasting a huge amount of space at the moment. This can be a complex task, best done by running QtParted from the Knoppix live disc or GParted from the Ubuntu Live disc. Your root partition needs no more than 10GB, and most installs will use less than half of that. 50MB is ample for /boot, as it only needs to contain a couple of kernels and the bootloader (my /boot currently contains less than 12MB).
The real space hogs are /home, which contains all your personal data (which is why you don't want to tie it into the operating system), and your Windows partition, because of all the user files it holds. The NdisWrapper driver does require a Windows driver file to be copied over, but the Linux kernel includes a driver for this now. To avoid a conflict between the two, go into the Synaptic package manager and make sure that no NdisWrapper packages are installed. Your card should then be detected and appear in your network settings when you reboot. Back to the list ****** When NFS server is rebooted, clients need to be rebooted too Q:: I have a remote drive (rw) on a server connected by NFS which the client computer mounts at boot time using fstab. If the server computer is rebooted this seems to require any clients to be rebooted and the server disk remounted. On my Amiga I can do this by issuing a diskchange command on the client. Is there a way to do this on Linux without having to reboot the client or manually unmount and remount, or can I do this automatically? A:: Thankfully, it's a simple task to remount a device in Linux: just use --- mount /mount/point -o remount ,,, However, this should not be necessary. NFS is capable of restoring a mount when the server comes back up, and should do this by default, but it can have problems when the reboot is lengthy (ie, around five minutes or more). Does the server have a static IP address? If it uses DHCP, it is possible that it gets a different address after a reboot. It is more likely to be an error with the original mounting or exporting of the filesystems. What do the system logs on the server and client machines show? These usually contain valuable clues. Check the export options on the server with --- exportfs -v ,,, particularly the sync or async setting. It also helps to have either the soft or intr option in the /etc/fstab entry on the individual client computers.
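As an illustration, such a client-side fstab entry might look like the line below; the server name, export path and mount point are invented examples, and the sketch writes to a scratch file rather than the real /etc/fstab:

```shell
# Sketch: an example NFS client fstab line using the hard,intr
# options. 'server', /export and /mnt/nfs are placeholders.
fstab=$(mktemp)
echo 'server:/export  /mnt/nfs  nfs  rw,hard,intr  0 0' > "$fstab"
# the fourth field is where the mount options go
awk '{print $4}' "$fstab"
rm -f "$fstab"
```

After editing the real /etc/fstab, remounting the share (or rebooting the client) picks up the new options.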
The exportfs man page lists the various options for the server, and the NFS man page shows the various mount options. The intr option is often preferable to soft as it is less likely to result in data loss if the server reboots. The alternative option, which is the default when none is given, is hard. These all determine how the client handles a lack of response from the server. As soft can sometimes cause data loss on the server, it is best used when exports are mounted read-only - intr is the best general-purpose setting, as it allows the client to respond to signals instead of just locking things up when the server goes away. It is really a matter of trying the various options, and seeing which works best for you, but intr is my preferred way of avoiding timeout problems. Back to the list ****** Open source streaming media server to access videos on a NAS Q:: I manage a Linux-based (PCLinuxOS/Karoshi) infrastructure in a school. I want to add a streaming media server with a web-based interface so that staff can easily access videos stored on a NAS. Can you suggest any open source solutions/projects which might help me do this? A:: MediaTomb, from www.mediatomb.cc, may fit your needs. It's available for several distros, with unofficial RPMs for PCLinuxOS available from http://hack.mypclinuxos.com. Once it's installed, the first decision you have to take is how you want MediaTomb to store its data. The first time you start MediaTomb, it will create a default config file at ~/.mediatomb/config.xml. Edit this to set either sqlite3_enabled or mysql_enabled to 'yes' or 'no'. Some distros use a global configuration file in /etc/mediatomb/config.xml. If you choose SQLite, MediaTomb will create the database file the first time you run it, but a MySQL server would probably be better for a larger collection.
To use MySQL, you need to create a database called mediatomb and populate it by running --- mysql -p mediatomb </usr/share/mediatomb/mysql.sql ,,, Then create a user with a password and read/write privileges for the mediatomb database. You can do this using the MySQL command line client if you are familiar with it, or take the easy way out and use PHPMyAdmin. Edit the config file to set the correct username and password. Start MediaTomb and load http://localhost:49152 into Firefox (it doesn't work with Konqueror). If you are administering this from another PC on the network, use the address of the server instead. You'll see a very bare window, so click the + button to start adding content. When adding a video file, set the type to Item, the title to the name you want shown and Location to the absolute path to the file. The MIME type setting is important here; one way to determine the correct value is to use the file command on the video file --- file -i /path/to/video.avi ,,, You can add directories to the server using the filesystem view, and edit the properties of these directories in the database view to enable autoscanning. This means you can have one or more directories that will automatically make available any content dropped into them. There are several UPnP (universal plug and play) programs that can browse and play the contents of your server, but for Linux computers, you can also make the content available to any program. Install DJmount from http://djmount.sf.net (you will need the Fuse filesystem in place to use this). Create a suitable mount point, say /mnt/media, and mount it with --- djmount /mnt/media ,,, The /mnt/media directory will now contain a MediaTomb directory, plus one for any other UPnP server on the network, with subdirectories for each type of content, such as /mnt/media/MediaTomb/Video.
Back to the list ****** Blind Linux installation - Orca screen reader Q:: I am considering moving to Linux from Windows, but I must have a screen reader because I have recently lost my sight. I have listened to a podcast by a blind user installing Ubuntu 8.04 using the Orca screen reader, which sounded pretty cool. Will Orca read out all screen text with all Linux programmes, and is Ubuntu 8.04 the best distro for this, bearing in mind that blind users cannot read the screen or use the mouse, and that screenreaders cannot read out graphics or icons with synthetic speech? Commercial screenreaders cost around £500 and work only with Microsoft programs, so I am faced with an upgrade cost of £250 if I want to change from Outlook Express 6 to OE7. Which would be the most suitable distro, and where can I obtain a disk to install on my PC? A:: Orca appears to do what you need. While it doesn't work with all programs, it does work with any that support the Assistive Technology Service Provider Interface (AT-SPI). This includes the Gnome desktop and many of the applications that run on it. All the common tasks are catered for with at least one program: office suites, web browsing, email, accounting and even a terminal, so this should fulfil your needs. Orca doesn't work well with KDE, so you'll need to use the Gnome desktop (though there are proposals to change the AT-SPI to make it possible to work with KDE). This is the default desktop for Ubuntu, making it a good platform for such a setup. You can download Ubuntu from www.ubuntu.com. If you don't have broadband, you can also request a CD from the Ubuntu website, free of charge. The Orca screen reader is included in the standard Ubuntu installation. Go to the System menu and select Preferences > Assistive Technologies. Here you can set the screen reader and/or magnifier to run at startup. You also have to deal with the login screen. There are two ways to do this, both set in System > Administration > Login Window. 
One is to enable automatic logins for your user under the Security tab. You should be aware that, although more convenient, this does make your system somewhat less secure, as anyone turning on the computer is automatically logged in as you. The other option is to enable Orca for the login screen by ticking the 'Enable accessible login' box under the Accessibility tab. You can find more information on Ubuntu's accessibility features at www.ubuntu.com/products/whatisubuntu/accessibility, and details on Orca, including a list of supported programs, at http://live.gnome.org/Orca. Back to the list ****** Get wireless networking working in Fedora 9 Q:: I'm a complete newbie to Linux, and have recently installed Fedora 9. I've been looking forward to getting started, but now find myself getting a bit stuck. I can't figure out how to get my wireless network running. I gather that only a handful of adaptors will work, chipsets vary, and even when you find one that supports Linux, driver installation seems to be quite an involved procedure using code. (My adaptor's a Netgear WG111v2 USB stick, but I'd like to use a PCI card.) I'm terrified by the amount of code, and the languages that seem to be required to use Linux effectively. I just can't understand what's written most of the time, and I'm far from computer illiterate. How do you get a wireless network adaptor to work with Fedora 9? Second, and I guess most importantly, can you suggest any resources/books that could lower me into the Linux pond gently? I want to escape Windows, but find myself almost creeping back to the familiar environs of XP because Linux, with its heavy jargon, myriad variants and bizarre-looking code looks so inaccessible. I'm raring to get stuck in, and would love to learn as much as I can, but don't quite know where to begin; any suggestions would be gratefully received. 
A:: Wireless chipsets can be a problem, and the situation is worsened by manufacturers changing the chipset in a product without changing the model name or number. Almost all wireless chipsets are supported in one way or another nowadays, although the effort required varies from one to the next. As you plan to buy a new PCI card anyway, I would suggest you buy it from Linux Emporium (www.linuxemporium.co.uk), which provides hardware that is known to work with Linux and will give support for most distros. It also provides Linux drivers for those distros that don't have support built in (usually older distros, as wireless support in Linux has improved a lot recently). You don't need to know any programming languages to use Linux, but a willingness to type commands in a shell from time to time does help. The various distros have gone a long way towards providing system tools that avoid the need for the command line, but sometimes it's quicker and easier, and not as scary as it first seems. Linux's jargon is not so much heavy as unfamiliar - try explaining Windows to someone who has never used it before. There are plenty of resources for Linux users old and new. Man pages document the various commands; you access them by typing man commandname in a shell or, if you use the KDE desktop, by pressing Alt+F2 and typing man:commandname to open an HTML version of the man page in the web browser. There are various online resources you can use such as Rute (http://rute.2038bug.com/index.html.gz), which provides a useful introduction to working with Linux, and the Linux Documentation Project (http://tldp.org), which contains a collection of HOWTO guides, FAQs, man pages and some longer guides, such as 'Introduction to Linux - A Hands-on Guide'. 
If you prefer a proper book, there are plenty of introductory works for the various distros, with Fedora options including Beginning Fedora from Apress and Fedora Unleashed, co-written by our esteemed editor (both of these are available from Amazon - other book retailers are available). It's also worth revisiting previous issues as your knowledge increases: articles that went right over your head on first reading will yield useful information after just a few months of learning. Back to the list ****** Ubuntu 8.04 upgrade: Zenity no longer displaying messages Q:: In Ubuntu 7.10 I was doing automatic data backups using crontab to run Bash scripts containing Zenity messages. Since I upgraded to Ubuntu 8.04 the messages no longer display, though data backup still takes place normally. I have tried many suggestions from the internet for alternative code to display messages from crontab without success. I have tested the procedures in Linux Mint 5.0 Light and Ubuntu 7.10 on other machines and they work OK, so I suppose that this is a problem with the kernel. The hardware on my machine has not changed since these procedures were working in Ubuntu 7.10. I have been waiting in hope that a patch would arrive with the many updates to Ubuntu 8.04 but no luck so far! The command line in crontab is --- 16 10 * * * export DISPLAY=:0.0 && /home/bruceadmin/Scripts/dayback.sh > /dev/null 2>&1 ,,, and export | less at a terminal shows that :0.0 is correct for my DISPLAY value. A:: The first step is to remove or change the output redirection so that you are not dumping the error message your command is probably producing. Removing it will have the error emailed to you, or you could direct it to a file. If the backup script produces lots of output, you should consider redirecting stdout and stderr to different files. I suspect your error message may contain "cannot connect to X server :0.0", so you may also see rejection messages in /var/log/Xorg.0.log. 
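If you'd rather not rely on cron's email, a small logging wrapper will capture the environment and any errors for later inspection. This is a sketch; the log path is arbitrary and the call to the real backup script is left commented out:

```shell
#!/bin/sh
# Hypothetical debugging wrapper for the cron job: append a timestamp,
# the DISPLAY value and any errors from the script to a log file.
LOG="$HOME/cron-debug.log"
{
    date
    echo "DISPLAY=${DISPLAY:-unset}"
    # /home/bruceadmin/Scripts/dayback.sh   # the real backup script
} >>"$LOG" 2>&1
```

Point the crontab entry at this wrapper temporarily, and the log will tell you whether DISPLAY is actually set when cron runs the job.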
Are you running this command from your user's crontab or /etc/crontab? If the latter, you need to run --- xhost local: ,,, to allow other users (/etc/crontab entries are run as root) access to your display. Alternatively, run the script as your user and use sudo within the script for any commands that need root access. Another possibility, which would affect your user's crontab as well as the system one, is your use of export to set DISPLAY separately from running the command. Export is only needed when the command is run separately from the setting of the environment variable, and there is no need for that here. Instead use --- 16 10 * * * DISPLAY=":0.0" /home/bruceadmin/Scripts/dayback.sh &>/home/user/cron.log ,,, which works perfectly on our Hardy test rig. When you're happy, change the logfile to /dev/null, but until then it can be useful for debugging. Back to the list ****** USB thumb drive not showing up after OpenSUSE 11 upgrade Q:: I have just installed the latest OpenSUSE 11. I kept all my backup files on a thumb drive (/media/disk) before the installation, which completed without any hitches. When I opened the Konsole terminal and tried to change directory by keying cd /media/disk I got this error message: --- bash: cd: /media/disk: no such file or directory ,,, The same thing happens when I try to get to my thumb drive using Konqueror. But I noticed that Dolphin can recognise the thumb drive as 'Volume (vfat)'. It is only after I have clicked on the Volume (vfat) icon that I am able to do cd /media/disk using the terminal. The same goes for the Konqueror browser. A:: This is normal. It means that your system is set to not automount removable devices when they are connected. The automount system creates the appropriate directory in /media just before mounting the device, and removes it when the device is unmounted - that's why /media/disk does not exist. When you open it in Dolphin, the device is mounted, which is why you can then access it in a terminal. 
KDE should pop up a notification when a new removable device is connected, with an option to open it in Dolphin. KDE 3 enables you to set an action to be performed automatically when a device is connected (set in the Peripherals > Storage Media section of the KDE Control Centre), though KDE 4 unfortunately doesn't at the time of writing. However, it isn't a major hassle to open the device in Dolphin before trying to access it from the shell, now that you understand what's going on. Back to the list ****** Fedora 9: DNS problems for mobile broadband E220 connection Q:: I'm using an E220 USB HSDPA modem under Fedora 9, all working with the new Network Manager 0.7.x. I have a little wireless connection showing up on the task bar and got an IP address. When I right-click on the icon and go to Connection Settings, it keeps showing up 4.2.2.3 and 4.2.2.4 for the DNS settings even when I go to /etc/resolv.conf to change them to use the OpenDNS setting. I don't use an Ethernet cable or wireless with this laptop, only the USB modem. All the websites I've seen have told me to change it in Network Manager and the /etc/resolv.conf file, but when I do, it goes back to 4.2.2.3 and 4.2.2.4. How can I change the DNS settings for the PPP connection (mobile broadband) and get them to stick? A:: Fedora is obtaining the DNS addresses from your ISP as part of the DHCP negotiation (see the Frequently Asked Questions box on page 110 for more details on DHCP), which also gives your IP address, default route and some other bits and pieces. It is possible to change this by running the network configuration tool at System > Administration > Network, selecting your modem and pressing the Edit button. Leave the 'Automatically obtain IP address settings with:' setting at DHCP and untick the box to 'Automatically obtain DNS information from provider'. Press OK, then go to the DNS tab and specify your addresses manually. Don't try to edit /etc/resolv. 
conf, as this is generated each time you bring up a network interface using the settings you just changed. Some other distros allow you to specify the options used when requesting information from the ISP with either dhcpcd or pump, two common DHCP programs. In such cases, the option to be passed to dhcpcd is -R, which tells it not to rewrite /etc/resolv.conf; pump uses --no-resolvconf to do the same thing. Back to the list ****** HardInfo installation: 'error while loading shared libraries' Q:: I installed HardInfo (the system profiler and benchmark app) on to my Ubuntu 8.04 system. When I run it from the command line I get this error: --- hardinfo: error while loading shared libraries: libsoup-2.2.so.8: cannot open shared object file: No such file or directory ,,, Yet locate libsoup shows --- /usr/lib/libsoup-2.4.so.1 /usr/lib/libsoup-2.4.so.1.1.0 /usr/share/doc/libsoup2.4-1 /usr/share/doc/libsoup2.4-1/AUTHORS /usr/share/doc/libsoup2.4-1/NEWS.gz /usr/share/doc/libsoup2.4-1/README /usr/share/doc/libsoup2.4-1/changelog.Debian.gz /usr/share/doc/libsoup2.4-1/copyright ,,, A:: HardInfo requires version 2.2 of libsoup, and it looks like you have the later 2.4 installed. HardInfo is included in the Ubuntu repositories and can be installed directly from Synaptic. When you try to do this, it picks up on the dependency on libsoup2.2 and installs that too. So the solution to your problem is to either install HardInfo with Synaptic, or at least install libsoup2.2 from there. Libsoup 2.4 is not simply a newer version of the library: it's a different version that is not backward-compatible with 2.2. This keeps the code clean and compact because it is not burdened with legacy code and functions, but it means it will not work with programs that were written specifically for libsoup 2.2. This is not a problem as long as you are aware of the situation, because you can have both versions installed and programs can then use the version they need. 
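When a program complains about shared libraries, ldd shows exactly what it is looking for. Here /bin/ls stands in for the hardinfo binary, whose install path may vary:

```shell
# ldd lists the shared libraries a dynamically linked binary needs;
# any that cannot be found are reported as "not found".
ldd /bin/ls | head -n 5
# For the failing program you would run something like:
# ldd /usr/bin/hardinfo | grep "not found"
```

A "not found" line tells you which package to install, without any guesswork.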
Back to the list ****** OpenSUSE and SmartAX modem: light turning off in Windows Q:: I installed OpenSUSE on to a PC that had been fine with Windows, including browsing the web. With OpenSUSE it was also able to browse the web, but on reverting to Windows, there was no LAN light on the modem-router (a TalkTalk-supplied Huawei SmartAX MT882). I uninstalled the Windows driver and re-installed, with no improvement. Multiple complete shut-downs and re-starts made no impact. Reversion to the previous day's restore point made no impact. Restarting OpenSUSE again made the LAN light come up and browsing was OK, but this didn't help the situation in Windows. Deeply disenchanted, I switched the power off at the wall and went to bed. This morning I fired up the PC in Windows, and the LAN light is back, with all seeming OK. Can you advise what was wrong and what may have changed to resolve it? I have no idea if the issue lies in the PC or the modem-router but I hate 'miracle cures'. A:: The problem lies in the PC, and more specifically in the way that Windows initialises the network controller. When you start the computer from cold, the device is in its default state and Windows has no trouble initialising it. When you reboot Windows, things still work, because the driver expects the device to be in the state it left it in before the boot. The problem arises when you reboot from Linux into Windows. Windows sees this as a warm boot, but the card has not been initialised in the way it expects, because the Linux driver was handling it last. These problems are usually caused by drivers uploading specific firmware to the card - it is possible that the Windows and Linux drivers use different firmware versions, or simply that the Linux driver leaves registers set in a way that confuses the Windows driver. Either way, the simplest solution is to shut down the computer instead of rebooting when you want to switch from Linux to Windows. 
A simple Shutdown from the desktop menu followed by a press of the power button may suffice for this. However, as the motherboard still receives some power after a shutdown, you may find you have to switch off the power at the wall socket or the switch on the back of the PSU for a few seconds to ensure a fully cold restart. This would explain it working again when you switched on the next morning. Back to the list ****** Letting users edit pages on a CMS Q:: I run a backpackers website and I would like my users to be able to post their own news and photo albums for their family or friends without the need for webmaster intervention and without interfering with each other. What I have in mind is providing a zone/webpage system. I use PostNuke but I would like my (limited) number of users to have the possibility of creating their own homepage, visible to others or for their private use. Unfortunately, I have no idea what such a system is called... not a CMS (content management system) I suppose? A:: The system you're looking for is known as a wiki, and it allows users to log in and dynamically change the pages available to them with ease. There are wikis written in almost every language imaginable, and even a few no one has heard of, so there is quite a choice. Taking a look at sourceforge.net and freshmeat.net will give you a list of popular wiki packages, as well as comments from users as to which work best for them. Each wiki will have a variety of capabilities, including authentication, privileges and so forth, and installing a few and trying them out is usually the best way to make sure everything works and that they do all you need them to. Back to the list ****** What is the best filesystem? Q:: I would like to get rid of Windows, completely! But it seems that, as far as filesystems are concerned, we are always obliged to pass through either FAT, FAT32 or NTFS. What should we use for USB keys? They are always FAT formatted. 
I've tried to format a shared partition as ext3 on my PC where I have different distros, but it turns out to be a read-only partition, while NTFS-3G allows read-write without any further operation. At work I have a MacBook with Leopard and Ubuntu, but it seems that to share a partition I have to use NTFS. OS X can't read ext3 and Linux can't write to HFS+, while both can read and write NTFS. Can you suggest which filesystem I should use when using different OSes and Linux distros? Are we really obliged to always use the Microsoft filesystems? A:: The reason USB keys and flash memory cards are formatted with FAT as standard is that everything can read and write FAT. Manufacturers of these items are less concerned with using the most effective filesystem than with making sure it works for everyone. Having said that, you can use almost any filesystem you like on a flash memory device. You should steer clear of journalled filesystems, like ext3, ReiserFS, XFS and NTFS, because the journal can severely reduce the life of the device. Flash memory can only accept a limited number of writes to any one location (most manufacturers quote 100,000). A block that is written to every time anything on the filesystem is changed will be subject to much heavier wear and will fail far sooner than the rest of the drive. HFS+ filesystems are supported in Linux, although journalled HFS+ volumes are mounted read-only, which is a good thing for the above reasons but is also a reason not to use HFS+ on memory sticks on a Mac. Ext2 is a good choice as it is fast, reliable and non-journalled. It is also supported on other operating systems, but not by default. There is an ext2 driver for Mac OS X available from http://sourceforge.net/projects/ext2fsx and one for Windows from www.fs-driver.org. Is your ext3 filesystem actually read-only, or simply not writeable by your user? The output from mount will show this - it will contain 'ro' if mounted read-only. 
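One way to separate the two cases is a small helper that checks the mount options and then your own write permission. This is a sketch; /tmp is used here as a stand-in for the stick's mount point:

```shell
# Report whether a mount point is read-only at the filesystem level
# or merely not writable by the current user.
check_writable() {
    dir="$1"
    if mount | grep " $dir " | grep -q "[(,]ro[,)]"; then
        echo "$dir is mounted read-only"
    elif [ -w "$dir" ]; then
        echo "$dir is writable by $(id -un)"
    else
        echo "$dir is mounted read-write but not writable by you (check ownership)"
    fi
}
check_writable /tmp
```

Run it against the stick's mount point and the message tells you whether to remount or to fix ownership.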
Otherwise, it is likely that, as the filesystem was created by the root user, it is owned by, and only writeable by, root. To fix this, mount the stick and run --- sudo chown -R youruser: /media/usbstick ,,, So you see, you have a great deal of choice of filesystems, some better than others. For ubiquitous support, FAT is the popular choice, but watch out for the 4GB file size limit if you use a large device. If you are only going to use the device on systems you know will have a suitable driver installed, ext2 is a good alternative. Back to the list ****** Download Flash video SWF files in Ubuntu Linux Q:: I'm trying to download streaming Flash video SWF files to a folder for later viewing. I am using Ubuntu 7.10 on a Compaq Presario SR1720NX. Is there some way to do this? When I drag the icon to a folder all I get is the link to the download. A:: The first thing to do is check the link in the file you did manage to download. Paste that link into a browser, or download it with a command line downloader like Wget: --- wget [link] ,,, If the link is an mms:// URL, you can usually play it directly with MPlayer: mplayer "mms://blah....." You may need to enclose the full link in quote marks, because links often contain characters like ? that the shell tries to interpret. Enclosing the URL in quotes prevents this and ensures the URL is passed to the program unmolested. If you are trying to play YouTube videos there are a number of scripts available that will determine the correct URL and download the video. One such script is Youtube-dl from www.arrakis.es/~rggi3/youtube-dl. Call it with the URL of the YouTube page containing the video, using quotes as above, to download the video to the current directory. Then play it with your favourite player. If you use KDE and the Konqueror web browser, there is a servicemenu available from www.kde-apps.org/content/show.php/Get+YouTube+Video+(improved)?content=41456. 
Once it's installed, you can right-click on any YouTube page or embedded video and download or play it. Back to the list ****** Convert Kaffeine saved files to DVD Q:: I record TV programmes using Kaffeine. It saves them in files with a file type of M2T. I want to convert them to a format that I can burn to DVD using K3b so that I can play them back on my DVD player to my TV. Can you recommend any combination of routines that will do the job? A:: Kaffeine uses the .m2t extension for MPEG2-TS (Transport Stream) files. These are programs transmitted in MPEG2 format on a DVB (Digital Video Broadcast) channel, such as Freeview. DVDs use MPEG2-PS (Program Stream), which is a variation on the same format. The main difference is that TS files contain extra, redundant information that ensures the stream is still playable in the event that not all the data gets through. As a result they are somewhat larger than a file of the same duration and bitrate extracted from a DVD. One of the simplest apps for converting these files, or almost any other type of video file, into a DVD is the Tovid package from http://tovid.wikia.com. As with most such programs, it can take a while if each file has to be re-encoded, but it does the job with a minimum of fuss. Tovid has a graphical front-end, where you can build a menu structure, add videos, set background images and music and many of the other frills that come with a DVD. It also comes with a handy script, todisc, that does everything from the command line, requiring just a list of files to encode. If all you want is an easy way to view your computer-recorded files on the family TV and you are not bothered about fancy menus, todisc is the simplest option. This is another example of how quickly things can be done in the shell, although the actual task of creating the DVD can still take quite a while, even on a fast dual-core system with lots of memory. 
While todisc is also capable of some fairly complex layouts, it comes into its own when you just want to put a couple of videos on a DVD. Todisc will take a list of video files and convert them to the correct format, generate menus and the DVD structure and write the whole thing to an ISO file ready for burning to a DVD, like this --- todisc -pal -files video1.mpg video2.m2t video3.avi -titles "First video" "Second video" "Third video" -out mydvd ,,, The number of files and titles must be the same, and you should quote the titles if they contain spaces. Run this program, wait a while and you'll find a DVD image ready for burning to disc. There are a couple of alternative programs. Q DVD-Author is a graphical front-end to dvdauthor, the program that generates a full DVD file structure from the component video and menu files, which is also used by Tovid. You could also consider MythTV. While it is more complex to set up, it provides a great deal more flexibility, allows you to schedule programme recording and has a plugin to burn recorded programmes to DVD that takes care of all the work. Just select the programmes by title and make a cup of tea while it is converting and burning the videos. 
Can I pre-partition a drive into three partitions - call them X, Y and Z - and then run two installs, asking one to install 32-bit Ubuntu on X, requiring it to establish partitions / and /home on X, using Z as swap; then repeat this on Y for 64-bit Ubuntu, including Z as swap again? If I can, I'd like to know how to accomplish this. More generally, do I need to pre-partition a large drive if I wish to install several flavours of Linux and keep them out of each other's way? A:: To take your last question first: no, you do not normally need to pre-partition a drive before installation. Most distro installers include the facility to take care of partitioning, including resizing existing partitions. However, I think you are approaching this from the wrong angle. There is absolutely no need to use different home partitions. You could keep separate home directories for the two distros, but on the same partition, say greg32 and greg64. Use the same username, but set the home directory during installation. You may need to fiddle with permissions. This still suffers from one of the drawbacks of the separate partitions approach - you will have two copies of your source code and will need to keep them in sync, otherwise you could find yourself working with two different copies of zot.lisp. Plus all your other data, such as emails, would be split between the two home directories. You could deal with this by installing 32-bit and 64-bit versions of the same distro with a common home directory. That way all your settings would be the same whichever version you booted into. Now set up separate directories for your 32-bit and 64-bit compiler output, but use the same source directory. It may mean a little more work with your makefiles, but it would keep everything together and lead to far less confusion. 
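One way to set up those separate output directories is to key the path on the machine architecture, so each install writes its object files to its own tree. The layout and the use of uname here are my assumptions, not anything specific to your Lisp implementation:

```shell
#!/bin/sh
# Hypothetical per-architecture build layout: the same zot.lisp source
# compiles into a different directory on each install, so 32-bit and
# 64-bit object files never collide.
ARCH=$(uname -m)               # e.g. i686 on 32-bit, x86_64 on 64-bit
OUT="$HOME/build/$ARCH"
mkdir -p "$OUT"
echo "object files for this install go in $OUT"
# e.g. direct your compiler's output to "$OUT/zot.fasl"
```

Because uname -m differs between the two installs, the same makefile rule works unchanged on both.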
I have a friend who wants me to upload some video tutorials to the web, but I don't have any software to capture it. I am using a Trust VoIP pack I normally use for Skype. I need to easily install (hopefully using the Ubuntu package manager), capture, edit and save in a standard format. A:: There do not appear to be any convenient GUI programs for this, but a couple of command line programs work well: MEncoder and FFmpeg. As with all video encoding programs, the list of available options can be quite daunting, but if you ignore the esoteric parameters and stick to the basics it is quite easy. MEncoder is part of the MPlayer package, so if your camera works with MPlayer, you should be able to record from it. The first step is to identify your camera's device; if it already works with Skype, the easiest way is to look in the Video Device section of Skype's Options window. (It will probably be /dev/video0, unless you have more than one video device.) You can test the camera with MPlayer using --- mplayer tv:// ,,, This should open a window showing your camera's output. If the device is anything other than /dev/video0, you will need to specify it on the command line --- mplayer -tv device=/dev/video1 tv:// ,,, If that works, you can probably record with --- mencoder -tv device=/dev/video1 tv:// -ovc copy -o video.avi ,,, which will record the camera's output to the file video.avi - press Ctrl+C to stop recording. You can change the OVC (output video codec) option to another to record in a different format, but try it with the Copy codec first, as your camera may produce a suitable format without re-encoding. If you do need to re-encode, it may be better to use Copy, then re-encode later. Unless your machine is fast enough to keep up with the incoming data, encoding on the fly may result in dropped frames. 
You can re-encode to MPEG4 with something like --- mencoder video.avi -ovc lavc -lavcopts vcodec=mpeg4:vbitrate=800 -o newvideo.avi ,,, to encode to newvideo.avi using the libav codec to produce an MPEG4 file with a bitrate of 800. MEncoder worked with the built-in webcams of my laptop and Eee, but not with the USB webcam I have on my desktop. For that I switched to FFmpeg, which does a similar job. --- ffmpeg -an -f video4linux -s 320x240 -b 800k -r 15 -i /dev/v4l/video0 -vcodec mpeg4 myvideo.avi ,,, The option -an disables audio recording; -f forces the use of Video4Linux for the input; -s sets the video size to 320x240; -b sets the recording bitrate; -r sets the frame rate to fifteen frames per second; -i gives the input device; and -vcodec sets the output format to MPEG4 (you can also use Copy as with MEncoder). The final option is the name of the output file. Press Q to stop recording, or you can specify the duration of the recording with -t followed by either a number of seconds or a time in hh:mm:ss format, so -t 90 and -t 00:01:30 are two ways of specifying the same duration. Back to the list ****** What to look for in a Linux distribution Q:: Alright, so I can make Gnome or KDE basically look like whatever I want it to, and I have hardware that's supported in every distro I've looked at. Aside from the package manager, what should I really be looking for in a distribution? From my standpoint, it seems that many distributions are very redundant in their features, though logic tells me that if this were true, they would not have been created. What should I be looking for? When you are running the same desktop configuration, do different distros differ much in performance? I've been using Ubuntu since I started with Linux, while distro hopping with a second partition, but I don't have the knowledge to tell what works best. A:: To an extent, you are right. 
The heart of every distro is the Linux kernel, and that's the same for every distro, which is why hardware support is fairly consistent across distros. Some may use a more recent version than others, and some patch it to add a few features. The same goes for the included software: a distro may apply its own branding or theme to KDE, but it is still KDE. What the distros do add are administration tools, which include but are not limited to the package manager. So SUSE has the all-encompassing YaST, Mandriva has its Control Centre, which is a gateway to many smaller configuration tools it has created, while Fedora and Ubuntu have their own selection of configuration tools (and Gentoo has Vim). There are other differences, such as the way in which updates are released. Most distros only release security updates for the software bundled with a distro release, and if you want a later version you need to upgrade to a newer distro release. Ubuntu provides an option to upgrade an existing installation to the next release without re-installing, whereas a complete re-install is usually the preferred option with Fedora and SUSE. Most distros are community releases, and the community is often a distro's strongest asset. Look at the mailing lists and forums for any distro you are trying, to see the level of support and assistance available from the community - an often overlooked factor when choosing a distro. In the end, though, it's all about personal choice. If you are happy with your distro, stick with it. If you get frustrated by missing or outdated software or unhelpful forums, look elsewhere. If you get the itch to try something else, do so - it won't cost you anything to try. However, you will learn more about Linux in general, and your distro in particular, if you stick with it instead of installing a different one each time you hit a snag setting up your webcam or you don't like the desktop wallpaper. 
Back to the list ****** Remote DVD booting Q:: I have a Pentium 3 laptop with not much RAM, but it does have a DVD drive. I have a Pentium 4 PC with 1 GB RAM, no DVD drive and VirtualBox installed. Both laptop and PC are networked and I normally access the PC from the laptop via SSH. I want to run distros from the DVD in VirtualBox, but the PC only has a CD drive and I'd prefer not to copy the DVD ISO image on to hard disk. So I installed NBD (network block device) software on the laptop (server) and PC (client) and do this:
1 Boot laptop
2 Boot PC
3 Put DVD in laptop DVD reader
4 Start nbd-server on laptop
5 Start nbd-client on PC
6 Mount /dev/nbd0 to /mnt/dvdrom on PC
7 Start VirtualBox and start a guest like Puppy from the DVD by using the CD/DVD mount ISO image tab under details (the ISO images on the DVD are in the Distro directory)
8 Enjoy Puppy (or whatever)
Now, you might say, I could burn the Puppy ISO to CD and boot the CD from the host CD drive (which would save a lot of network traffic and may be faster). But if I have a distro that is on the DVD, like OpenSUSE 11.0, how do I get VirtualBox to boot the remote DVD? A:: As far as I can tell, what you want to do is not possible with VirtualBox. It does not allow you to enter a device node for the DVD, and editing the configuration file to use /dev/nbd0 instead of /dev/cdrom does not work either. Nor does telling it that /dev/nbd0 is an ISO image. In each case, VirtualBox does not recognise the device as a CD/DVD. However, the latter approach does work in VMware. Setting /dev/nbd0 as the CD/DVD device doesn't work, but setting it as an ISO image does. You could use VMware Workstation or VMware Server instead of VirtualBox to run distros from the DVD, or you could copy the DVD to an ISO image on the PC. I know you said you didn't want to do this, but it's a more efficient way of doing it than using NBD.
The transfer itself would also be faster and, although that advantage is offset by having to copy the whole DVD, it only has to be done once, and it can happen in the background while you are not trying to run or install the distro on the virtual machine. You can copy the ISO image directly from the DVD drive to the PC by running this command on the laptop: --- cat /dev/dvd | ssh -c blowfish ip-address "cat >~/dvd.iso" ,,, where ip-address is the address (or hostname) of the PC. The -c option switches SSH to the relatively insecure Blowfish cipher. Although weaker, and not recommended for use on public networks, it is faster and reduces the load on both computers when transferring over a private network. You can then use the ISO image in VirtualBox, and it will run a lot faster than an NBD mount would have. The third option, and surely the simplest, is to fit a DVD-ROM drive to the PC. Since you distinguish it from the laptop, this is presumably a desktop computer, in which case a standard DVD-ROM drive can be bought for around £10 from an online supplier or a local computer fair, or a DVD re-writer for around £15. Unless there are good technical reasons for not replacing your CD-ROM with a DVD drive, I would suggest this as the fastest and most hassle-free solution. Back to the list ****** Basic Linux file server with web accessibility Q:: I am new to Linux. I want to install a file server in the office with web accessibility so we can all call code off the server. We also want to access emails (currently emails are picked up on one machine using Outlook, with no web access). Can you recommend a basic Linux server that would suit our requirements? A:: Is this computer to be used exclusively as your server? If so, I would recommend one of the distros aimed at dedicated servers.
ClarkConnect (www.clarkconnect.com) is best known as an internet gateway (a means of connecting a network to the internet with suitable content filters and access controls), but it can also be used as an intranet server. Because ClarkConnect and similar distros (such as SME Server - www.smeserver.org) are put together for this particular purpose, they include all the software and admin tools you need. They are also potentially more secure than a general purpose distribution, because of their focus and the fact that they contain fewer programs. When installing ClarkConnect, you have a choice between a standalone server (one that is only accessible from your LAN and connected to the internet by your existing router) and a full internet gateway. In the latter mode, it operates as a router and firewall for your network. If you already have a suitable ADSL modem/router, the standalone mode is easier to set up. The Community Edition is free but limited to ten users and without technical support, although there is community support on the forums. The Enterprise Edition has a price tag, technical support, no user limit and some extra features. If you wish to use this in a business, the Enterprise Edition would be advisable. Once initial installation is complete, you can detach the keyboard and monitor from the server and put it somewhere out of the way. All administration is done via a web interface from any computer on your network (provided the user knows the password). This can include setting it up as a local mail server. This would collect emails from your ISP email account(s) using the maildrop module, and all computers on your network would be set up to read their mail from here. The server can also filter spam and viruses from incoming mails, saving the need to set up and maintain this protection on every computer in the office.
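Whichever server distro you choose, it is worth checking what the finished machine actually exposes to the network - "fewer programs" should translate into fewer listening services. A minimal check, assuming the iproute2 ss tool is installed (older systems use netstat -tln for the same job):

```shell
# List listening TCP sockets: -t TCP only, -l listening, -n numeric ports.
# Each line is a service reachable over the network - fewer lines, less to secure.
ss -tln
```

Anything listed here that you did not deliberately enable is worth investigating before you put the server somewhere out of the way.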
Back to the list ****** Knoppix passwords Q:: I have installed Knoppix on my hard drive and wish to configure dialup with Wvdial on my external modem. To do this I need root permissions but have no password. I have tried to gain permission with a Live CD without success. So how do I gain root/administration permission on Knoppix? Using 'Root shell' or 'root password' in the menu brings the message 'su error'. A:: Knoppix uses kdesu to ask for the root password when trying to run programs as root from its menus, but kdesu won't work if there is no root password, which is how Knoppix is set up. Running a system without a root password is a bad idea, and not only because it stops kdesu working, so open a terminal as your normal user and type --- su - passwd ,,, The first command will take you straight into a root session, because there is no root password - this is one of the reasons such a setup is bad. The passwd command will set the root password - type in your new password twice and all will be well. You'll now have a more secure box, and the programs that KDE needs to run as root will work. Back to the list ****** Understanding installation Q:: I bought your mag for the first time, intrigued at the prospect of a free OS. Unfortunately, despite trying everything I can think of, I am unable to install VMware. I have the Linux version on a trial licence, which runs out in the next few days. On OpenSUSE 10 it appeared to install, but there were no options, such as which folder to use or the option to have a desktop icon. Then I couldn't find the program anywhere, let alone the icons on the desktop or indeed anywhere else. I now have OpenSUSE 11, and am experiencing similar problems. I also find that even a simple program that doesn't normally require an installation, such as TrueCrypt, will not run. The program asks me which program to use to open it! How unintelligent.
I have tried various things, but surely any modern OS should be able to recognise a program that is specifically set up for it? I have in desperation tried two other versions of Linux. All seem at least three generations behind Windows. Maybe I am assuming too much. It is free, after all. I also have a MacBook. I am not especially fond of it, but at least it recognises programs and appears to install them automatically. What am I missing here? Is there some generic process I have to go through to install and operate a program? I am very disappointed because I hate Vista. Everything seems bloated and it appears a waste of money. I am sincerely hoping that Linux can fill this gap. Please try and help. I will be very grateful indeed. BTW, if you suggest I have to use the CLI, please forget it. I left that system in 1993. A:: It's odd that you are interested in free software, but your first stumbling block is with a commercial program. The best way to install software on any distro is through that distro's software repositories. While it is perfectly possible to download and install VMware Workstation from vmware.com, this is the Windows way of doing things. Linux distros have repositories of software packages that are tailored to fit in with that distro, including adding menu entries to launch the programs. They also have methods of letting you know when something has been updated, so you don't need to check back with the website to look for updates. However, the two packages you mention are atypical and do not appear in the OpenSUSE repositories. Download the RPM (not the tar.gz archive) from www.vmware.com. If you use the Konqueror web browser, click on the browser icon in the task bar and it will give you the choice of installing the file directly. Or you can download the file and then "run" it to install. If you use the tar.gz file, you will need to use the command line to install it.
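Should you ever want it, the RPM can also be installed from a terminal with a single command. This is a sketch only - the filename below is illustrative, so substitute the file you actually downloaded:

```shell
# Install the downloaded package as root: -U installs or upgrades,
# -v is verbose, -h shows progress. The filename here is made up.
pkg="VMware-Workstation.i386.rpm"
if [ -f "$pkg" ]; then
    su -c "rpm -Uvh $pkg"
else
    echo "download $pkg first"
fi
```

The check for the file first simply avoids a confusing rpm error if you run it from the wrong directory.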
While TrueCrypt may run as a single file on Windows, on Linux it requires installation. Unfortunately, they chose to supply it in a weird format. This is an installer script inside a compressed archive, even though the archive is no smaller than the unpacked script, so you will have to use the command line for this. Open a terminal and type --- su tar xf truecrypt-6.0a-opensuse-x86.tar.gz sh truecrypt-6.0a-setup-opensuse-x86 ,,, These three commands give you root permissions, unpack the archive and run the installer script. Replace truecrypt-6.0a-opensuse-x86.tar.gz with the name of the file you downloaded, if different. This installer in turn contains an RPM file (RPM is the package format used by OpenSUSE) that could, if supplied separately, have been installed with no more than a mouse click, but the installer they use requires you to accept the licence before installation. While most distros have substantially reduced the need for the command line, many Linux users still prefer it because it is faster than clicking through a GUI. There are very few occasions when you absolutely have to use it, provided you stick with the distro's way of doing things, but some software still requires the terminal, as is also the case with Windows. The terminal is a different way of doing things, but it is a tool to be used when appropriate, not avoided at all costs. Back to the list ****** Is my USB flash drive destroyed? Q:: I was copying files to my thumb drive during my lunch when I realised I was late for work. Without regard for my thumb drive, I unplugged it mid-transfer and ran out. The thumb drive suffered the consequences, and now is not being detected by either my Linux OS or Windows XP. I fear that the drive is probably done for, since an OS has to detect it to reformat it, but I thought I would tap the community brain just in case. Any help would be appreciated.
A:: If you don't see anything in 'dmesg' when you plug the device in, we would recommend returning it to the place of purchase and obtaining a new one. Possibly removing the drive while it was active caused a short, or otherwise obliterated the electronics on the device. While USB is 'Plug and Play', it does require filesystems to be unmounted cleanly and for any processes accessing files on the device to be shut down. Unfortunately, even Linux isn't smart enough to sync all data in the buffers to disk the instant you start to yank it out of the back of the PC. Back to the list ****** Reeepositories Q:: I have recently bought an Eee PC 901 and tried to add programs as per a tutorial I read on the internet. All I got was a load of error messages. I assume that the line --- deb http://xn4.xandros.com/xs2.0/upkg-srv2 etch main contrib non-free ,,, is on a single line? I tried looking at the web address http://xn4.xandros.com/xs2.0/upkg-srv2 but could not find the directory etch, only dists and pool. I found AbiWord at http://xn4.xandros.com/xs2.0/upkg-srv2/pool/main/a. Where am I going wrong? Has the address changed or do I need to use a different address for the 901? A:: The information you have entered looks correct. The address given in sources.list is the parent directory that contains subdirectories for various releases of the distro. The dists directory then contains specific information for those releases; in this case, the etch and testing releases. The second item on the sources line, etch in this case, tells the package manager which of the directories in dists to use, and the remaining items say which subdirectories to use, from dists/etch. Some of the repositories for the 900 series are different, but in most cases you can use the same repositories as the 700 series, as the software runs on both sets of computers. It is only a few system packages that are specific to the different Eee variations.
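You can see how a sources line maps on to those directories with a little shell. This is a sketch only - the i386 architecture is an assumption (correct for the Eee) - building the index URLs the package manager actually fetches:

```shell
# Anatomy of a sources.list line: "deb <base-url> <release> <component...>"
line="deb http://xn4.xandros.com/xs2.0/upkg-srv2 etch main contrib non-free"
set -- $line                 # split into words: $2 is the URL, $3 the release
url=$2
release=$3
shift 3                      # what remains is the list of components
# The package manager fetches a package index from each component directory
for component in "$@"; do
    echo "$url/dists/$release/$component/binary-i386/Packages.gz"
done
```

Note that etch only appears under dists, which is why you could not find it at the top level of the repository; the pool directory you did find is where the actual .deb files live, and the indices under dists point into it.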
It is difficult to be more specific without knowing what the error messages were. Even if they appeared meaningless to you, they contain specific information that helps pinpoint the source of the problem. Were you using the Eee to browse the web addresses, or another computer? If the latter, I suspect a network problem - I have found the wireless handling in the Eee's default Xandros installation to be a little unreliable. I would recommend trying again and capturing the error messages. Get back to us with the details, or post to the forums at www.linuxformat.co.uk, for a more detailed response. Back to the list ****** Third time unlucky Q:: I recently took the leap of faith and decided to stop using Live CDs and install Hardy Heron on a standalone system. Hardy is brilliant but as a freshie to Linux I am at a loss as to how to share my internet connection between my one desktop running Win XP Service Pack 3 and the other desktop running Hardy 8.04. I have a Sky Netgear wireless router that comes as standard on Sky Broadband, the Netgear DG834GT-SKUKS. The Linux desktop has a Wireless-G PCI Wireless Adaptor WG311 v3. I have done a search on the internet to no avail, as some of the help out there is not easy for freshies to understand. Please help me before I'm sucked back to the dark side by uninstalling Ubuntu and installing XP on the second machine, which I am dreading. A:: There are three versions of the WG311 card, using completely different chipsets. The first two are supported by the MadWifi and ACX drivers respectively. The v3 card uses the Marvell chipset, for which there is no useful driver right now. This means you have to use NdisWrapper, which in turn uses the Windows drivers supplied with the card. Install NdisWrapper through the Synaptic package manager, then copy the driver files from the CD to your home directory. The three files you need are WG311v3.INF, WG311v3.sys and WG311v3XP.sys.
If these are not available on the CD in anything but a Windows installer file (or you no longer have the CD), you can download them from www.jimbo7.com/wiki/files/good_WG311v3-driver.tgz. If that file is no longer available, Google for "WG311v3 linux wiki" - this is a set of instructions for Debian, but the driver files are good. Once you have the files, open a terminal and run --- sudo ndiswrapper -i ~/WG311v3.INF ,,, to install and register the driver with NdisWrapper, then test that the module loads and detects your wireless card with --- sudo modprobe ndiswrapper sudo ifconfig -a ,,, The first command should produce no output, and the second should show your wireless interface, wlan0. Now edit /etc/modules as root with --- sudo gedit /etc/modules ,,, and add ndiswrapper on a separate line at the end of the file. This loads the driver each time you boot, so your wireless card is available. Now you can set up the wireless connection using the standard Ubuntu program, System > Administration > Network. Back to the list ****** Playing host over Wi-Fi Q:: My Hardy Heron desktop computer has two printers connected: an HP LaserJet that I use for correspondence and various business reports, and an Epson photo printer that I use for my holiday snaps. Both work well, but I would like to be able to print to them from my laptop, which dual boots OpenSUSE and Windows XP, and from my wife's laptop, which runs XP only. Can I share these printers to Linux and Windows over my wireless network? A:: Linux uses CUPS (Common Unix Printing System) for all its printing needs. By default most distros set it to work only with the local computer, but it's a simple step to make it available over the network. Windows usually uses its SMB (Server Message Block) system to share files and printers, but it can also connect to printers using IPP, the Internet Printing Protocol, which CUPS uses.
To make your desktop's printers available to the local network, either edit /etc/cups/cupsd.conf and change the line starting BrowseAllow to BrowseAllow @LOCAL, or use the browser-based configuration. Load http://localhost:631 into your browser, click on the Manage Server button and tick the box to share your published printers. Then go to the Printers section and check that each of your printers says published under Printer State. This way you can decide which printers are to be available. Turning to your laptop, running Linux, there are two ways to add the remote printers. The quickest is to edit /etc/cups/client.conf (create the file if it does not exist) and add --- ServerName your.desktop.address ,,, where your.desktop.address can be an IP address or hostname. Now all your published printers are immediately available to all programs using CUPS for printing. The only real disadvantage of this method is that all your printers must be connected to the same computer. If you want to use printers attached to different computers, you need to add them individually, either by using the CUPS web interface on the laptop or your distro's configuration tools. The standard way of printing to a Linux printer from Windows has been to use Samba, and this is essential if you still use Windows 9x. More recent versions of Windows can work with IPP. Fire up the Windows Add Printer Wizard from wherever your version of Windows hides it in the control panel and tell it you want to use a network printer. When it asks you if you want to browse for a printer, take the option to enter a URL instead, in the form --- http://hostname:631/printers/printername ,,, where hostname is the name of your desktop computer and printername is the name given to it in CUPS. You will then be asked for a driver; if there is none for your printer model, select a PostScript driver and CUPS will take care of the translation.
Finally, right-click on the new printer's icon, go to Properties and print a test page to make sure all is well. Back to the list ****** Local network resolution Q:: I have a desktop running Fedora 9 and a laptop running Ubuntu 8.04, as well as my son's and wife's machines running XP. These are all connected via a D-Link router. I connect my desktop and laptop via SSH but use the local IP number to do this. The issue for me is that all machines use DHCP to get their IP addresses, so they change regularly. What I'd like to do is use computer names for each machine and then resolve these to the actual IP addresses. To do this, I believe I need to set up a DNS server locally. I really don't want to use static IP addresses. A:: Most routers also act as DNS servers, so yours is probably already doing this. Some also have the facility to specify the IP address and hostname given to particular computers. The normal approach with DHCP servers is to pick an unused address from the pool of available addresses each time a request is received, but it's sometimes possible to specify which address a particular computer should receive. The computer is identified by the MAC (Media Access Control) address of the computer's network card. If your router allows this, you can specify the MAC address of each computer along with the preferred IP address and hostname. When you've done this, access by hostname should just work. The MAC address is six pairs of hexadecimal digits (such as 01:23:45:67:89:AB) and can be found in the network properties, or by running ifconfig in a terminal. On Windows, run ipconfig in a command prompt. If your router can't handle this, you can use dnsmasq (www.thekelleys.org.uk/dnsmasq): a useful, lightweight DNS and DHCP server that would suit your needs (I use it on my home network). This will take care of everything for you, but you'll need to set up the machine running dnsmasq to use a static IP address.
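If you are unsure whether what you have copied out of ifconfig really is the MAC address, the six-pairs-of-hex format is easy to check - a small sketch using the made-up example address from above:

```shell
# A MAC address is six colon-separated pairs of hexadecimal digits
mac="01:23:45:67:89:AB"
if echo "$mac" | grep -Eiq '^([0-9A-F]{2}:){5}[0-9A-F]{2}$'; then
    echo "$mac is a well-formed MAC address"
fi
```

You will need one such address per machine, as each gets its own entry in the DHCP server's configuration.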
Disable the DHCP server in your router and put the following in /etc/dnsmasq.d/local: --- log-facility=/var/log/dnsmasq.log domain=example.com dhcp-range=192.168.1.128,192.168.1.192 dhcp-option=option:router,192.168.1.1 dhcp-host=00:1A:92:81:CB:FE,192.168.1.3,hostname ,,, The first line sets up logging, which you may need if things don't work as expected first time round. The next line contains the domain of your local network and the third line contains the range of addresses to be allocated by DHCP, followed by a line giving the router address that all hosts will need to know to be able to contact the internet. The final line is repeated once for each computer, and contains the MAC address of that computer, the IP address to be allocated to it and the hostname to be given it. The IP address is outside of the dhcp-range value given previously to prevent it being given to another computer. Make sure /etc/resolv.conf on this computer contains the address of at least one DNS server. If your ISP changes DNS addresses from time to time, it may be best to put the router's address in here and let the router and ISP sort the correct addresses out over DHCP. If you do not want to use your ISP's DNS servers, put the ones you want in resolv.conf. Start dnsmasq, or restart it if it was already running when you edited the configuration file. Then reconnect each of your other computers and they should be given the hostnames and IP addresses you want. More importantly, you should be able to contact each of them using the hostnames, so you don't have to remember the numbers anymore. Back to the list ****** That rsyncing feeling Q:: I'm highly interested in running backups - ideally decentralised - regularly. To do that, I've found the nice utility called rsync. 
Here's the command I run to get a local copy of my whole home directory in another folder: --- rsync -avz --delete-after /home/white/ /home/white/Backup/ ,,, However, I would like to filter out specific files (typically source files and not data files) so that I come up with a subset of the filesystem. In that way, I would be able to copy the most sensitive files on to a remote hard drive and a local USB pen drive. A:: There are a couple of arguments to rsync that will do what you want. First though, is /home/white/Backup on a separate filesystem to /home/white? If so, you should add the -x or --one-file-system option, otherwise you'll find yourself trying to back up the backup directory on to itself, which will quickly fill it. This option is also useful when backing up the root partition, to stop it trying to back up virtual filesystems such as /dev, /sys and /proc. To exclude particular files or directories, you can use the --exclude option: --- rsync -avxz --exclude '*.c' --exclude '*.h' --exclude .thumbnails ... ,,, Note that the first two exclude patterns are quoted to stop the shell interpreting the * characters, while the third excludes an entire directory. Your command line is going to get very long if you have a lot of files to exclude, but you can put the patterns (no need for quotes this time) in a file, one per line, and then call: --- rsync -avxz --exclude-from ~/myexcludes ... ,,, The exclude options are fine for filtering out simple patterns or single directories, but what if you want more sophisticated filtering? The --filter option provides this, and it's comprehensive enough to have its own section in the man page, which you should read carefully before trusting the security of your data to the rules you create. However, you can simplify the use of filters by putting the exclusion and inclusion rules in a file called .rsync-filter and adding -F to rsync's options.
This argument tells rsync to look for .rsync-filter files in every directory it visits, applying the rules it finds to that directory and its children. The format of .rsync-filter would be --- exclude *.c exclude *.h exclude .thumbnails ,,, You can use include rules as well as exclude. Each file is tested against each of the rules until one matches, when it is included or excluded according to the rule (subsequent rules are not checked). Files that match no rules are included. This can be used to include some files that would match an exclude rule, by placing an include rule before it. You can also use this to back up only specified directories with something like this: --- include /mail include /documents include /photos exclude * ,,, The leading / makes the directory match against the beginning of the path in the directory you are backing up, not the root filesystem. If these rules are in /home/white/.rsync-filter, the first one matches /home/white/mail. This should be enough to get you started, but you must read the man page to be sure you understand what you are doing. Remember that the second rule of backups is that you must verify them after making them (the first rule is to do them in the first place!). Back to the list ****** SUSE and MadWifi Q:: Can you tell me which version of the MadWifi drivers works best with SUSE Linux? Also, is there a free application that will report a numerical value for the frequency at which I'm exchanging bits with my router? By frequency, I mean the number of cycles per second (in the 2.4GHz band), not data transfer rate in kilobytes per second. A:: As a general point, it's usually best to use software from your distro's repositories whenever possible. This software has been tested to work with that distro, both by the developers before release and the users after. Any problems that do show up can be reported and dealt with through the distro's bug-tracking system, usually very promptly.
OpenSUSE 11.0 has a prerelease version of MadWifi 0.9.4 in its repositories, so you should try this first. If this gives you problems, you could try compiling 0.9.4 from source (full details are on the MadWifi website at http://madwifi.org). The main reason for doing this has nothing to do with the distro you're using; it's only necessary if there's been a change relating to the hardware that you use. There's not much development on the MadWifi driver now, because most of the team's effort is directed at the new Ath5k driver that's included with recent kernels. As this improves, gaining support for more cards and better performance, the need for a separate driver package will reduce and eventually disappear. This is the way things generally work with Linux; once an open source driver proves itself, it's usually incorporated into the kernel. Many computers work perfectly now with no external drivers at all, and the proportion will increase as the kernel is able to handle more hardware directly. You already have software that will report the frequency used by your card and router, and any others in range. The wireless-tools package, which you should already have installed on your machine, does this and a lot more. Any of these commands will give the information you want, in a different context in each case. --- iwconfig ath0 iwlist ath0 scan iwlist ath0 frequency ,,, The first gives details about the connection between your computer and the access point; the second gives a list of all visible wireless access points; while the last command shows the frequencies available on your card, and the one in use now. These are administrator commands, so you need to run su in a terminal to become root before running any of them. Back to the list ****** A faster network please Q:: I'm thinking about upgrading my home network to Gigabit Ethernet. All my computers have gigabit network cards, but my router has a four-port 10/100 Ethernet switch built in.
If I connect my router to a Gigabit switch, and connect all my computers to the Gigabit switch, will I achieve Gigabit speeds, or do I need a Gigabit router? A:: The only traffic that would need to go through your router if you did this would be the traffic that goes to or from the internet. The only reason all traffic goes through your router now is that it also contains the network switch, but the data doesn't actually go through the router part of the device unless it needs to reach the Big Bad Web. Any two computers directly connected to the switch would communicate at the maximum possible speed, which would be Gigabit if both had Gigabit network cards. Your router is a 100Mbit device and would be connected to the Gigabit switch, but this would not affect the speeds between other devices. Unlike a hub, which operates at the speed of the slowest device connected to it, data flowing through a switch only goes between the two devices involved in the transfer and is unaffected by anything else on the switch. The same goes if you had a computer with a 10/100 network card connected to the switch; only transfers involving this device would drop to 100Mbit speeds. You can find Gigabit broadband routers, but they're more expensive and the only difference is that the built-in switch is Gigabit. If your router needs to be replaced, one of these might be worthwhile, but otherwise add a Gigabit switch and you'll get the same performance and more network ports. Back to the list ****** Aspiring Linux user Q:: I've just got an Acer Aspire One. Should it have any security on it, such as AVG? Comet, where I bought it, said I have to have McAfee or one of the others. I do hope you can help me. A:: You don't need antivirus, anti-spyware and anti-trojan software like you do on Windows, but there are some steps you should take to improve security.
The most important is to make sure that your wireless connection is secure, so if you have your own wireless router at home, make sure you have enabled WPA encryption. The alternative is WEP, an older and easily cracked encryption protocol, but still better than running an open connection. Because Linux software is open source, there's little opportunity to hide malware within the code, since someone will always find it. Stick to software installed through the Acer's own package manager, which will have been verified by Linpus Linux's own developers. Viruses are virtually unheard of on Linux, so you don't need to worry on that front. The salesperson you spoke to obviously has no idea that the Aspire One is not running Windows, or they wouldn't have suggested McAfee, which isn't much use for systems running Linux. There is an antivirus program for Linux, ClamAV (www.clamav.net), but it's most useful on computers that share files with Windows systems, because it detects their viruses too. There is a good article on The Register about tweaking your Aspire One, which you may find helpful: www.reghardware.co.uk/2008/09/05/ten_aspire_one_tips. Back to the list ****** Filesystem fears Q:: I've read that it's better to use an ext2 or FAT filesystem on a USB key, because they make them last longer than if you used a journalled system such as ext3 or ReiserFS (or even NTFS). This got me thinking: if you have a USB key that you use to exchange files between work and home, it would presumably be read and written to twice a day. On the basis of the working year being about 240 days, it would last 104 years on the basis of a life expectancy of 100,000 reads and writes. I don't quite understand the ins and outs of journalled file systems, but I believe they automatically check the disk and then verify the file when it's written, which would make a read one 'use' but a write three 'uses', so they would reduce the life of this over-worked USB key to a measly 52 years.
As these things can be bought for a few pounds these days, is it a false economy to be so careful with your USB key? A:: There's more to a filesystem than just the files - there's also metadata, such as file permissions and time stamps. Then there are directory indices to consider. When writing to a file, all of these have to be updated. So if you copy a directory containing 10 files to a disk, that means eleven directory entries need to be updated. With a FAT filesystem, the file allocation table that gives it its name is stored at a single location, so every action on the disk involves reading or writing this location, and that's what causes the wear. If a device is mounted with the sync option, there can be many writes to this location for each file that is updated. One kernel 'feature' once caused this to be written to for every 4KB of data written, which resulted in my (expensive at the time) 1GB device failing in a very short time when I was writing 700MB Knoppix images to it. Add to this the journal, which contains records of every transaction, and you can see that parts of the filesystem are worked very hard. Yes, the devices are cheap enough now, but their contents may not be. To use your example of transporting data between work and home, what happens if you take some important files home to work on them for an urgent deadline the next morning, spend hours working on them and find the USB stick doesn't work when you get into the office the next day? The most likely point of failure is the file allocation table, so even files you weren't working on will no longer be available without the use of some recovery software and a fair bit of time. It's also possible, but by no means certain, that cheaper devices may fail sooner because of the likely lower standard of quality assurance. The key point is that these devices are cheap and cheerful and should not be assumed to last forever. Note that these comments are aimed at USB flash memory devices.
The flash memory SSDs (solid state disks) used by the likes of the Asus Eee PCs are completely different, incorporating wear levelling so that specific parts of the memory are not disproportionately hammered. Back to the list ****** Old distro, old problem Q:: I have a PC that I've put together using bits and pieces of old machines that I've scrounged from friends and relatives. It has a 13.5GB hard drive, Pentium III 600 processor and 384MB RAM. I'm trying to run Ubuntu 7.04 and, having managed to make the Live CD and made the necessary change to my BIOS, my machine will actually start up and get me to the Ubuntu start-up list with the choices of booting methods. However, regardless of whichever method I choose, I eventually get a screen message that says:
---
BusyBox v1.1.3 (Debian 1:1.1.3-3ubuntu3)
Built-in shell (ash)
Enter "help" for a list of built-in commands.
/bin/sh: can't access tty: job control turned off
(initramfs)
,,,
Can you please tell me what's wrong and, if possible, what I can do to fix the problem to get Ubuntu started properly? A:: This is a known problem with a couple of older releases. There were various workarounds and fixes floating around at the time, but they're no longer necessary. The problem is caused by an incompatibility between your hardware and this particular version, but there have been three further releases of Ubuntu since then, all of which should avoid the problem. I suggest you try again with a newer version of Ubuntu and you'll find this problem is no longer there. Back to the list ****** Configuring broadband over ethernet Q:: I have loaded Mandrake 9.2 on to a second hard drive (with Windows ME running on the other) and have bought an Ethernet card to run my NTL broadband connection. Broadband runs fine on Windows now, but how do I get it running on Mandrake? Someone suggested I run 'ifconfig' but I don't know what all this DHCP stuff is.
I can get 'configure network cards' up in Mandrake's configure but what do I do from here? If you can help I would be really grateful. A:: Cable modems generally work via DHCP, so you can simply configure the Ethernet interface in Mandrake to obtain the address automatically from the network and let it work everything out. You can also test it from the command line using the 'dhcpcd' utility, which will request a DHCP lease from the NTL server and allow you to access the internet. Using 'ifconfig' is only really helpful if you have a static IP assignment, for example, on an internal network, so DHCP is the way to go with cable and DSL services. Back to the list ****** Email authentication Q:: I am trying to use email with Ubuntu 8.04 without success. My service provider is tiscali.co.uk and I have no problem using Outlook Express with Windows XP. I've tried using Evolution, Thunderbird and Opera email without success. I see an error message: 'The server responded:[AUTH] invalid user or password'. I've tried connecting via a USB SpeedTouch 330 modem and a D-Link DSL-320T modem. I understand there may be a problem with Ubuntu 8.04 when trying to connect via a '.co.uk' provider. Can you help? A:: There's no reason why Ubuntu would not connect to a UK domain, or any other. The first step of connecting to any domain is to resolve that domain name to an IP address, which is clearly happening, or you would get a different error. Your mail program is definitely connecting to your ISP's mail server, because the error message you quote was received from the mail server. The problem is simply one of settings. When you connect to a mail server, you have to authenticate yourself with your username and password and this is being rejected by the server. The fact that three different programs generate the same error leads to the conclusion that the details you're giving are incorrect. 
For Tiscali, your username is your full email address (not just the part before the @ as with some providers). The servers to use are pop.tiscali.co.uk for incoming and smtp.tiscali.co.uk for outgoing mail. You can check all of these details by looking at the account settings you currently use in Outlook. The one thing you cannot read from Outlook is your password; you must type this exactly as set up with your ISP, remembering that it's case-sensitive. Tiscali has some useful information on how to set up Thunderbird to work with its service at http://tinyurl.com/tiscalimail. This uses the Windows version of Thunderbird, but the steps are exactly the same, except that Account Settings in the first step is under the Edit menu on the latest Linux version of Thunderbird. Back to the list ****** ODF oops Q:: I have a huge number of OOo files with meaningless filenames. I need to sort out those that have been created or modified in the last month, but all the files have the same timestamps. I hoped that Konqueror's Info List View would help, as it does for Exif info in JPEG files, but it doesn't give any columns apart from file name, even though the metadata is present on the mouseover tooltip. A:: If these are in Open Document Format, the process is surprisingly easy. ODF files are Zip archives containing several files that comprise the document and its metadata. Even if you rename the ODF file, the timestamps of the files within it remain unchanged. Because of this, it's possible to extract a file from each ODF archive and set the archive's timestamp to match that file. A short shell loop will update all the files in a given directory:
---
for f in *.ods *.odt
do
  unzip -o "$f" content.xml && touch -r content.xml "$f" && rm -f content.xml
done
,,,
This loops through each ODS and ODT file, extracting the content.xml file. If that is successful, it uses that as the reference for touch to set the modification date of the original file, then removes the content.xml file.
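If you would rather inspect the dates before changing anything, the same Zip trick can be used read-only. This is a sketch that goes beyond the original answer: it assumes the documents carry standard ODF metadata, where the dc:date element in meta.xml normally holds the last-modified time.

```shell
#!/bin/sh
# Sketch (not from the original answer): print the modification date stored
# inside each ODF document's meta.xml, without touching any timestamps.
# Assumes standard ODF metadata, where dc:date holds the last-modified time.
for f in *.ods *.odt
do
    [ -f "$f" ] || continue    # skip patterns that matched no files
    printf '%s: ' "$f"
    unzip -p "$f" meta.xml | sed -n 's/.*<dc:date>\([^<]*\)<\/dc:date>.*/\1/p'
done
```

Run it in the directory holding the documents; it only reads from the archives, so it is safe to try before running the touch loop above.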
Back to the list ****** Disappearing display in OpenSUSE Q:: First off, I've only been a Linux user for approximately two years, but I've been very impressed with the various distros available based on the users' needs. I've successfully installed Mandriva One 2008, PCLinuxOS 2007 and Ubuntu 7.10 on a Dell desktop computer. However, I've had only limited or no success with Ubuntu Hardy Heron and OpenSUSE 11.0 on either the Dell or my main machine, an HP MCE desktop. I've been trying to load OpenSUSE 11.0. After initial boot, I'm given the choice to select OpenSUSE and do so. The image begins to load normally, only to switch to a black screen with a small information window stating 'OUT OF RANGE 46.4kHz/44Hz'. The machine then freezes and I'm forced to shut it down via the power-on button. A:: This is a warning message from your monitor. It means the computer has sent a signal that is outside of the monitor's frequency range, so the monitor has disabled the display. This is preferable to the system used in days gone by, when monitors used to fry their electronics upon receiving a frequency signal that was too high. The OpenSUSE installer is still running at this point, you just can't see it. This is usually caused by the installer loader misidentifying your monitor, but the solution is simple. When you see the first menu screen (the one where you chose OpenSUSE), press F3, then choose the lowest video mode that works. If you get the same problem with all video modes, try the text-based install. It is the same installer, but with a more basic interface that you navigate using the Arrow, Tab, Space and Enter keys. The video resolution chosen here is used only for the installer - the graphical system that will be installed is more intelligent and will probably detect and configure your monitor correctly. Even if it doesn't, there's an option to do this by giving it details of your monitor. 
Usually the model name and number is sufficient, but at the worst, you have to pick a safe resolution like 800x600 @ 60Hz. Once the system is installed and running, you can try different display settings in the hardware section of Yast. Back to the list ****** Recovering lost photos Q:: I think my camera might have crashed and rebooted when I was trying to delete a photo. I can view the photos on the screen on the camera, but when I try to download I get I/O errors for some of them. I can copy an image off the card using: --- dd if=/dev/sdc of=this-is-annoying.img ,,, When I use this image to recreate the files on a partition on a hard drive I get the same I/O errors, so I figure that the files are still there and can be read but something is preventing them from being recognised properly when I try to open them. A:: You did the right thing in creating an image of the card rather than trying to recover from it directly. When a filesystem is damaged like this, the worst thing you can do is write to it in any way. With some filesystems, even reading a file updates its metadata, causing a write. What about a solution? TestDisk is a useful suite that includes a program called PhotoRec, which can recover all sorts of lost files from many types of filesystems. TestDisk is available from www.cgsecurity.org/wiki/TestDisk, but check your distro's package manager first. When it's installed, run photorec from a root terminal. If run with no arguments, it will search for any partitions containing filesystems it recognises and ask you to select one to scan. You've made a copy of the disk's data with dd, so you can use this instead, although it's wise to keep a spare, untouched copy of this file. Your recovery attempts could affect the copy you work with and your memory card may be in too fragile a state to generate another.
Start photorec with --- photorec this-is-annoying.img ,,, When asked for the partition type, select Intel/PC if you copied the whole disk (sdc) with dd or None if you copied only the filesystem (sdc1). This is the type of partition table, not the contents of the partitions on the disk, so anything usable on a PC is likely to be Intel/PC. The main exception to this is something that has no partitions, as opposed to a single partition filling the whole device, such as a floppy disk. You've copied the whole disk, so you'll have two options on the next screen: one for the partition (assuming it has a single partition) and one for the whole disk. Try the partition first; if this doesn't recover all your files, run PhotoRec over the whole disk. PhotoRec can generate a lot of files with meaningless names, so save its output in a separate directory when asked. PhotoRec can take a while to scan the image file and even longer when run directly on a memory card, so leave it alone for a while. Then you'll find your recovery directory full of strangely named files. The file allocation table was messed up, so the names of the files are gone, but this isn't a big deal with digital camera files, because their names aren't that useful to start with and you'll still have the Exif data. You'll also find old files here, because deleting a file removes it from the index but leaves its contents on the disk and possibly quite a few duplicates. Back to the list ****** Too many passwords Q:: I've just completed an install of Ubuntu 8.10, which is ace apart from the nagging prompt asking me to 'Enter password for default keyring to unlock' every time I log in. The prompt says: --- The application 'NetworkManagerApplet'(/usr/bin/nm-applet) wants to access the default keyring, but it is locked. ,,, and I'm unable to connect to my wireless router until I enter my root password. It's not a major hassle for me, but how can I save my family this inconvenience? 
A:: Ubuntu uses NetworkManager to handle all wired and wireless connections. This remembers the networks that the computer has connected to before and tries to automatically reconnect when they are in range. Because these networks are generally encrypted, it needs to store the key or passphrase for each of these 'known' networks. It does that by using the Gnome keyring (KDE alternatives use KDE Wallet) and this is password protected. You should not be using the root password to access this. In fact, you shouldn't even be using your login password, otherwise it provides no extra protection beyond your standard login. If that's enough security for you, Ubuntu has Gnome Keyring set up to automatically open a keyring called 'login' when you log in. This can be used to store the passwords of other keyrings, meaning they can all be opened once you are logged in. There should be a file called login.keyring in .gnome2/keyrings. The next time you are asked for the default keyring password, check 'Automatically Unlock This On Login' and you shouldn't be asked for the password again. You'll need to repeat this once for each user, until each of them uses the login keyring to open any others. This won't work if you opted for the autologin feature during installation, because then you're not giving a login password, which is used to unlock the special keyring. However, if more than one person uses the computer, you should avoid auto-login anyway; each member of your family should have their own account to keep their settings and data safe. You can turn off auto-login in the Security tab of System > Administration > Login Window. Back to the list ****** Sudo on CentOS Q:: Why does CentOS say that my account 'is not on the sudoers list'? I've tried looking in the account settings, but to no avail. A:: CentOS doesn't use sudo by default. Unlike Ubuntu, where the first user set up in the installer has rights to run anything with sudo, CentOS gives no such rights to anybody. 
By default, the only way to run programs with root privileges is to log in as root, by running su in a terminal. If you want to enable sudo for you or others, you'll need to edit the sudoers list, using the command visudo. This uses the editor defined in $EDITOR or, if that's not set, Vi. This method checks the syntax before committing it to the real file, which avoids you locking yourself out with a typing error. Run it with
---
su -
visudo
,,,
or
---
EDITOR="emacs" visudo
,,,
and add this line to the end of the file
---
youruser ALL=(ALL) ALL
,,,
to enable a user to run any commands. You can also specify a list of commands like this:
---
otheruser ALL= /sbin/mount, /sbin/umount
,,,
Permission can be granted to all members of a group, and you can restrict the arguments given to commands as well, as in this, disabled, example from the default CentOS sudoers file
---
%users ALL=/sbin/mount /cdrom,/sbin/umount /cdrom
,,,
which lets any user mount or unmount the CD. You can remove password protection like so
---
%users ALL=NOPASSWD: /sbin/mount /cdrom,/sbin/umount /cdrom
,,,
but be careful what you allow with this. Sudo is generally considered a better way of controlling access to system commands, because you have fine control over what each user can do, and because no one else needs to know the root password. Back to the list ****** Login lockout Q:: Can you help me get PCLinuxOS 2007 to boot again? I have it installed on its own hard drive and I was slowly getting to grips with it, but during a recent house move I lost the notebook containing my login details. One forum suggested I just put the DVD back in and reload it - I've tried, but I still get asked for login details. I'm at a loss as to what to do next, so any help would be appreciated. A:: You're still booting from the hard disk. To boot from the DVD, you need to call up your BIOS's boot menu to choose the DVD as your boot device.
You should see a message flash up when the computer first starts, telling you to press one key for settings and so on. Unfortunately, the key used varies from one motherboard to the next - the four computers here use F8, F11, F12 and Esc. Sometimes the message refers to a BBS menu. It should also be explained in your motherboard manual, if that isn't lost along with the book containing your passwords. If you can't get a boot menu, call up the BIOS settings page and change the boot order so that CD/DVD comes before the hard disk. Once you've booted from your Live CD, there's no need to reinstall; you can reset the password with a couple of terminal commands. Open a terminal by clicking the Konsole icon and run these commands
---
su -
mount /dev/sda1
chroot /mnt/sda1
passwd yourusername
,,,
su gives you root access; the password is root. The next command mounts the root filesystem (PCLinuxOS installs to /dev/sda1 by default), then chroot enters that directory and makes it the root directory. Until you log out, you're now inside your original PCLinuxOS installation. You may see some errors about permissions in /dev/null when you run chroot, but you can safely ignore them. Now that you're inside your original installation, logged in as root, you can change the password with the passwd command
---
passwd myuser
,,,
Enter the password twice when prompted and try not to forget or lose it this time. If you have forgotten your login name too, you can see all the usernames in the file /etc/passwd.
---
cat /etc/passwd
,,,
Your username will be at, or very near, the end of this file. You can also reset the root password by running passwd with no username. Once you've reset the passwords, press Ctrl+D to log out of the root session and then reboot, letting it boot from the hard disk this time. You can now log in with your new password and username.
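If /etc/passwd is long, you can narrow it down rather than reading the whole file. A minimal sketch; the UID cutoff of 500 is an assumption (ordinary accounts on PCLinuxOS-era systems typically start at UID 500, though some distros use 1000):

```shell
#!/bin/sh
# Sketch: list likely login accounts instead of reading all of /etc/passwd.
# The UID cutoff of 500 is an assumption for PCLinuxOS-era systems; some
# distros start ordinary users at 1000. 'nobody' is excluded by name
# because it carries a high UID but is not a login account.
awk -F: '$3 >= 500 && $1 != "nobody" { print $1 }' /etc/passwd
```

The awk filter prints only the first field (the login name) of entries whose third field (the numeric UID) is in the ordinary-user range.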
Back to the list ****** On off internet Q:: I'm trying to restrict internet access (Wi-Fi) on one of my laptops, but only at set periods during the day and night. I presume it can be done as a Cron job, but I'm not quite sure of the syntax to set out the time frame and if I need to edit Cron as a user or via the root. Let's say the laptop can only access the internet from 08.00 until 17.00, then from 20.00 to 22.30 on Sunday-Thursday (school nights) but on Friday or Saturday it can connect from 08.00 until 23.00. Could it be done in a couple of lines, or do I have to run one command for each day of the week and the off times? Also, how do I stop all internet traffic, both Wi-Fi and Ethernet? A:: This can be done with Iptables, the program that controls the Linux kernel's firewall abilities. You can use this to block all outgoing traffic, but a cleaner solution is to block all traffic that's not destined for your local network. That way your child's computer can still access any shared directories or local servers, but outside internet access is prevented. The following command will allow connections to the 192.168.1.* network, but block everything else.
---
iptables -I OUTPUT ! -d 192.168.1.0/24 -j DROP
,,,
The -I OUTPUT part (that's a capital letter i) inserts the rule at the start of the output chain. Firewall rules are processed in order, with the first match used, so you want this to come before anything else you may have. This is important if you're already running firewall software, since that's normally set to allow outgoing connections and you want to override it. The -d 192.168.1.0/24 part matches any traffic heading for the 192.168.1.* network, but the preceding ! inverts this, so any traffic not for your network matches. The final part, -j DROP, tells Iptables what to do with this data - in this case, discard it. Since this rule doesn't specify an interface, it will block regardless of whether you're using a wireless or wired connection.
You could put this command into a Cron task, and add the corresponding rule to remove the restriction
---
iptables -D OUTPUT ! -d 192.168.1.0/24 -j DROP
,,,
where the -I (insert) is replaced by -D to delete the rule, and that would effectively switch the computer's internet connection on and off at the specified times. For example, by putting this in /etc/cron.d/firewall
---
0 8 * * * root /sbin/iptables -D OUTPUT ! -d 192.168.1.0/24 -j DROP >/dev/null 2>&1
0 17 * * 0-4 root /sbin/iptables -I OUTPUT ! -d 192.168.1.0/24 -j DROP >/dev/null 2>&1
0 20 * * 0-4 root /sbin/iptables -D OUTPUT ! -d 192.168.1.0/24 -j DROP >/dev/null 2>&1
30 22 * * 0-4 root /sbin/iptables -I OUTPUT ! -d 192.168.1.0/24 -j DROP >/dev/null 2>&1
0 23 * * 5-6 root /sbin/iptables -I OUTPUT ! -d 192.168.1.0/24 -j DROP >/dev/null 2>&1
,,,
The first rule turns the filtering off at 8am every day, the next three turn it on at 5pm, off at 8pm and on again at 10.30pm on Sunday to Thursday (days 0 to 4 in Cron terms). The final line turns on filtering at the later time for weekend use. There is one serious flaw with this approach: the computer has to be turned on for the Cron task to activate, so resetting it will cause the rule to disappear. One solution is a shell script that checks the time and sets the rules accordingly, which you can run from /etc/rc.local.
---
#!/bin/sh
DAY=$(( $(date +%u) % 7 ))  # Sunday=0 ... Saturday=6
HOUR=$(date +%H)
if [ $DAY -ge 5 ]; then
  # Friday and Saturday: access allowed from 08.00 to 23.00
  if [ $HOUR -ge 8 ] && [ $HOUR -lt 23 ]; then
    /sbin/iptables -D OUTPUT ! -d 192.168.1.0/24 -j DROP
  else
    /sbin/iptables -I OUTPUT ! -d 192.168.1.0/24 -j DROP
  fi
else
  # Sunday to Thursday: access allowed 08.00-17.00 and 20.00-22.00
  if [ $HOUR -ge 8 ] && [ $HOUR -lt 17 ]; then
    /sbin/iptables -D OUTPUT ! -d 192.168.1.0/24 -j DROP
  elif [ $HOUR -ge 20 ] && [ $HOUR -lt 22 ]; then
    /sbin/iptables -D OUTPUT ! -d 192.168.1.0/24 -j DROP
  else
    /sbin/iptables -I OUTPUT ! -d 192.168.1.0/24 -j DROP
  fi
fi
,,,
This may look complicated, but all it does is retrieve the day and hour from the date command and make decisions about whether to turn filtering on or off based on this.
You may want to tweak it to suit your needs, but it's a good starting point. Back to the list ****** Sharing data Q:: I was running Linux Mint on an old laptop for about six months when I had to replace the hard drive. (Luckily, I was able to recover the partitions with Clonezilla.) I've been thinking about using all the extra space on my new hard drive by dual booting with other OSes. Except there seemed to be various pitfalls to sharing the home partition directory with hidden files. My question is, can you have a common partition to keep music, photos and text files that can be read and modified by the different OSes? If so, how do you go about setting this up, and would you still need to keep a separate /home for each OS or could the directory be left within the root folder? Also, would it be better to stick with the Gnome desktop in other OSes? You may say that I could use Samba or NFS to get at the files, but I've only been using Linux for about a year and just about productively since I installed Mint, so I'm not too au fait with how these work as yet. I'm still climbing the learning curve. A:: Samba and NFS are for sharing files across a network, not within a single computer. You should have a single /home partition, but use a separate directory within that partition for each distro. The idea is that you then have one partition for each OS that you install, plus a single partition for all your own data (and a single swap partition as well). By keeping separate home directories within the single /home partition, you avoid any problems with clashing configuration files. The only thing you need to do is make sure that your users have the same numeric user ID in each distro. As far as sharing data is concerned, you can do this with symbolic links. Let's say you have a username of steve on Mint, with a home directory of /home/steve and you install OpenSUSE. On that distro you would still use a username of steve, but set the home directory to be /home/steve-suse. 
Linux doesn't care what your home directory is called - /home/username is only used because it is easy to see which directory belongs to which user. Assuming you're incredibly well organised and keep your photos in /home/steve/photos, your music in /home/steve/music, your mail in /home/steve/mail and so on, create these symbolic links:
---
ln -s /home/steve/photos /home/steve-suse/photos
ln -s /home/steve/music /home/steve-suse/music
ln -s /home/steve/mail /home/steve-suse/mail
,,,
This makes the sharing totally transparent, and everything appears to be in your home directory, in the same layout, whichever distro you are running. If you're using KDE, you can create a symlink by dragging the folder you want to share over a directory and dropping it with the Control and Shift keys held down, or with no keys and choosing Link from the menu that pops up. The only reason to stick with the same desktop in all distros is if it's the only one you like. If you want to experiment, go for it. Each distro is separate, so what you run on one won't affect any of the others. Back to the list ****** Overly clever modems Q:: I recently converted a friend of mine to Linux - Ubuntu to be exact - and he really likes it, but he needs to connect to the web via a Bandrich C-100 modem. I've tried every suggestion on the forums and nothing works - it will not connect. What am I doing wrong? Because of this little glitch there are two other potential users who were going to switch to Linux but who are hesitant now because of this problem. A:: The Bandrich C-100 is an Express Card or USB 3G modem that uses the same trickery as the ones mentioned in that feature, presenting itself as a mass storage device (a fake CD-ROM containing the Windows drivers) as well as a modem. This modem is similar to the Novatel modem in the feature, meaning that when the storage device is activated, the modem is hidden. There are three possible ways to fix this.
The manual method is to use the eject command to get rid of the fake CD
---
eject /dev/sr0
,,,
at which point the modem should appear as /dev/ttyUSB0. The second option is to let udev handle this for you by adding one of these lines (not both) to /etc/udev/rules.d/10-local.rules; create the file if it does not exist.
---
SUBSYSTEM=="block", ACTION=="add", SYSFS{idVendor}=="1a8d", SYSFS{idProduct}=="1002", OPTIONS="ignore_device"
SUBSYSTEM=="block", ACTION=="add", SYSFS{idVendor}=="1a8d", SYSFS{idProduct}=="1002", RUN+="/usr/bin/eject %k"
,,,
The first option ignores the fake CD completely, the second ejects it as soon as it appears. Try each of these in turn and one should cause /dev/ttyUSB0 to appear when the modem is plugged in. Once that device appears, you can use any PPP dialler to connect to your ISP. A third option is to try it with the latest Ubuntu release, 8.10. We've tested it previously and found its detection and autoconfiguration of GSM modems to be excellent. There was no need to fiddle with udev rules or dialler scripts - it just worked. Back to the list ****** Best Linux distro for information kiosks? Q:: I am about to start developing an information kiosk that will run from a bootable CD. This is not an internet café-type application but rather what one might see in a mall or similar offering local information. I would appreciate your suggestions on good starting points. I'm leaning towards a Linux-based system but have no idea which distribution I'm going to use. It must have a small footprint window manager, like FVWM, and a browser without toolbar. A printer and magnetic card reader will be part of the hardware environment. A:: There is a selection of bootable Linux distributions that will do a great job of providing a dumb terminal for a kiosk. PCLinuxOS is a great little desktop distribution that runs KDE and provides a selection of applications.
FVWM is a little antiquated, but alternatives such as Sawfish or Blackbox are ideal for use on a simple desktop (see Roundup on page 36). Of course, you need not run a window manager at all, if all you need to do is provide browser access. Simply start the browser and pass it a geometry that fills the screen; moving and resizing capabilities are then not needed. Back to the list ****** Skype sound loss Q:: I like to use Skype to talk while playing online games. I've just switched over to Ubuntu 8.10 from Windows, but I've found I'm unable to use sound in more than one application at once. Furthermore, after one Skype call ends, I need to play a random sound to 'reset' the sound device, otherwise I'm told there's an audio playback error. The worst thing is that if I'm playing games and someone calls me, I can't answer their call because of this, so I have to exit the game or start a call before playing. My webcam also won't work with Skype, yet it will with Ekiga. A:: Did you install Skype from a file downloaded from skype.com or via the Synaptic package manager? If it's the former, you should uninstall this and use Synaptic. Skype is not included in the standard Ubuntu repositories, so you'll need to add the Medibuntu repository before you can install software from it. This means you'll get versions tested for Ubuntu, be notified of updates and gain access to other useful software in the repository. Add Medibuntu by typing
---
sudo wget http://www.medibuntu.org/sources.list.d/intrepid.list --output-document=/etc/apt/sources.list.d/medibuntu.list
,,,
into a terminal while Synaptic isn't running. You can also find this command at https://help.ubuntu.com/community/Medibuntu, so you can paste it into the terminal to avoid typing errors. Then run Synaptic, click on Reload to get the latest list of packages and search for Skype. You also need to make sure you have the correct devices selected for Skype.
As your webcam works with Ekiga, check that you have the same device selected in Skype. This is usually /dev/video0, unless you have a TV card fitted, in which case that will be video0 and your webcam is video1. I've also noticed that Skype only picks up devices that are connected when it starts, so ensure you plug in the camera before running Skype. Your sound problem sounds like (sorry) Skype is trying to use OSS, the older sound system for Linux, and not ALSA (Advanced Linux Sound Architecture). ALSA provides software mixing, so that more than one program can use the sound device at the same time, whereas OSS locks up the device for its own use, preventing any other program from using it. Skype gives an array of choices for audio devices, and the default option is often not the best one. If you try the other devices in turn, this problem will almost certainly go away. You may find a similar solution applies with the other programs as well, depending on whether they allow you to choose the sound device. If not, installing the alsa-oss package should enable any OSS programs to be run through ALSA. Back to the list ****** DVD into CD won't go Q:: I have an obsolete PC with a Pentium 2 running at 400MHz and 128MB of RAM. It has a floppy drive, a CD-ROM drive and a hard disk with only 4GB of storage space. There's Antix, which seems to be a perfect fit to give me a starting place for learning Linux, but I don't have a DVD drive. My friend who does have one is a Windows man who doesn't understand 'an ISO image for burning to a CD'. Do you know some kind soul who would make the CD for me and post it? The addition of Gambas would be wonderful as a replacement for QBasic. So would those newbie guides you say are there. Of course, I'll be happy to pay for the CD and any other costs involved in creating it. A:: An ISO image is just the contents of a CD or DVD as a single file. It's an exact copy of the data on the CD or DVD, ready to write straight to the disc. 
All CD/DVD burning programs can burn an ISO to a CD for you, although the exact options you'll need to select can vary. The first step is to copy the antiX-M7.5.iso file from the DVD to My Documents or any other convenient location. Then replace the DVD with a blank CD-R and start up your CD burning software. If you use Nero - a limited version is often supplied with PCs - you simply select Burn Image from the File menu. An Open dialog pops up to let you select your ISO image, although you may need to set the file type to All Files to see it. Select the Antix ISO image and press Open. In the options window that opens next, leave everything as it is, press OK, then press Burn - that's it. Burning ISO images is easier than creating a CD/DVD from scratch, because all the settings are taken care of in the image file. If you don't have Nero or a similar program that you can use, there's a free CD burning program for Windows called Express Burn, available from www.nch.com.au/burn. Install this in the usual way, run it and select 'Write ISO image to a disk' from the Burner menu. Select the Antix file and press OK when the Burn Target window opens. We're unable to supply individually created CDs, but you can copy any of the files from the DVD to a CD using any of the standard CD writing programs and then read them on your PC. Alternatively, for the price of a box of blank CDs, you could buy a basic DVD-ROM drive for your old computer. Linux treats CDs and DVDs in the same way - as far as the OS is concerned, a DVD just holds more. Even video DVDs are the same format as data discs, so there would be no compatibility issues to face in replacing your CD-ROM with a DVD-ROM drive for use with Linux. Back to the list ****** Software installation Q:: I'm a recent convert to Linux and I can't see how to install new software. When I read about software installation, I keep seeing instructions on compiling from source. Why can't it be as easy as installing in Windows?
A:: Linux is an open source system, so it's normal for software to be distributed as source code. However, that doesn't mean you need to compile the software yourself, at least not in the vast majority of cases. The Windows method is quite haphazard - you have to go trawling various websites to find program installers, and then go back to them regularly for updates. There's also a risk that you'll download an infected program, as you're using a host of websites you know little about. Linux distros use a completely different method, which is based on package managers, such as Ubuntu's Synaptic. These use repositories - large collections of software ready to install on your computer. The package manager also handles dependencies, where one program requires another to run. For example, program A may need program B, which in turn needs library C. This is more common with Linux than Windows, since programs usually call on other programs and libraries to share the work instead of reinventing the wheel. The package manager takes care of these dependencies: ask it for A and it will tell you that B and C are also needed, then download and install all three for you. How does this work with Ubuntu in particular? Run Synaptic from the System > Administration menu and you'll see a list of all the software installed and available. It's initially sorted into sections, so you can browse for software of a particular type. If you know the name of the program you need, type it in the search box. Once you have what you need, select it and press Apply. Synaptic will then download, install and configure the software for you. It will also let you know when there are updates to your program through the Ubuntu Update Manager. What if your program is not in Synaptic? The first step is to check the other repositories. Most distros split packages between various locations, and commercial or otherwise non-free software is often in a separate repo, so you can exclude it.
There are also legal issues with distributing certain kinds of software in some countries, and Linux distros are global. So these programs, such as the CSS libraries to read encrypted DVDs, are kept in separate repositories, not in the mainstream distro. Ubuntu has Medibuntu (http://www.medibuntu.org), Mandriva has the Penguin Liberation Front (http://plf.zarb.org), SUSE has Packman (http://packman.links2linux.org) and so on. Check the websites for details of what they include and how to add them to your package manager. This is a one-off task, adding a line or two to a file or a GUI setting, after which the extra packages are always available to you. Back to the list ****** USB boot alternatives Q:: I have tried to install Ubuntu on my old HP Pavilion Notebook. Since I was short of disk space (and still need Windows) I installed it on a USB disk drive. The install went well, but rebooting produced a Grub error. I now understand from reading around that some old machines do not allow booting from USB, indeed there is no USB option in the BIOS (or Smart Boot Manager). So I restored my Windows boot sector with the MS recovery disk, which is now fine, but leaves me with a USB disk that is loaded up with Linux (fully intact from what I can tell), without any possibility of use. As Linux advocates this must annoy you as much as me! I was wondering - since my machine will boot from a floppy disk, is it possible to put some sort of Linux kernel on a floppy that would boot Linux but access the redundant disk for the bulk of the Linux applications (perhaps even the graphical user interface)? I see advantages to this over the Live DVD, in that I could maintain my changed preferences etc and augment the distro with other applications I would like to have. Is this a viable option? I would greatly appreciate some advice. A:: This is not just annoying from a Linux perspective: if your computer won't boot from USB, the operating system is irrelevant.
However, USB booting is a source of great frustration, as not only is the computer's BIOS a factor, but some USB devices work better than others, so there is much trial and error involved. It would appear that you installed Grub to your hard disk, so the system at the time of installation sees the hard disk as the first disk and the USB disk as the second, but when you try to boot up, the second (USB) disk is not there, resulting in the Grub error (which is almost certainly a "disk not found"). It looks unlikely that your computer supports USB booting if there's nothing in the BIOS, but some computers do provide a boot menu if you press a key (Delete, F12 and F2 are the most commonly used) immediately after starting up. Unfortunately, there is no standard for this, so you'll have to read the manual or watch the power-on messages to see if there is such an option, or try holding down a different F key each time you boot until you strike lucky. You have ruled out the use of a Live DVD, but many of them have an option to mount a USB drive as your home directory, enabling you to save your documents and settings and even install extra software, although there isn't much that something like a Knoppix DVD doesn't include. If your computer really doesn't support USB booting, this would seem a better option than using an unreliable floppy disk. Boot from a Knoppix CD or DVD, plug in a USB drive and select the Menu item Knoppix > Configure > Create A Persistent Knoppix Disk Image. Answer the questions, and it will create a file called knoppix.img on the disk that contains your home directory and settings. When you reboot, Knoppix should detect this file and ask if you want to use it, or you can specify the location by typing --- knoppix home=/dev/sda1 ,,, at the boot prompt.
I would like to leave the umask set at 022 for default permissions on all other files. I can create a group and set the permissions on the files in the shared directory to 660 with the group set correctly, but that doesn't solve the problem for new files, which may be created by any user. Not all the users have Linux savvy, and anyway it feels unnecessary to have to change groups and permissions by hand every time. One option would be to write a daemon to watch for new files and change permissions. Is there a better way, and has anyone done it already to save me the coding practice? A:: You have discovered one of the limits of standard Linux file permissions. You could setgid the directory, meaning that any member of its group could create files in it, but they would still be writable only by the user that created them. The answer lies in ACLs (Access Control Lists), which provide much finer control over file permissions. There are three prerequisites to using ACLs. First, your kernel must include ACL support for the filesystem you are using (standard distro kernels will already have this). Then you need to mount the filesystem with ACLs enabled, by editing /etc/fstab and adding acl to the list of options, for example by changing --- /dev/sda5 /home ext3 defaults,noatime 0 0 ,,, to --- /dev/sda5 /home ext3 defaults,noatime,acl 0 0 ,,, and either remounting the filesystem with --- mount /home -o remount ,,, or rebooting. This step is not necessary if you use XFS, as it has ACL support by default, but if you use ext2/3 or ReiserFS you'll need to change fstab and force a remount. Finally, install the acl package, which includes the userspace tools used to control ACLs. Now you can add the users to the same group, say, 'project', and set things up like this
---
mkdir shared
chmod 2775 shared
setfacl -m default:group:project:rwx shared
,,,
As long as your user has full write permissions in the directory where you do this, you do not need to be root.
This creates the directory and makes it group-writable and executable. The last line does the clever stuff, setting a default access rule for the directory so that all files have rwx permissions for all members of the project group. A default rule applies to all new files, so you need to issue this before you place any files in the directory. Or you can set existing files with --- setfacl -R -m group:project:rwx shared ,,, This has no default parameter, but -R makes it recurse over all existing files and directories. You can also set access controls for individual users, like --- setfacl -m default:user:fred:r-x shared ,,, which gives this user read and execute, but not write, permissions. While you are experimenting with setting ACLs, you will find these commands useful
---
getfacl shared
setfacl -x default:user:fred shared
,,,
The first lists the ACLs for a file or directory, the second removes them. The syntax for -x is the same as -m, except that you don't give the permissions. Back to the list ****** Disk diagnostics Q:: Lately Seti@home is giving a signal 11 error while it does its number crunching. This error has been related to problems with hard disks. The problem is that I don't know what the best way to check my hard disk is. I use Kubuntu 8.04, and have four partitions: one for Windows (vfat), swap, root (ext3) and /home (ReiserFS). I had the ReiserFS partition because it was the default when I installed my first Linux (SUSE 7.2). I think, though I could be wrong, that I haven't formatted or checked the /home partition since I created it six years ago, which I believe is quite impressive (for me, at least) - six years without any problem! What should I do? Boot using a Live CD? Just use some CLI magic? Or is there something graphical I should use? A:: There are two separate entities to consider here: the physical disk and the filesystem residing on that disk. The filesystem is the most straightforward, as you can simply run fsck over it.
Of course, nothing is quite that simple, and you should not try to fsck a mounted filesystem. You could unmount /home, but that would only work if you've set up a root login. Then you could log out of the desktop, switch to a virtual console with Ctrl+Alt+F1 and log in as root. You need to log out of the desktop because you cannot unmount /home while any user but root is logged in; this is why you need a separate root login and cannot use sudo. Now you can run fsck on that partition --- fsck /dev/sda4 ,,, The alternative is to use a Live CD or DVD, which would also enable you to fsck the root partition. This is usually the simpler option if you do not mind rebooting the computer. Any Live CD distro will contain the fsck tools, but my current favourite for this type of work is GRML (www.grml.org). This is based on Debian and aimed at system administration and rescue. Another alternative would be SystemRescue CD (www.sysresccd.org). Fsck checks the filesystem for corruption but does not look at the underlying hardware. For this, Smartmontools is a good choice. It's included in the software repositories of most distros. Smart (Self-Monitoring, Analysis, and Reporting Technology) is a way for hard disks to monitor their own performance, running a set of self-tests to detect and even predict failure. You may need to enable an option in your computer's BIOS for Smart to work, then install Smartmontools and edit /etc/smartd.conf to tell it which drives to test and how. A good starting point is --- /dev/sda -d sat -I 194 -I 231 -I 9 -W 5 -a -m me@example.com ,,, This monitors /dev/sda as a SATA drive, ignoring attributes 9 (power-on time), 194 and 231 (temperature) but reporting temperature changes of more than five degrees. The -a option says to monitor all other attributes, while -m gives an email address for errors and warnings. If you have a PATA/IDE drive, use -d ata instead of -d sat. Set smartd to start at boot in your services manager and your drive(s) will be continually monitored.
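If you have drives of both types, smartd.conf simply takes one directive per line, one line per monitored device. Here is a sketch with a more minimal set of options than the example above; the device names and email address are illustrative:

```
# Excerpt from /etc/smartd.conf - one directive line per monitored drive
/dev/sda -d sat -a -m me@example.com
/dev/hda -d ata -a -m me@example.com
```
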
You can run an immediate health check on the drive with --- smartctl --health -d sat /dev/sda ,,, and run various tests with --- smartctl --test=TEST -d sat /dev/sda ,,, where TEST is one of offline, short or long. See the smartd and smartctl man pages for (a lot) more information on the various options, attributes and tests you can use. Back to the list ****** What he sed Q:: I have a large file containing 800-900 lines. Now I want to write a script that can find a particular expression in the file and then change or replace one word in that line and another word in the line below it. For example, if a file contains
---
I am in London
Family is in Canada
,,,
I want to find the expression "in London" and replace "I am" with "You are", and in the line below I want to replace "Family is" with "girlfriend in". I chose such expressions because my search may be an expression containing spaces, and the replacement may be a whole sentence. A:: Sed (Stream Editor) is the program you need. This can replace, delete or otherwise manipulate data in a file using regular expressions. The simplest replacement is something like --- sed 's/I am/You are/' myfile >newfile ,,, This is the most common usage of Sed, to replace occurrences of a string with another, but you have asked for something far more specific: to replace only on lines containing a string and the lines following them. This can be done by limiting the command used to an address range. The command used here is s, the substitution command, and in the above example it is applied to every line of the file. To limit its application, you prefix it with an address range, which can be a pair of line numbers - except you don't know the line numbers in this case, so we use --- sed '/in London/,+1s/I am/You are/' myfile >newfile ,,, Here the address range consists of two components, separated by a comma.
The first is a pattern match, defined by the slashes surrounding the string, so it matches any line containing "in London". The second component is +1, which extends the range to one line after the line that starts the range. You can use any number here, but each line can only be in one range. So if lines 3, 4 and 7 match, the above command would be applied twice, first to lines 3 and 4 and then to lines 7 and 8. All other lines are passed through unchanged. The address is followed by the s command. In this example we make only one substitution; to apply multiple commands to the address range, enclose them in braces, separated by semicolons. This will do just what you are asking for. --- sed '/in London/,+1{s/I am/You are/ ; s/Family is/Girlfriend/}' myfile >newfile ,,, In each case, we have redirected the output to a new file. Sed has a -i option to edit the file in place, but don't use this until you are sure your syntax is correct or you have a backup of the original file, as a slight typo could destroy your data. Sed doesn't replace the file until the command is complete, so a syntax error will leave the file unchanged, but a valid command that doesn't do what you expect it to would cause a problem. Sed is very powerful but the man page is basic, so read either the info docs or http://sed.sourceforge.net. Back to the list ****** Failsafe booting Q:: Most distros use Grub nowadays and I can see how it is easier to use than Lilo. But there appears to be one very important feature missing: Lilo can use the -R option to boot a kernel once only, so the next time it goes back to the old default, which is handy for testing new kernels. If it fails, a reboot goes back to the old kernel. It is really important to be able to do this on a remote server and I can't see how to do it in Grub. A:: This is possible with Grub, although a little less intuitive.
There are a couple of methods available, but the fallback is the most flexible. Here is an example configuration
---
root (hd0,0)
timeout 10
default saved
fallback 1 2

title new kernel
kernel /boot/vmlinuz root=/dev/sda1 panic=10
savedefault fallback

title old kernel
kernel /boot/vmlinuz.old root=/dev/sda1 panic=10
savedefault fallback

title rescue shell
kernel /boot/vmlinuz.old root=/dev/sda1 init=/bin/bb
savedefault
,,,
The default saved line tells Grub to use whichever menu item was saved in /boot/grub/default (or the first item if that file is not present). So Grub boots the first item, your new kernel, and its savedefault fallback line sets the default to 1, the second menu choice (remember Grub counts from zero). The panic=10 kernel option means that if the kernel panics, it will reboot after ten seconds, and Grub will then use your new default of the second entry. That should be enough, but for belt and braces you can repeat the process with this kernel, which will then use your second fallback if things go wrong. This final choice simply uses savedefault, which saves itself as the default. You may have noticed a slight flaw with this process in that your fallback is always saved as the new default, even if the boot succeeds. This is handled by running the command --- grub-set-default 0 ,,, at the end of a successful boot, usually from your rc.local script, depending on your distro. Although this is a more complex system than Lilo's -R (there is a slightly simpler alternative), it is far more useful, as it is always in effect. Any time your computer reboots without completing the full boot process, it switches back to your old kernel, whereas Lilo requires a specific command to do this. Back to the list ****** Automated virtualisation Q:: On start up I want Ubuntu to start without the need for me to input a username and password and then open VMware and start a virtual machine.
I know how to do all of these things manually, but is it possible for me to write a small batch file (if that's the right expression) to do it for me automatically? A:: There are two steps here: logging in to your user's desktop automatically, and running a program after logging in. The first is achieved in Ubuntu 8.10 by selecting the option to automatically log your user in during installation. If you have an earlier Ubuntu or you have already installed 8.10, you can set this option by running System > Administration > Login Window: go to the Security tab, tick Enable Automatic Login and select the user you want logged in. Gnome will start any program you tell it to when it starts up. Go to System > Preferences > Sessions, press Add and type the command you want to run, along with a name and description. To start VMware Workstation with a particular virtual machine, use --- vmware -X /path/to/virtualmachine.vmx ,,, where -X tells VMware both to start the virtual machine and to switch to full-screen mode, and the rest is the path to the .vmx file for the virtual machine. If you're using VMplayer, replace vmware with vmplayer in the above command. If you use VMware-server, you need to make sure that the server is running at startup (use the Ubuntu session manager for this), and then use vmrun to start the virtual machine. Back to the list ****** Protecting my data Q:: I was burgled last week! They took two TVs and a load of PC equipment including all my external drives. This has made me really paranoid about keeping the data on my laptops and desktop safe. Is there a way to encrypt the hard drives of my PC and laptops without reformatting them? A:: Sorry for your bad news. There are two main ways of encrypting a filesystem: one is to create an encrypted block device and then create a filesystem on top of that.
This is the method I generally use, but the actual implementation depends on your chosen distro, as it has to request a decryption key and unlock the filesystem before it can be mounted during the boot process. The commands (run as root) to implement this are
---
cryptsetup luksFormat /dev/sda2
cryptsetup luksOpen /dev/sda2 home
,,,
The first creates an encrypted block device on /dev/sda2, and only needs to be done once. The second opens the encrypted device, prompting for a passphrase when used like this, although it can also use a keyfile on a removable device. It creates the block device /dev/mapper/home, which you can format, mount and use just as you would any other block device. The main drawback here is that you have to reformat part of your disk to use it. The other option is a stacked filesystem, which is where one filesystem is layered on top of another. The most popular option for encrypted stacked filesystems used to be EncFS (a FUSE filesystem), but now the Linux kernel has ecryptfs built in. You need to install ecryptfs-utils, which should be in your distro's repositories and may already be installed. This contains the tools to create and manage ecryptfs filesystems. Then you can create one with the following:
---
mkdir .private
mkdir private
sudo mount -t ecryptfs .private private
,,,
You will be asked some questions; set the key type to passphrase, the encryption cipher to AES, key bytes to 16 and passthrough to off. You can use different settings if you wish, but these are a good starting point. The encrypted layer is created and anything you write to private is actually saved as an encrypted file in .private. Try copying some files to private and then reading them. Now try to read the same files in .private. Unmount private with --- sudo umount private ,,, and this directory will now be empty while the encrypted files are still present in .private. To mount the directory again, you need to specify the options you gave when you created it.
--- sudo mount -t ecryptfs .private private -o key=passphrase,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_passthrough=n ,,, You will be asked for the passphrase again, and then your files will be readable. You can attach this command to a launcher icon or call it from your desktop's session manager to have the directory mounted automatically when you log in. Make sure that you set the command to run in a terminal, as this will be needed to input the passphrase. You could also put the options in /etc/fstab, like this: --- /home/user/.private /home/user/private ecryptfs noauto,user,key=passphrase,ecryptfs_cipher=aes,ecryptfs_key_bytes=16,ecryptfs_passthrough=n 0 0 ,,, The use of noauto prevents a mount attempt when booting, while the user option lets any user mount it, provided they know the passphrase, without the need for sudo. You can either keep your confidential data in private and change the configuration of your programs, or move the various data directories into the private directory and set up symbolic links from their original locations. When making backups, copy .private instead of private, then your backed up data will remain encrypted. Back to the list ****** How to install via RPM Q:: I have been using your excellent Mandrake 10.1 distro with the Gnome 2.6 desktop. This has been installed on a standalone computer, rather than a dual installation. I am considering buying the next edition from Mandrake, as it has worked very well, but I've been rather puzzled by the software installation procedure. I have downloaded various items into the Mozilla download manager but installing these files has so far proved impossible. I've used Start > System Packaging > Install Software > RPM Drake and also Media Manager > Configure Media. RPM Drake lists all the software that was on the distro CDs and not installed. No matter where I tell it to search, or whatever combination of file names I list, the answer is always 'Search results (none)'.
I have also tried other ways, like putting the software programs on a zip drive and installing from /mnt/removable, but to no avail. Do you think there is a problem with the downloads themselves? Is Install Software not working or have I got the whole thing wrong? A:: To access the RPM database or install software you have to be the root user. The database is designed so that non-root users can't easily establish what is installed, or modify the RPM database in any way. You can install software via the command line using rpm -Uvh <filename> to install a package you downloaded, which will allow you to tell whether the problem lies with the Mandrake package management front end or with something within the RPM tools. Back to the list ****** The fat lady crashes Q:: I tried Opera 9.26, 9.27 and 9.52 from my Mandriva 2008.1 PWP. I then installed some of the widgets that interested me. On trying to use it my PC [Dell Optiplex GX110] kept locking up. I rapidly removed it. Any suggestions? A:: Did Opera work correctly before you installed the widgets? If so, try a clean installation of Opera, then install the widgets one at a time, running them for a while between each installation. When you find the one that causes the problem, file a bug report on the developer's web site so they can fix it. You may find it more useful if you launch Opera from a terminal; just type --- opera ,,, to launch it. This won't prevent any of the problems, but it will give more information about what went wrong. Even if the output means nothing to you, it could be helpful to the developers in tracking down the source of the failure. If the problem is with Opera itself, try to open it again from a terminal to see any error messages that might come up. In many cases, pasting the error message into your favourite search engine will bring up reports of others sharing the same problem, and often a solution.
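To keep a copy of that terminal output for a bug report, you can merge the error stream into the normal output and pipe both through tee, which shows them on screen while also writing them to a file. Here's the technique sketched with stand-in echo commands in place of opera (app.log is an arbitrary name):

```shell
# Merge stderr into stdout, then tee both displays the output and logs it.
# The braced echoes stand in for a real command such as 'opera'.
{ echo "normal message"; echo "error message" >&2; } 2>&1 | tee app.log
```

With Opera itself this would be opera 2>&1 | tee app.log, leaving app.log ready to attach to a bug report.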
Back to the list ****** Video for Androids Q:: I have recently treated myself to a G1 phone running Google's Android operating system. I really like it but am faced with constant battles with my iPhone-owning friends: "can it do this" or "why can't it do that?". So far it has only failed to acquit itself on two counts; I can live with it not imitating a lightsabre, but trying to get videos on to the thing in a playable format is driving me nuts. Is there a simple way to convert video files or DVDs to a format playable by the G1? For that matter, what formats does it support? I am using OpenSUSE 11.0, if that makes a difference. A:: The G1 doesn't have a native video player, but there are a few available on the Android market, like the imaginatively named Video Player or Cinema. The range of formats supported by these programs is extremely limited, so you have to be careful to use exactly the right settings when converting your videos or they simply will not play. A good program for performing these conversions is Avidemux (http://fixounet.free.fr/avidemux) but this is not in the default repositories for OpenSUSE 11.0. To get it, go into the Software Repositories section of Yast, press Add and select Community Repositories. Add Packman, which contains Avidemux. You should also enable the Main Repository (OSS), if this is not already done, for its dependencies. Now you can search for and install either avidemux-gtk or avidemux-qt in the Software Management section. These are basically two faces of the same program. Fire up Avidemux, press Open and select a video file. Set the Video type to 'MPEG-4 ASP (lavc)', press Configure and set the Encoding mode to either 'Constant Bitrate' or 'Two Pass - Average Bitrate'. The latter gives better quality, at the expense of a doubling of processing time. Pick a bitrate, bearing in mind that you will be viewing this on a three-inch screen, so don't go silly. I find 384kb/s gives decent quality and small files.
In the Filters window, double-click on MPlayer Resize and set the width and height to 480x320, press OK then Close. The Audio should be set to 'AAC (FAAC)' at a bitrate of 96 or a little higher. Set the format to MP4 and you're ready to go, but first you should select a small section of the video for a test run. Drag the slider to a suitable start point and press the A button, then move it on a minute and press B. Press Save, pick a filename and give it a few seconds to process the video, depending on the speed of your computer. Make sure you give the file a .mp4 extension, or Video Player will not find it. Copy the resulting file to the SD card of the phone (remember to disconnect afterwards to make the SD card visible to the phone again), and start Video Player. It should pick up your video wherever you saved it on the SD card. If it plays well, you can transcode the whole video. Back to the list ****** Ubuntu APT package caching with a proxy server Q:: We've been using Ubuntu 8.04 for computer learning in the community. I want to set up an Ubuntu update server that will be used by 30 client machines. This server updates the packages, and these packages should be available for all the Ubuntu clients and servers in the network so that none of the other machines need to go on the internet and use my limited bandwidth. If a client machine needs a package it should look for it in the local server, and if it's not there the server should download it from the internet and serve it to the client, keeping a copy in case other clients need the same package. Then I can be sure that any packages get downloaded only once, which should save time as well as bandwidth. A:: What you're looking for is known as a caching proxy server. These are commonly used by intranets and ISPs to reduce bandwidth requirements. 
The individual web browsers, or any other applications, request files from the proxy, which downloads them, sends a copy to the requesting program and keeps another for the next time the same object is requested. The most popular open source proxy server is Squid (www.squid-cache.org), but this would be overkill for your needs. There are a number of lightweight proxy servers designed specifically for caching packages for Debian-based distros, including Ubuntu. At least four of these are included in the standard Ubuntu repositories, one of which is apt-cacher. You only have to install this on the server (the computer that will be acting as the cache). Once you've installed apt-cacher, there are a few settings in the config file at /etc/apt-cacher/apt-cacher.conf that you'll need to change. The first is cache_dir, which is where apt-cacher stores the files it downloads. Make sure this points somewhere with a lot of space, if possible using a separate filesystem so that it won't affect your system if it fills up. The next settings you'll need to change are allowed_hosts and denied_hosts, which control the computers that are allowed to connect. In most cases, you want all computers on your LAN to have access and no others, so you can leave denied_hosts empty and set allowed_hosts to the address range of your LAN. This can be either a network address and mask or a pair of addresses defining the range, for example one of
---
allowed_hosts=192.168.0.0/24
allowed_hosts=192.168.1.1-192.168.1.50
,,,
Read through the comments in the file, but you can leave the rest of the settings at their defaults to start with. Now edit /etc/default/apt-cacher and set AUTOSTART to 1, so the server starts each time you boot. Once the configuration is set up, reload the server with --- sudo /etc/init.d/apt-cacher restart ,,, Now you need to set up each of your computers to get their packages through apt-cacher, starting with the server running it.
Create a file in /etc/apt/apt.conf.d, say /etc/apt/apt.conf.d/10apt-cacher, containing this line --- Acquire::http::Proxy "http://127.0.0.1:3142/apt-cacher/"; ,,, Repeat this process on the other machines, but use the IP address of the server instead of 127.0.0.1. Try installing a package or two on one of the computers, then look in the packages directory under cache_dir and you should see the Deb files there. Install the same packages on another computer and you will see almost instant download times. Back to the list ****** Removing hidden Windows partitions Q:: I've been contemplating a Linux-on-USB-key project. I have a 4GB Toshiba USB stick which has an extremely annoying 0.3GB partition with some sort of automount 'smart' software (for Windows, naturally) that tries to self-install when connected to a Windows machine. Under Linux it appears as a separate partition, and does no harm. My problem is that when I tried PartitionMagic to remove it and recover the wasted space, it reports that the partition is formatted as a CD-ROM and I cannot get rid of it. Apart from judicious application of a large ball-peen hammer, is there any methodology for excising this canker? A:: Simply removing all partitions with a Linux partitioning tool should work (that's how I did it with my SanDisk device). Linux programs, unlike a lot of Windows software, tend to assume that you know what you are doing, and if you want to reformat something it sees as a CD, that's your choice. It doesn't try to protect you from your perceived stupidity. If the graphical tools fail, there are a couple of other options you can try. Running --- cfdisk /dev/sdb ,,, in a terminal (make sure you use the right device name) should enable you to delete partitions and create a new one. It needs to be run as root, either from a root terminal or with sudo. If this fails, try --- cfdisk -z /dev/sdb ,,, The -z option causes cfdisk to ignore the existing partition table and start with a blank canvas.
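Before running either command, make quite sure that /dev/sdb really is the USB stick: cfdisk and dd will just as happily flatten your main hard disk. One check that needs no extra tools is to look at the block devices the kernel currently knows about (the device name /dev/sdb in these examples is only an illustration):

```shell
# List every block device and partition the kernel has registered.
# Run it, unplug the stick, run it again - the entry that vanishes is yours.
cat /proc/partitions
```

The sizes in the #blocks column are in 1KB units, so a 4GB stick shows up as roughly four million blocks, which also helps to tell it apart from your hard disk.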
Either way, cfdisk is entirely keyboard-controlled: D deletes a partition, N creates a new one, Shift+W (capital) writes the new layout to the device and H brings up the help when you get stuck. If even this fails, you can run (as root) --- dd if=/dev/zero of=/dev/sdb bs=512 count=1 ,,, to overwrite the partition table and MBR with zeros, guaranteeing a fresh start. You could omit the bs and count options to zero the entire device, but you really shouldn't have to resort to such an extreme measure. Back to the list ****** Fix forgotten WPA password Q:: I'm using Ubuntu Hardy on an Acer 5920 and running Windows XP in VirtualBox. I want to share folders between host and guest but am having trouble with my network settings. I can use the internet on the host over a WPA-encrypted link to my router but I've forgotten my WPA password. Network Manager has it stored, but is set to roaming mode, which won't let me choose between static and DHCP addressing. If I disable roaming it wants me to put in the WPA key. If I switch roaming back on, it all works again so I know it's stored somewhere. I don't want to reset the router because I'm plain lazy and don't want to set up all the family's computers to use a new key. I saved the router config to a file on my desktop PC but that's sitting in the loft because it won't fit anywhere else. So where do I find my WPA key? Is it just stored encrypted like my users' passwords? Or can I find it in plain text? A:: If you're using the standard Gnome desktop, Network Manager stores its passphrases and keys in Gnome Keyring. Go to Applications > Accessories > Passwords and Encryption Keys to open the keyring. The network keys are held under the Passwords tab; select the one you want and press Properties (or just double-click on the key). Press the arrowhead next to 'Password' and then tick Show Password to see your password or key.
It's a lot of mouse clicks, but that will teach you to be more careful with passwords in future. The alternative is to change the WPA passphrase on the router (assuming you've not lost the router's admin password too) and then update each computer or operating system to use the new key. Back to the list ****** .dmrc and .ICEauthority permissions problems Q:: I need to implement a backup solution and have installed TimeVault. I also installed Thunderbird as my email program. All seemed OK yesterday, but on startup this morning I got these error messages: --- "User's $Home/.dmrc file is being ignored. This prevents the default session and language from being saved. File should be owned by user and have 644 permissions. User's $Home directory must be owned by user and not writable by others." ,,, followed by: --- "Could not update ICE authority file /Home /Dave/.ICEauthority." ,,, Can you please advise as to how I correct this situation, bearing in mind I am a newbie but reasonably proficient with Windows. A:: It looks like the file permissions or ownerships of your home directory, and of at least some of the files in it, have been changed. You can fix this from the GUI, although the details vary according to which distro and desktop you are using. Open the file browser in your home directory (/home/Dave), and go up one level to /home. Right-click on the Dave directory and select Properties (this works in KDE and Gnome). Check that the owner is you and that it has read and write permissions.
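You can also check the owner and permissions from a terminal with stat. The sketch below rehearses the check on a throwaway directory so it is safe to paste anywhere; on the real system, point the same commands at /home/Dave instead:

```shell
# Rehearse the ownership/permissions check on a throwaway directory;
# on the real system, run the stat commands against /home/Dave.
d=$(mktemp -d)
chmod 755 "$d"                        # a healthy home directory mode
[ "$(stat -c %U "$d")" = "$(id -un)" ] && echo "owner OK"
stat -c 'mode=%a' "$d"                # prints: mode=755
rm -rf "$d"
```

The error message asks for a home directory that is owned by you and not writable by others, so a mode of 755 (or 700) with your own username as the owner is what you want to see; anything else is the cause of the warnings.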
It is possible to change these through the properties window, but only with root access, which some distros make difficult, so the quickest and easiest way is to open a terminal and, depending on your distro, either run the su command to gain root access or, particularly if you use an Ubuntu variant, prefix each command with sudo --- chown -R Dave: ~Dave chmod -R u+rw,go-w ~Dave ,,, The first command recursively (-R) changes the ownership of all files in your home directory (~Dave) to Dave. The trailing colon after the username means that the group ownership of each file is also changed to the user's default group. The second command recursively changes the permissions on all files and directories in your home directory by setting read and write permissions for the user (u+rw) and unsetting write permissions for the group and others (go-w). Any other permissions, such as execute permissions for any scripts that may have been placed in your home directory, are unaffected by these commands. Nor will they increase the permissions for the group and other users on any files, which is important, as some programs will not run if their files can be read by anyone other than the owner. These steps should prevent the error messages you are seeing. Back to the list ****** Mandriva Nvidia driver installation Q:: I am having difficulties installing the latest Nvidia driver for Mandriva One. I have installed it on my main system, multi-booting with two XP Pro installations on a spare partition. My difficulty is in installing Nvidia-Linux-x86-177.80.pkg1.run. I have been able to execute the installer from outside X as suggested, but encounter an error from the installer after unpacking. The error reports missing commands and with my fairly limited knowledge of the command line I can go no further! I am not presently connected online so I cannot use the package manager.
A:: It really helps if you can quote any error messages you see, as we're now left to guess at which commands are missing. However, I suspect that you don't have a compiler installed. The Nvidia driver package contains pre-built modules for various kernels, but if there isn't an exact match for your system it has to compile one. This is nothing to worry about, as the installer takes care of the whole process, but it does mean that you need to have a full compiler toolchain installed. It also requires the source code for the kernel you're using in order to build a matching module. At the very least you need to install GCC 4.2 and Automake, which should pull in all the other requirements as dependencies. This is not possible with the Mandriva One CD, as these packages are only included with the full Mandriva DVD. Some package managers, like Synaptic on Debian-based systems, have an option to generate a download script based on the packages you select. You can copy this script to a USB stick and use it to download the files you need on another, internet-connected, computer. Then you put the stick back in the offline computer and it will install all the packages you downloaded. However, Mandriva's package management system has no such option, making it less useful for an offline computer. Unless you can connect it to the internet for long enough to install the software you need, I would suggest either using Mandriva 2009, which is available as a full DVD, or a different distribution, like Debian or one of the Ubuntu variants. Back to the list ****** Imaging backup solution Q:: I have been looking to set up a backup solution for some time now and have decided to get an external hard disk enclosure and a new hard disk. I need an incremental backup tool like TimeVault or rsync, but I would like it to be imaging so that in the event of a hard disk failure I can get the boot sector, partitioning and all the rest of it back.
I have not found any Linux software that does that yet, and I was wondering if you could help. My idea is that I can plug it in once a week and have some udev rules or something mount it and start a shell script that would call the app with the appropriate options. Then in the event of a hard disk failure or accidental deletion of a file I can roll back to a previous backup, or if I get a bug in a script I've made I can get the old version back. I would greatly appreciate it if you could point me in the right direction to such a tool. A:: Incremental and image backups don't mix. An image backup is a copy of the disk, as you say, so by its very nature it contains everything. The most popular disk imaging program for Linux is Partition Image (www.partimage.org), which is most likely in your distro's repositories and also on many Live CD distros. The point about Live CDs is important, as it is not safe to make a whole disk (or whole partition) image backup while any filesystems on it are mounted read/write, as the data on the disk could change mid-backup, leaving you with an inconsistent backup. Your best bet, disk space permitting, is to use Partition Image to back up your whole disk once in a while, then use something like rdiff-backup to make incremental backups of your important data. Rdiff-backup keeps older versions of files, so it fits in with your requirement of being able to recover older or deleted files, whereas recovering individual files from a whole disk image is a much bigger job. Unless you already have a spare disk of exactly the same size, you may find that when your disk expires you want to replace it with a larger model, probably at a lower price than the old disk. In this case, you only need a backup of the Master Boot Record (MBR) and possibly the partition table.
Create a backup of the MBR with this terminal command --- dd if=/dev/sda of=mbr.img bs=446 count=1 ,,, This copies the first 446 bytes of the disk - the area that contains the MBR - to a file that you should store somewhere safe. You can restore it by switching the if and of (input file and output file) parameters. If you want to back up the primary partition table too, change the bs entry to 512. If you are going to restore to a different size disk, you'll probably want different partition sizes, so the best approach is often to back up only the MBR and create your new partitions from scratch. Now you can back up the contents of each partition with --- tar czlf /path/to/backup.tar.gz /mountpoint ,,, The tar options are: c to create an archive, z to compress it with gzip (use j for bzip2 compression), l to restrict the backup to one filesystem (so when used with / it stops the likes of /home and virtual filesystems like /proc being added) and f to create the archive in the named file. Do --- fdisk -l /dev/sda >partitions.txt ,,, to create a list of the disk layout, then store this along with your backups and the copy of the MBR in a Safe Place. Back to the list ****** Switching distros Q:: I am running Ubuntu 8.10 and Windows XP on a single 80GB hard disk (50GB for Linux, the rest for Windows). I want to remove Ubuntu and install another distro (I've not decided which yet). The problem is that if I use Windows Hardware Management to wipe the Linux partitions it removes Grub and I can't load XP. If I don't do that, I end up with the old swap partition floating about unused, as I don't have the skills with partitioning to deal with it. Is there a really easy method I can use that won't waste the swap space? A:: There are a couple of ways around this. One is to boot from your Windows rescue CD (or partition) and run fixmbr from the rescue console. This restores the bootloader to the default Windows setup, making Grub unnecessary.
Alternatively, you can simply run your new distribution's installer without changing anything. When you get to the partitioning section of the installation, tell it to use your existing root and swap partitions, and your existing /home partition if you have one. Let it reformat the root partition but not /home. It will then reinstall Grub, set up to dual boot your new distro and Windows. Most distros' installers are quite sophisticated in their approach to partitioning, so provided you check what the installer is going to do before you commit anything to disk, this is generally the best way to approach partitioning problems. Most installers use parted to perform the partitioning (the same program used by most Linux graphical partitioning tools), so there's really no advantage in trying to second-guess the system, especially if you're perfectly happy with your existing partition layout. Back to the list ****** Launching the Dolphin file manager Q:: I've noticed what appears to be a glaring omission in your tutorials: graphical file management. In Fedora/Gnome, there is a computer icon on my desktop from which I can open windows, create folders, move files around and so on, similar to older versions of Mac OS. How in the world do I launch the Dolphin file manager in KDE? I've had to do file management from the command line. None of your FAQs or 'New to Linux' articles that I've seen discuss file management in Gnome or KDE. A:: KDE 4 has moved everything around and a lot of people are having to learn new ways to perform the old tasks. The easiest way to add instant file management to KDE 4 is to right-click on the desktop and select Add Widgets. This opens a list of the various widgets you can have on your desktop. Select the Folder View widget to add a new desktop window containing your home directory. Clicking on any of the directories here will launch Dolphin.
If you want to browse removable devices, USB disks, DVDs or suchlike, click on the New Device Notifier icon, next to the main Menu button, to get a list of devices. You can also launch Dolphin by selecting any one of these. Dolphin is still available in the standard menu, under Tools > System Tools. You can drag it from the menu on to the desktop if you prefer a desktop icon to launch it, or you could use the menu editor (available by right-clicking the Menu button) to move Dolphin to a more suitable (or less well hidden) location. The version of Gnome supplied with Mandriva is similar to what you are used to - Gnome is not undergoing the upheavals that KDE users are having to adapt to - so you should be able to use it in exactly the same way as you do in Fedora. The Computer icon is a Fedora addition, but you get the same window if you select Computer from the Places menu on the top menu bar. If you want a Computer icon on the desktop, drag it from the menu and drop it on the desktop. This applies to just about any program, whether you are using Gnome or KDE: just drag it from the menu to create a desktop icon. Back to the list ****** Bash crash Q:: I am trying to install Gentoo Linux. I've been using the handbook, but when I get to Chapter 5 (chroot /mnt/gentoo /bin/bash) everything falls apart - I think it's something to do with /bin/bash. Also, looking at the handbook, doing a Stage 3 install you need to run bootstrap.sh, but the link goes straight to compiling the kernel. Can you help? A:: Lots of things can go wrong when you're trying to run something from within a chroot, from not being able to access the /bin/bash binary itself to not having the supporting libraries. Assuming the installation was successful, you should at least be able to run bash. Not having information on the specific error responses from the chroot command makes it hard to answer this specifically, so it would be helpful to post these to the LXF forums at www.linuxformat.com/forums/.
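In the meantime, you can narrow the problem down yourself: a chroot usually fails either because the target shell is missing under the new root or because its supporting files are. The sketch below rehearses that check on a throwaway tree so it is safe to run anywhere; on a real install you would set root=/mnt/gentoo and list the directories a stage tarball should contain:

```shell
# Check that the pieces a chroot needs exist under the new root.
# Rehearsed on a throwaway tree; use root=/mnt/gentoo on the real system.
root=$(mktemp -d)
mkdir -p "$root/bin"
: > "$root/bin/bash"                 # stand-in for the chroot's shell
for f in bin/bash etc lib; do
    if [ -e "$root/$f" ]; then
        echo "ok: $f"
    else
        echo "MISSING: $f"
    fi
done
rm -rf "$root"
```

If /mnt/gentoo/bin/bash turns out to be missing, the stage tarball was probably never extracted, or was extracted to the wrong place, which would also explain the handbook steps not matching what you see.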
However, it would be a good idea for you to check that the prior installation stages succeeded and that /bin/bash is installed. Back to the list ****** Deleting a whole distro Q:: I'm seriously thinking about trying Ubuntu as a dual boot with XP. Is there a way to erase every part of Linux, including undoing partition changes, if I decide that I want to go back to Windows? A:: If you want to remove all traces of Linux, there are two main steps: removing the Linux partitions and removing the bootloader. The bootloader should be done first, and the easiest way is to boot from your Windows disc into rescue mode then run fixmbr. This will reset the bootloader back to the Windows one. If you don't have a Windows disc, there are other tools for resetting the bootloader. The Ultimate Boot CD (www.ultimatebootcd.com) contains a couple of these. Download the CD image, burn it to a disc and boot from it. Select Filesystem Tools, then Partition Tools, then MBRtool. Press 4 then 9 to reset the boot code. Alternatively, use MBRwork from the same menu - in this case the option you need is "Install Standard MBR Code" and you should also set the first partition active while you are here. Once the bootloader is reset, you can use any partition manager to remove the Linux partitions and resize the Windows one to fill the disk. If you have something like Partition Magic on Windows, use that, or you can use GParted from the Ubuntu install disc. Boot to the Live CD desktop (the first option on the CD's boot menu), and select Administration > Partition Editor from the System menu. Here you can delete all but your Windows partition and then resize that to fill the disk. Hit the Apply button, wait for it to finish and reboot. Back to the list ****** Desktop distros vs server distros Q:: Could you explain the main differences between a desktop and a server distro?
Also, I know you wouldn't do it on a production server, but how would you go about installing a basic GUI on Ubuntu 8.10, as my command line knowledge isn't up to exploring Ubuntu Server yet? A:: The main difference is in the software, and maybe the default security settings. A server uses different types of software from a desktop computer, so it normally doesn't have X or any form of desktop software, although some do include them. Security is more important with a server, because it is being deliberately exposed to the outside world. As a result the packages are usually older, more tried and tested versions, whereas Ubuntu, and other distros, often use very recent software on their desktops. As Ubuntu Server is still Ubuntu with a different set of base packages, you can still install anything you want. Adding a basic desktop is as simple as running --- sudo apt-get install xfce4 ,,, to install the Xfce desktop. If you only want to experiment with various types of server, you could start with the standard desktop edition of your favourite distro and install whichever servers you want to try. There is another alternative: use a web-based administration tool like Webmin (www.webmin.com). This is particularly useful if you want to run a headless server (one without a keyboard or monitor). Once you have the software installed, you can use a browser from another computer (running any operating system) on your network to connect to https://hostname:10000 and perform most of the administrative tasks you need. Back to the list ****** Removing Windows completely Q:: I have inherited an old Gateway Solo laptop, model 2550, running Windows 2000 Professional - it has a CD and floppy drive. From what I've been reading, this is an opportunity to try Linux. I'm too old to be a geek and would like some pointers as to which version or distro to use and where to get it from. Also, everything I've read so far seems to indicate that you have to load Linux to a Windows machine and use both.
As my version of Windows is so old, why can't I wipe the hard disk and install Linux from scratch? Any help would be appreciated. A:: The choice of Linux distro is very personal and the best advice is to try a few and see which one you like best. However, with older hardware, the latest desktops will run slowly, if at all. As long as you avoid anything that uses the KDE or Gnome desktops, you'll be able to run most distros, but for old hardware, especially when RAM is limited, Puppy Linux (www.puppylinux.org) is a good choice. Xubuntu (www.xubuntu.org), a version of Ubuntu with the lightweight Xfce desktop, may also run well on your hardware, if it has enough RAM. Keep an eye on our cover discs for other alternatives. Most months' discs have an unusual, lightweight or otherwise alternative distro on them for you to try out, in addition to the more popular heavyweights. You definitely do not need Windows to run Linux: wiping the drive and starting again is perfectly acceptable (some would say to be encouraged) and most distros' installers have an option to use the whole disk, wiping out anything that was previously installed. Dual-booting is a popular practice, as it allows you to have Windows and Linux on the same computer, choosing which to use at boot time, but of course it's by no means necessary. For example, the computer on which I'm writing this has never had Windows installed on it, nor is it likely to in the future. Back to the list ****** HP scanner permissions Q:: I have decided to upgrade from Fedora 8 to Fedora 10. First I downloaded a live CD and booted that. KDE 4 was a bit of a shock, but overall my first impressions were very good. Then I hit a snag. My HP PSC1410 needs hplip, which was not included on the CD. OK - seems an odd thing to do, but I used Yum to install hplip. Then I could set up and use the printer and scanner, but only as root. This is exactly what happened with my Acer One. 
I'd tried messing with udev rules to change permissions but only managed to break things, so I restored the files. I did get the printer working but not the scanner. That's no big deal with the netbook but it would be a real pain with my main work machine. Has something significant changed between Fedora 8 and Fedora 10? Or is it merely that 'installing' to an ephemeral live demo causes problems that will go away when I do a full install? A:: It sounds like your scanner device is being created with unsuitable permissions. A simple test is to run these two commands, both as root and a normal user. --- sane-find-scanner -q scanimage -L ,,, The first should discover the scanner no matter who runs it, whereas the second can only access the scanner if it has permission. If this one fails as your user, you definitely have a permissions problem. With USB scanners, the device name varies each time you connect it, so you cannot simply run a chown or chmod command from your startup scripts. You'll have to get dirty with udev, but it's not really that hard. First you need to identify your scanner - you can do this with dmesg, which will include something like this: --- usb 2-1: New USB device found, idVendor=04a9, idProduct=221c usb 2-1: New USB device strings: Mfr=1, Product=2, SerialNumber=0 usb 2-1: Product: CanoScan usb 2-1: Manufacturer: Canon ,,, Or you can use lsusb --- Bus 002 Device 002: ID 04a9:221c Canon, Inc. CanoScan LiDE 60 ,,, Or the tool that comes with Sane --- % sane-find-scanner -q found USB scanner (vendor=0x04a9 [Canon], product=0x221c [CanoScan], chip=GL842) at libusb:002:002 ,,, All of these will give you the vendor and product codes of the scanner. These examples are with a Canon scanner, so you should expect to get different values with your HP. Now you create a udev rule in /etc/udev/rules.d/10-scanner.rules. The name must end in .rules and the leading 10 ensures it's processed before the default rules. 
Substitute your own numbers in here --- ATTR{idVendor}=="04a9", ATTR{idProduct}=="221c", GROUP:="scanner", MODE:="0660" ,,, This makes the scanner device node readable and writeable by members of the scanner group. You then need to create the group and add yourself to it, as root, with --- groupadd -r scanner gpasswd -a USERNAME scanner ,,, Alternatively, if you're the only user of the computer, put your own group name in the udev rule. Udev will pick up the changes immediately; you only need to reconnect or power-cycle your scanner to apply the new settings. If you made group changes, you will need to log out of the desktop and back in for those to take effect. Back to the list ****** Upgrading Ubuntu Q:: I have Ubuntu 7.04 installed on my laptop, and I would like to upgrade it to the Ubuntu 8.10 that I have on a DVD. Is that possible? If so, how can I proceed? A:: The in-situ upgrades provided by the Update Manager are only recommended for stepping one release at a time; 7.04 to 8.10 is a jump of three releases. The safest option is to reinstall from the DVD. If you've followed the advice that is often given to use a separate home partition, you can reinstall the operating system without touching your personal data and settings in your home directory. Even if you don't have a separate /home, Ubuntu has added a crafty trick to the 8.10 installer to preserve your home directory when reinstalling. Boot into the live disc's desktop and run the installer. When it reaches the partitioning section, select the Manual option. You can then set up your partitions as before, setting one for / with an ext3 filesystem and the other, smaller, partition as swap. If you have a Windows partition, leave this alone. The important point here is that you do not set the / partition to be formatted. You will be warned that the root partition is not marked for formatting.
If you continue, the system directories, /etc, /bin, /usr and so on, will be deleted prior to installing the new release, but /home will be left alone. As usual with any major operation like an OS upgrade, if you value the contents of your home directory, you should back them up first, but this is only a precaution; your existing installation should be replaced by Ubuntu 8.10 without touching your files. Back to the list ****** Switching from sudo to su Q:: I am dual booting while in transition to Linux-only OSes on my machines, and I found Ubuntu to be the most complete and overall user-friendly distro; however, I do have minor problems. First, releases later than 6.06 have trouble using my laptop screen (I've had similar problems with many distros on my desktop since I switched to an LCD panel). But the main issue here is that I simply don't like using sudo and would like to modify the system so that it has root and normal users, only I can't find how to make the change. A:: It's impossible to give any advice on your display hardware without more information, but it's surprising that an earlier distro works while a later version fails. It's usually the other way round, as support for more hardware is added with each release. Ubuntu 6.06 is almost three years old and is no longer supported. This means that you will not get new versions of software and, most importantly, there will be no security fixes. As vulnerabilities are discovered, your computer will become gradually more insecure and open to exploitation. Getting a root login in a terminal is easy with Ubuntu - run 'sudo bash' to run the shell as root. When you have finished, either run exit or press Ctrl+D to return to your normal user session. While in the root shell, you could set a password for root so that you can use su in future. However, it would seem that you may be approaching this from the wrong direction.
If you don't like one of the core features of how Ubuntu works, something that is part of the distro's philosophy, and it no longer supports your hardware, is it really the best distro for you? Fedora 10 uses the Gnome desktop by default (like Ubuntu), has good hardware support, has a root account and received 10/10 in our recent review. I recommend that you try it, or one of the many alternatives, in order to keep your system up to date and working the way you want. Back to the list ****** Changing drive names Q:: I have a problem with recompiling a 2.6.22 kernel to be used on my Mythbuntu 8.04.1 setup. I use a Hauppauge HVR-1300 TV card and have discovered by trial and error using Debian Etch that all kernels greater than 2.6.22 cause problems with digital reception when tuned to 506 and 562MHz from the Crystal Palace transmitter in the UK. This means that I can't receive all the BBC and Five channels, the audio stutters and the picture is full of large pixels. All the other frequencies are unaffected. Mythbuntu 8.04.1 uses kernel 2.6.24, but when I compiled a 2.6.22 kernel using sources from both kernel.org and Ubuntu, it failed to boot because Ubuntu uses the sda, sdb numbering scheme to define its hard drives, and the recompiled kernel expects the numbering scheme to be hda, hdb and so on. I have tried changing root=/dev/sdb3 to hdb3 in Grub's menu.lst and changing the fstab entries, and got the kernel to boot, but none of the TV applications can get a signal lock! I've also tried using root=UUID=hexnumber to define the root drive, but that also failed. Is it possible to compile the kernel so that it recognises the Ubuntu numbering scheme, or have the Ubuntu devs patched their sources so that it's not possible to use a vanilla kernel? A:: You should be able to use the sd* naming of your hard drives with a 2.6.22 kernel, as this option was added around 2.6.19.
That was when the CONFIG_ATA option was added; you'll find it at Device Drivers > Serial ATA (prod) and Parallel ATA (experimental) drivers. You need to enable this option and pick the driver for your motherboard chipset. You must also disable the ATA/ATAPI/MFM/RLL support (CONFIG_IDE) that handles IDE drives the old way. Start configuring the kernel by copying the .config file from your existing kernel into the source directory and run --- make oldconfig ,,, This will prompt you for settings for any changed options and leave the rest as they were, ensuring maximum compatibility. You can always run make menuconfig or make xconfig after this to tweak the settings. However, what is the most recent kernel you've tried? There were some fixes in the 2.6.28 kernel that related to DVB (as used by digital TV). There have also been changes in the V4L (Video4Linux) part of the kernel that may affect the analogue side of your card. It may also be worth upgrading your Mythbuntu to 8.10, to get the later, hopefully more compatible, versions of everything. You don't need to reinstall to do this. Open Synaptic Package Manager and choose Settings > Repositories. Go to the Updates tab and set "Release Upgrade" to "Normal Releases". Close Synaptic and open the Update Manager, which will tell you "New distribution release 8.10 is available". Click Upgrade and follow the prompts to upgrade your system to the latest 8.10. Back to the list ****** Fixing logrotate problems Q:: I use ClamAV for scanning all email. To make this more efficient, I use the daemon service clamd. Recently I noticed that the clamd log file was getting quite large (I know that there is an option in the config settings to limit the log size in kB or MB, but that's not quite what I want). I decided to drop a job into my /etc/logrotate.d/ directory. --- /var/log/clamd.log { missingok notifempty daily rotate 4 create 0620 clamav clamav } ,,, This rotated the files, producing a clamd.log and clamd.log.1 file.
To my surprise, however, I found that the new clamd.log was empty and that the clamd.log.1 file was still being written to! I found that by restarting the daemon the new log file was used, so I added the following lines to the script --- postrotate /sbin/service clamd restart endscript ,,, Now all works as expected: the log files are rotated and clamd uses the new file. However, I now get the following email each time /etc/cron.daily/logrotate is run: --- Stopping clamd: [ OK ] Starting clamd: [ OK ] ,,, Is it possible either to rotate the files without having to restart the daemon, or silence the output of the restart? A:: Clamd keeps the log file open, so when you rename it, it's still accessing the same file. A file is locked by its inode, so however many times you rename it, the process that has a lock on it will still access the same file. When you stop and restart the daemon, it releases its lock on the file and opens a new one, this time creating a new file with the same name. It may be possible to force this without a restart; it depends on how your distro has set it up. Instead of the restart line in postrotate, try this --- /bin/kill -HUP $(cat /var/run/clamd.pid 2>/dev/null) 2>/dev/null ,,, This reads the process ID of clamd from /var/run/clamd.pid (the location may vary slightly from one distro to the next) and uses that as an argument to kill, which sends a SIGHUP to the process. SIGHUP requests a program to reload its configuration, which should cause it to release and reopen its log file. The two redirections are for the output from cat and that from kill, both being sent to /dev/null to avoid pointless emails from Cron. Cron will mail you when a program produces anything on standard output, so if you have to use the service command, redirect both stdout and stderr to /dev/null.
--- /sbin/service clamd restart >/dev/null 2>&1 ,,, Back to the list ****** Get software from DVDs with APTonCD Q:: In Ubuntu, under Administration > Software Sources > Third-Party Software, it's possible to add packages stored on a CD-ROM - very useful as I can only connect my laptop to the internet occasionally, so I can't install packages directly, but I can still browse on work or public library computers and copy packages. However, I have an older Dell Latitude C840 laptop which has a read-only, but bootable, CD-ROM drive which I can no longer upgrade. I have added an external DVD writer as a USB device, but the laptop's BIOS can't be modified to recognise this as bootable. Although I can't boot from DVDs, I would still like to use the USB DVD drive to add packages to Software Sources and to use APTonCD. The problem is that the "Add CD-ROM" button insists on looking at the default CD-ROM drive - the old limited-capacity read-only boot drive /dev/scd0 (/media/cdrom0). My question is how do I get Software Sources and APTonCD to read from the larger USB DVD drive /dev/scd1 (/media/cdrom1)? Is there a command line way around this? Is there a file in /etc that I need to add an entry to? A:: As far as adding a disc as a third-party repository goes, we found it worked with an external, second drive with Ubuntu 8.10. You need to connect the drive and wait for the disc to be mounted, then try to add it. If the disc structure matches a standard Debian package disc, you don't even need to run Synaptic - a requester pops up asking if you want to use the packages on this disc. Alternatively, you can cheat with APTonCD. When you run APTonCD on the connected computer, it creates an ISO that you would normally write to a CD or DVD. If it's small enough to write to a CD, you have no problems, as you can read that CD in the internal drive. If it's larger than 700MB, write the ISO image to a DVD as a file.
In other words, create a filesystem on the DVD that contains just one file, the ISO image. You can do this with your favourite GUI CD/DVD authoring program, or do it directly with growisofs --- growisofs -Z /dev/dvd -dvd-compat -R aptoncd-nnnnnnnn-CD1.iso ,,, When you put this disc in the target computer and run APTonCD, select ISO image when asked for a source and pick the ISO image on your DVD. This is a standard file chooser, so it doesn't care where the ISO image comes from. It's simple to do and effectively bypasses the single-drive limitation of the CD loading in APTonCD. Make sure the ISO image is below 4GB in size. mkisofs, the program used to generate the filesystem for growisofs, may create corrupted images of files larger than that. Back to the list ****** FAT permissions problem Q:: I decided I would try putting Ubuntu on a USB stick. But the extra area I allocated for my data files is read and execute only. All directories and files are owned by root. The USB stick partition for /dev/sdb1 is mounted on /cdrom and is a W95 FAT32 (LBA). As root in my hard disk system, I copied my data files to this partition. But I cannot change the ownership of the directories to my user account even as root, as I don't have permissions. Why? A:: FAT32 doesn't support file permissions and ownerships. You can give a default owner or group permissions for everything via the options given to the mount command, or in /etc/fstab. The options you can use are umask to affect the permissions and uid/gid to set ownerships. Add umask=000 to the options given to mount, or change the /etc/fstab line to something like --- /dev/sdb1 /cdrom vfat umask=000 0 0 ,,, The umask is subtracted from 666 for files and 777 for directories to give the actual permissions, so all directories will be rwxrwxrwx and all files rw-rw-rw-. Alternatively, you can specify the user or group owner --- /dev/sdb1 /cdrom vfat uid=david 0 0 ,,, UIDs/GIDs can be names or numbers.
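The subtraction above is plain octal arithmetic, which you can sanity-check in any shell before touching /etc/fstab (this sketch touches no real filesystem, it just prints numbers):

```shell
# umask=000: nothing is subtracted, so files get 666 and directories 777
printf '%o\n' $(( 0666 - 0000 ))   # 666 -> rw-rw-rw-
printf '%o\n' $(( 0777 - 0000 ))   # 777 -> rwxrwxrwx
# a more restrictive umask=022 gives the familiar 644/755 instead
printf '%o\n' $(( 0666 - 0022 ))   # 644 -> rw-r--r--
printf '%o\n' $(( 0777 - 0022 ))   # 755 -> rwxr-xr-x
```

The leading 0 tells the shell to treat the numbers as octal, matching the way mount interprets umask.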
Combine options too: --- /dev/sdb1 /cdrom vfat gid=users,umask=002 0 0 ,,, Everything is group writeable and owned by the group users here. None of this metadata is saved to the filesystem; it's just to make a permissionless, ownerless filesystem work with an OS that expects such attributes. Change these settings for a mounted filesystem, like so: --- mount /cdrom -o umask=000,remount ,,, Back to the list ****** Forcing 80 columns in text mode Q:: The text mode of Knoppix sets the display to 132 columns, but I prefer the standard 80x25 layout, which suits a smaller monitor. Is this controlled by the kernel or by a program; and how do I change it, please? A:: Knoppix uses the kernel framebuffer for its video, including the console. Pass vga=normal at the boot prompt if you want a standard console. Go to Knoppix's Help screen for other resolutions, or refer to the table at www.desktop-linux.net/framebuffer.htm for a variety of common resolutions. There is a selection of boot options for the Knoppix distribution, and there are several help screens accessible prior to booting the kernel that list all of the possible arguments you can use. Back to the list ****** Nvidia 8800 drivers Q:: In December 2007 I built a new PC, the best machine I could afford. My thinking was that it would fly with Linux with 2GB of RAM at 1066, 3.4GHz dual core, 4TB of storage and a BFG Nvidia 8800GT graphics card. I then tried Ubuntu, Mandriva and SUSE. None of these could install or work as a live disc. I tried various other installations and basically it all came down to Linux not having a suitable driver for the graphics card. Now, having had to use Vista rather than Linux for the past year, I'd really like to come back to the fold. Has this driver issue been fixed? If so, which distros now include it? A:: Nvidia was a little slower than usual in getting out a Linux driver for the 8800 GT. It's a different GPU from the 8800 GTS and GTX.
But a driver was released quite soon after the card came out. With the six-month release cycle of most distros, plus the package freeze for testing before release, it could be up to nine months before your favourite distro supports new hardware. Support for this card was added in release 169.07 of the Nvidia drivers, although that version had a bug that ran the fan at full speed all the time. The current release is 180.29, which lists the 8800 GT as supported. Any recent distro should have a suitable version of this driver. Mandriva and Ubuntu certainly do. Even if your favourite distro doesn't include it, you can always download the drivers from www.nvidia.com and install them manually. Back to the list ****** Booting Fedora from a USB stick Q:: I've tried creating a bootable USB stick in Fedora using a downloaded Fedora 10 ISO. However, when I try to boot from this USB stick I get the message: "Could not find kernel image: Linux". Any suggestions? A:: Are you using the right ISO image? It has to be a live CD image, not a standard installer disc ISO. The USB stick may have already contained another bootable distro that used Syslinux. If so, the creator tool may have refused to overwrite some files, leaving a mix of files from Fedora and the previous installation. Delete everything from the flash drive before running the Fedora Live USB Creator. Back to the list ****** Fixing ATI video driver problems Q:: My PC is running Windows XP and Ubuntu 8.10 in dual-boot mode, with each OS on its own hard drive. Everything was running fine until I disabled the proprietary driver for my graphics card to try the Linux driver. The problem now is that when I get to the Ubuntu boot screen, the screen goes black and shows the message: --- Out of Range; H.Frequency=12.0kHz V.Frequency=11.0Hz. ,,, The graphics card is an ATI Radeon HD3870. Could you please tell me how I can get back into the system to enable the proprietary driver for the graphics card.
A:: As the X graphics server is trying to send a signal that your monitor cannot handle, it's displaying this message, but the boot is continuing behind it. While you can't see the desktop, you can always get to a text console, where you can fix things. If you don't see a boot menu when you start up, press Esc to display it when the first boot message appears on screen. There should be an option for Ubuntu marked Recovery Mode; select this and you'll see a text menu. You can try the 'xfix' option here, which may fix the server for you, then select Resume to see whether this has worked. (Unless you've set a root password, the root option is of no use here.) To get a shell prompt, let the standard boot proceed as normal, wait until the hard drive light has stopped flashing so you know it has finished, then press Ctrl+Alt+F1. This switches from the desktop to a virtual console, which is at a lower resolution, so your monitor should stop sulking and display it for you. Now you can log in as your normal user and fix things. Did you uninstall the proprietary drivers or just change the settings to use the free ones? If you uninstalled them, you can reinstall from the command line with: --- sudo apt-get install xorg-driver-fglrx ,,, This should also reconfigure the card. If you don't need to install the drivers again, try --- sudo dpkg-reconfigure xserver-xorg ,,, to reconfigure the card instead. Back to the list ****** Rearranging LVM Q:: I run Fedora 9 on my PC, which has two 80GB hard drives configured with LVM [logical volume management]. I would like to create a home partition on one of them, but can't figure out how to do it. GParted doesn't work with logical volumes, so I'm a bit stumped as to what to do - can I remove or disable LVM? A:: Removing or resizing LVM to make room for another filesystem defeats the whole point of using LVM. Is your volume group completely full? (I hope Fedora doesn't set it up like that).
If not, you can add a new logical volume and use that as your home partition. Run vgs in a root terminal to see how much space is available on your volume group; you should see something like this --- % sudo vgs VG #PV #LV #SN Attr VSize VFree eee 2 5 0 wz--n- 17.45G 4.96G ,,, which shows that we have one volume group (called eee) with just under 5GB available (as you can guess from the volume group name, this is on an Eee PC, hence the small numbers). So we can create a 4GB volume called home with the following: --- lvcreate --size 4G --name home eee ,,, that will then be at /dev/eee/home, and I can format and mount it as I see fit. If your volume group is full, you'll need to resize one of the existing logical volumes before you can create a new one. The first step is to reduce the size of the filesystem, so that it is slightly smaller than the final size of the logical volume. Let's say you have a volume called 'myvol' on a volume group called 'myvg' - the lvs command will show you a breakdown of all your volumes and groups. If, for example, you want to resize it to 10GB, you can use the following commands: --- fsck -f /dev/myvg/myvol resize2fs /dev/myvg/myvol 9G lvresize --size 10G /dev/myvg/myvol resize2fs /dev/myvg/myvol ,,, Resizing an ext2 or ext3 filesystem requires that you run fsck on it first. The next command shrinks the filesystem on the logical volume to 9GB, to make sure the volume is never smaller than the filesystem, then we resize the volume to 10GB and the final resize2fs command has no size given, so it expands it to fill the volume size. Now you have free space that you can use to create your home partition. You cannot run these commands on a mounted filesystem, so do it all while booted from a live CD such as Knoppix or System Rescue CD. If you have plenty of free space available, you should create volumes at only the size you need. 
While reducing volume sizes can be a bit of a fiddle, increasing them is very simple and can be done while they are in use, so make them the size you need then leave the rest of the space unallocated until you need it. If you're not comfortable with the command line, you can use Webmin to manipulate filesystem and LVM objects, but the command line approach is faster, and easier once you are used to it. Back to the list ****** Monitor signal out of range Q:: A few years ago, I was able to install Ubuntu Edgy Eft on my HP Pavilion. The installation went without a hitch; even the X Window System worked well. I have since upgraded to Hardy. However, around the time Hardy was released, live CDs stopped working on my HP. I would boot to them and make a selection from the boot menu, but I would never get a graphical interface. The screen would go blank and my HP f1703 monitor would tell me that the signal was 'out of range'. This happens no matter which distro I try with a live CD or bootable USB. At least with a USB, I can just drop in my working xorg.conf file from Hardy and it will work fine. But why is it that all of the newer live CDs fail when X starts? In each case, Linux is able to correctly identify my monitor, but is unable to use it with X (I can get to a terminal). A:: Do the old live CDs still work correctly? If not, it would appear you have a hardware problem. If they still work and only the more recent discs fail, then you are probably a victim of a change to X.org that causes hardware detection of your monitor's specification to fail, or at least report the wrong information. In essence this means the discs are still working, but your monitor doesn't like the resolution they have chosen for it. The 'out of range' message is coming from your monitor, which is what such devices output when they receive a signal outside of their specification. This is to prevent an incorrect setup causing long-term damage to their components. 
All is not lost, though, because most live discs enable you to change the resolution, and many other settings, before the main boot starts. With Ubuntu, press F4 at the boot menu to get display options and choose Safe Graphics Mode. When booting from the OpenSUSE disc, press F3 to get a list of resolutions. Other distros have similar methods of picking a safer resolution that will not upset your monitor. Back to the list ****** Belkin wireless card connection problems Q:: I bought two Belkin F5D7000 PCI wireless cards, one for use with a Dell Dimension E500 and one for use with an RM Nimbus. The RM is running Xubuntu 8.04 and the Dell Dimension is running Xubuntu 8.10. I installed them both using NdisWrapper and am able to connect to my wireless network absolutely fine on the RM. The Dell connected the first time I tried, but once I had rebooted - even though I could see the list of networks within range - I was unable to connect. I disabled WEP security on the router and configured access control instead and was then able to connect again, but only once. Upon reboot, the list of networks was there, but again I was unable to connect. I then disabled access control and again was able to connect once, but never after that. I suspected it might be a problem with the card, so I swapped it for the one in the RM. Again, I could connect once, but when I rebooted I faced the same problem. Both cards work absolutely fine in the other machine. I'm out of ideas now - I can't fathom why it'll only work once! A:: The use of NdisWrapper with the newer Ubuntu appears to be the cause of your problems. NdisWrapper should only be used as a last resort when there's no native driver. Aside from the security implications of running closed-source Windows code as root, which is scary enough in itself, there is the possibility of a clash with the native driver.
That appears to be the case here, as the RTL8180 driver now supports these cards and this driver has been improved of late, increasing the hardware it supports. Xubuntu 8.10 has a more recent set of kernel modules, so it's likely that this module is being automatically loaded when you boot, locking out NdisWrapper. When you go through the manual configuration, you are somehow overriding this, so the NdisWrapper driver works - at least until you reboot. The solution is simple and an improvement on your current setup. Remove NdisWrapper and switch to the in-kernel RTL8180 driver. You should find that removing NdisWrapper and rebooting is enough to enable the kernel driver to work; if not, try loading it manually with --- sudo modprobe -v rtl8180 ,,, If that works, you can have the module loaded automatically at boot by adding rtl8180 to the end of the file /etc/modules, which lists modules to be loaded when you boot. By the way, the RTL8180 driver supports WPA encryption, which is far superior to the security provided by WEP (although a wet paper bag is more secure than WEP). If your router supports it, you should switch over to WPA or WPA2, which is an easy task in Gnome's NetworkManager. Back to the list ****** VirtualBox kernel module - permanent modprobe installation Q:: I am a complete beginner with Linux. I have installed Debian Sarge on my HP 6720s laptop because I hated Vista. I have then added VirtualBox and put Windows XP on it. I managed to get it working, but every time I restart VirtualBox and XP it fails with this message: --- as root enter modprobe vboxdrv ,,, If I do this, VirtualBox works, but I'd like to permanently install the modprobe instruction so that VirtualBox will just work. I have tried entering the instruction in etc/init.rc local and rebooting, but VirtualBox failed as usual. Can you explain, please, how I can install the modprobe command so that it works automatically after I boot up? 
A:: Debian Sarge (3.1) is over four years old and is no longer supported by the developers. That means that fixes for bugs and security vulnerabilities in current software are not applied to Sarge packages, leaving your computer at risk. The latest Debian release is Lenny (5.0), which is a much safer option. Upgrading from such an old release in one step is likely to cause problems, so a re-installation with Lenny is the best starting point. The correct file for running programs at startup or shutdown is /etc/init.d/rc.local - add the command to either the start or stop section, or both, depending on when you want it to run. But this is not the correct way to load modules, which have their own system. The file /etc/modules contains a list of modules to be loaded at boot time. If you add vboxdrv on a line of its own, the module will be loaded much sooner in the boot sequence. However, rc.local is run last, which could explain why loading from it isn't working. If you need to add options when loading a module, create a file in /etc/modprobe.d, named after the module for convenience, and add something like this: --- options vboxdrv force_async_tsc=1 ,,, The modinfo command followed by the module name, run as root, shows the full list of options for any module. Back to the list ****** Booting USB hard drives Q:: I have been reading about creating a USB bootable hard disk, and I was wondering if I can just do the same with an external hard drive? Also I understand that you can test ISO images using VirtualBox, but how do I convert an ISO image to VDI? A:: It is possible to put a distro on a USB hard drive, but the procedure is different. Install the distro as normal, just as if it were an internal hard drive and let the installer put Grub on the drive, then tell the BIOS to boot from the USB hard drive and it should just work.
Unfortunately, life is rarely that simple and there are a couple of problems that can arise, caused by the installer thinking this is the second drive on your system. These can be avoided on a desktop system if you disconnect the internal hard drive(s) - with your computer powered off, of course - then the installer will see only one drive. If this is not an option, you have two potential problems to fix. You may find that the second drive is referred to as sdb in /etc/fstab, causing all mounts to fail when you boot from it. This is because BIOSes tend to put internal drives ahead of external USB ones, except when you boot from the external drive. So the USB drive was sdb when you booted from the CD to install and sda when you booted from the drive itself. This is easily fixed by editing /etc/fstab, either while booted from a live CD or in your normal desktop environment. The other potential problem is that the installer may have tried to install your bootloader to the first drive. Watch out for any options during installation to specify where the bootloader goes, as some distro installers keep this well hidden. Specify sdb here and you should be fine, otherwise run grub-install after you've finished installing to make sure the disk is set up correctly: --- sudo grub-install /dev/sdb ,,, If the automatic installation of Grub doesn't work, it is easy enough to do manually. Assuming your USB disk is /dev/sdb and that the distro is installed on /dev/sdb1, you should run these commands: --- sudo grub root (hd1,0) setup (hd1) quit ,,, The first line takes you into the Grub shell, and the next tells Grub where your distro and its Grub files are installed. Then it writes the bootloader to the disk's MBR and exits the Grub shell with the remaining commands. Remember that Grub numbers from zero, so hd1,0 refers to the second drive, first partition. 
You don't need to convert an ISO image to a VDI file to use it in VirtualBox either - a VDI file is an image of a hard drive, whereas an ISO image contains the contents of a CD or DVD. Set up a CD-ROM drive in VirtualBox and set it to use your ISO image. When you boot the virtual machine, it will behave as though its CD/DVD drive contains the disc held in that ISO image. Back to the list ****** Restoring USB keys Q:: I've tried out Ubuntu 8.10 using a new 8GB USB stick. Ubuntu is a keeper, so I installed it to disk, but how do I get my USB stick back to where it was? A:: Did you set up the USB stick as a live device? If so, it still has a standard FAT filesystem and you only need to delete all the files from it in the usual manner. If you installed to it as if it was a hard disk, you need to restore the default setup of a single FAT32 partition, which you can do with GParted - you can install this through Synaptic. Make sure your USB stick is plugged in but not mounted - right-click its icon and select Unmount Volume - then run GParted. Make sure you pick the correct device (it's probably /dev/sdb, but the size of 8GB should be a dead giveaway), delete the partitions on it and click on Apply. You can now create a new partition using the FAT32 filesystem to fill the device completely. You may also want to remove the bootloader from the device. It won't do any harm leaving it there, unless you reboot with the stick in place, when the computer could try to boot from it instead of your hard disk and fail. Remove the bootloader by running this in a terminal: --- dd if=/dev/zero of=/dev/sdb bs=446 count=1 ,,, This command zeros out the first 446 bytes of data on the stick, which is the area that contains the bootloader. Make sure you specify the right device the first time, as dd doesn't offer you any second chances. You see, it's a low-level tool whose name could just as well stand for delete and destroy. 
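If you're nervous about pointing dd at a device, you can rehearse the same invocation against a throwaway file first and confirm it writes precisely 446 bytes (a sketch; 'mbr-area' is just a scratch file name used here for illustration, not a real device):

```shell
# Write one block of 446 zero bytes to a scratch file, mirroring the
# bs=446 count=1 arguments used on the real device
dd if=/dev/zero of=mbr-area bs=446 count=1 2>/dev/null
wc -c < mbr-area    # 446 - exactly the bootloader area, nothing more
rm mbr-area         # tidy up the scratch file
```

Only once you've checked the byte count, and double-checked the of= device name, should you run it against the real stick.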
Back to the list ****** Fixing modem and PPP problems using WVDial Q:: I am new to Linux and want to move away from Windows completely. I bought a compatible modem recently, but struggled with Ubuntu and Fedora for hours to set it up. Ubuntu did dial out, but then the KPPP daemon crashed. I restarted the computer with my USB copy of Puppy Linux 4.1, clicked three things, typed my login, password and the phone number, then the new modem jumped to life. Including bootup, setup and typing, it took me less than five minutes. Why does this process have to be a nightmare on the 'sophisticated' distros? Please explain the madness! A:: Until recently, PPP (point-to-point protocol) was a fading technology with most people moving from dialup connections to broadband. This is now changing with the advent of mobile broadband services that once again use PPP. The main difficulty with using PPP is interpreting the error messages, as a failed connection often produces only a terse message along the lines of: --- The ppp daemon exited with an error nn ,,, leaving you to look up the meaning of the error code in the man page. This is probably what happened when you thought the daemon had crashed in Ubuntu; it's hard to tell a cryptic, unexpected exit from a crash. We think it's more likely that the connection failed for some reason. Either the default init string put the modem in the wrong state for connecting to your ISP, or the connect script (if there was one) was wrong, or one of a number of other configuration errors caused the failure. Ubuntu and Fedora both use the Gnome desktop's Gnome-ppp, while Puppy Linux uses its own PupDial. It could be Gnome-ppp at the heart of your problems, as sometimes systems can try to be too clever. Most distros use Wvdial to handle the actual dialling, whatever front-end they use, and this includes Puppy Linux.
You can keep a copy of the working config file that Puppy created, which will be saved either as /etc/wvdial.conf or ~/.wvdialrc (that's .wvdialrc in your home directory). Put this somewhere safe and you can use it with any distro. If you look in the file, you'll see one or more diallers defined by a section starting with --- [Dialer NAME] ,,, You can launch Wvdial directly from a desktop icon or menu item if you wish. Set up a launcher in the usual way (right-click on desktop or run menu editor) and add the command --- wvdial NAME ,,, If you want a similar setup to drop the connection, have it run killall wvdial. Back to the list ****** Can Linux rebuild a broken RAID mirror? Q:: I've been reading a book on RAID under Linux and have set up a RAID 1 system with two drives. However, if drive 1 fails and I have to replace it, I'm unsure whether I'll need to partition it first or if that will happen automatically during the reconstruction process. A:: You might not have to repartition the disk, but this depends on the disk configuration with RAID. If there are individual partitions, such as hda1, hdc3 and so on, which are used to create the md devices, the new disk will have to be repartitioned, as the kernel is unable to do this itself. The new partitions must be at least large enough to store the RAID image, which can be particularly difficult when two disks that are apparently the same size have a different number of cylinders. Using identical disks helps - however, as we all know to our cost, manufacturers usually have batches of disks that just fail, so using disks from the same batch isn't encouraged. If you want an easy solution where you can just slap in a new disk and have it build the RAID image on it automatically, take a look at LVM, which is a high-level partition system. You can create logical volumes (which are similar to partitions) out of the md device, which means the partitions themselves are not involved in the RAID array.
All md0 will consist of is /dev/hda and /dev/hdc, and rebuilding is easy assuming the disks are the same. It's also worth remembering that most hardware RAID systems ignore partitions entirely and use the whole disk that is put into the system, but you can partition this device up into smaller partitions which are distributed across the array. Software RAID devices can be either whole disks or partitions, and the structure needs to be built on any devices added to the array. Back to the list ****** LVM migration Q:: I want to set up LVM on a server that contains two USB disks. This LVM will contain 1.5TB across the two disks, and once it is full, I want to migrate the disks to a different server and retrieve their content. How complex is this? A:: This is a pretty straightforward task, provided you attend to a couple of details before you start copying data. First, when you create the partitions that will form the physical volumes, make sure you set the partition types to Linux LVM (8E). You can do this in fdisk, cfdisk (our preference for manipulating partitions) or one of the graphical partition manipulation programs such as GParted or QtParted. Secondly, when you create the volume group, give it a unique name. Don't be tempted to use a standard default such as VolGroup00, because LVM cannot handle two volume groups with the same name. If you transferred these disks to a machine with an existing volume group of the same name, at most one of the volume groups would show up, and probably neither would. We usually relate the volume group name to the hostname of the computer running it, so there's no chance of a clash when using it on a different computer. Provided you take these precautions when setting up, the logical volumes should be recognised by the new computer. If that computer already uses LVM, connecting up the discs and booting it up should be all that is needed, because the new volumes will be detected when the existing LVM is initialised.
If the target computer is not already using LVM, or you plug in the USB drives without rebooting, make it recognise them with: --- vgchange -a y ,,, when running as root. Provided you have your distro's LVM package installed, your logical volumes should now appear in /dev/volumegroupname/ for you to mount. Back to the list ****** Restoring user accounts Q:: I used to have Ubuntu 8.10 installed in /dev/sda1 and /home on /dev/sda2 with four user accounts. Then I decided to download Ubuntu 9.04 and replace 8.10 on /dev/sda1. After the installation had finished, I wanted to restore the other three user accounts, but Ubuntu told me that these accounts were there already (their home folders are), so I cannot add them again. Can you please tell me how I can reinstate these three user accounts? A:: This appears to be a limitation of Ubuntu's user administration tool, which refuses to create a user if the given home directory already exists. You could rename the old user directory, recreate the user, switch over the user and group IDs (UID and GID), delete the new user directory and then rename the old directory back - but that's a hell of a lot of work to go through just to use the 'easy' GUI option, and you'd have to repeat this arduous process for each user. It is much easier to do this with the command line-based useradd tool, which doesn't create a user directory by default. First, you'll need to find the UID and GID previously used by your users, which you can do with ls. Try running the following: --- sudo ls -l /home ,,, in order to produce some results that look roughly like this: --- drwxr-xr-x 2 1001 1001 4096 2009-04-16 11:07 fred drwxr-xr-x 2 1002 1002 4096 2009-04-16 15:23 jim drwxr-xr-x 30 nelz nelz 4096 2009-04-16 11:02 nelz ,,, The third and fourth columns contain the UID and GID.
You'll see that for the user called nelz the output shows the user and group names, but the other two - the users that haven't yet been set up - have numeric values assigned to them. Now you can recreate these users with the useradd command line tool like this:
---
sudo useradd --home /home/fred --uid 1001 --user-group fred
sudo passwd fred
,,,
This creates the user fred, who has a home directory of /home/fred and a UID of 1001, then asks you for a password for the user. The --user-group option creates an individual group for this user, which is the way Ubuntu handles users. If you want all users to be members of the same group (say, users), use this command instead: --- sudo useradd --home /home/fred --uid 1001 --gid users fred ,,, Repeat this for each user, then check with ls -l that each home directory is owned by the correct group and user. If this isn't the case, you can correct it with: --- chown -R fred: ~fred ,,, Although this process should have solved your immediate problem, there's no guarantee that you won't run into similar issues next time you upgrade. But rather than shy away from updating your distro, it's worth making a copy of /etc/passwd. This means that the next time you update, you can copy the relevant lines from this file to its counterpart in the new installation, although you'll still need to set passwords for the users. An alternative to the command line for this task is Webmin. It isn't in the standard Ubuntu repositories, but you can download and install a Deb package from www.webmin.com/download.html. Then point your browser at https://localhost:10000, tell it to accept the security certificate (it's your own program running locally, so a self-signed certificate is fine) and finally go to System > Users and Groups to set your users up. Back to the list ****** Samsung monitor blank screen Q:: I have a Samsung 2343BWX monitor that won't work with Ubuntu 8.10.
When I boot up with the live CD, the process stops, I get a blank screen and two of the three lights on the top of the keyboard blink on and off. Nothing further happens. The lights that blink are the one above the A and the light to its right with the arrow symbol under it. Initially, I thought the problem might be the video card, so I bought an Nvidia GeForce 6200, but switching the card didn't solve the problem. Then I tried the monitor on a few other computers I use (they have OpenSUSE 11.1, Mandriva 2009 and Fedora 10 installed on them), but it seems to work OK. This monitor also works when I boot Ubuntu 8.04. What do you think the problem is? A:: It's highly unlikely that this is anything to do with your monitor, Jim. In fact, we expect you'd see the same symptoms even if you switched your monitor for another. The clue is in the flashing Caps Lock and Scroll Lock keyboard LEDs, which indicate a kernel panic. This is when the kernel encounters an error it cannot deal with - it's similar to the Blue Screen of Death found in a certain proprietary operating system, but usually more informative. In other words, when the kernel hits an error it cannot resolve, it prints debug information to the screen and stops working. The flashing LEDs are an extra indicator should you be unable to see the screen output, as in your situation. The reason you cannot see the error message is that Ubuntu, in common with most other distros, hides the boot messages behind a splash screen. To remove the splash screen and see the error message, boot from the CD, pressing F6 to display your options when you see the boot menu. Delete the words quiet and splash from here and press Enter to continue the boot. Your computer will still hang with the kernel panic, but this time you'll be able to see what it says. Now plug the error message into your favourite search engine to find a possible solution. 
Unless you are using some extremely obscure hardware, it's likely someone else has already encountered this problem and the solution is out there. There are a couple of standard things to try before you hit your search engine, though. First, unplug all unnecessary USB devices. Fairly obviously, you'll still need your keyboard and mouse connected, but any scanners, printers, audio devices and external storage devices can go for now. This doesn't mean you won't be able to use them, only that they're causing a problem for the kernel on the live CD. Once you have the system installed, you can reconnect your devices. Another common problem is a buggy APIC (Advanced Programmable Interrupt Controller) implementation on your motherboard. You see, some manufacturers are content to do the minimum required to get your board to work with Windows, rather than build it to the specifications. This issue can be avoided by disabling the kernel's APIC support. Press F6 to get the boot options as you did before, remove quiet and splash so you can see what's going on and replace them with noapic. If this works, you can add the noapic option to your boot menu during installation to use it every time, but you should also check whether there's a BIOS update available for your motherboard. Even brand-new motherboards have updates available by the time they reach the shops and these often fix APIC-related problems. Back to the list ****** Automatic Subversion root parameter Q:: I have an old box that's running Ubuntu Server 8.04 and has a working installation of Subversion. It currently has a single repository under /svn, but I'm experimenting with having two separate repositories with a common parent directory. I created the repositories with svnadmin and they seem to work fine if I use svn checkout file:///full/path/to/repo/, but if I use svn checkout svn:///relative/path/to/checkout then I need to tell svnserve where the root of the two repositories lies.
After some research online, I've found that I can do that by passing the root as a parameter to svnserve - how can I do that when svnserve is started automatically during the boot process? A:: Are the two repositories under the same directory? If so, you can use the --root option with svnserve and include the repository name in the request. For example, if you are using /full/path/to/repo1 and /full/path/to/repo2, start svnserve with --- svnserve --root /full/path/to ,,, Then you can access them as svn://hostname/repo1/path/to/file or svn://hostname/repo2/path/to/other/file. The alternative is to use a separate server for each repository, running them on different ports or hostnames to differentiate between the two. For example, the following:
---
svnserve --daemon --root /full/path/to/repo1
svnserve --daemon --listen-port=3691 --root /full/path/to/repo2
,,,
will run one server on the standard port (3690) and one on port 3691. Requests to the second server would use a URI of the form svn://hostname:3691/path/to/file. This works, but it's clunky for users of the second repository since they have to include the port number each time. You could use different hostnames instead by editing /etc/hosts to make the second hostname an alias of the first. The lines in /etc/hosts look like: --- ip-address hostname aliases ,,, where you can have any number of aliases (each one should be separated by a space). Add your second hostname as an alias here, then do the same on the other computers on your network (unless you run a local DNS server, in which case you need only make the changes there). After that's done, you can start the two instances of svnserve with
---
svnserve --daemon --listen-host=myhost --root /full/path/to/repo1
svnserve --daemon --listen-host=newhost --root /full/path/to/repo2
,,,
Now you can use the appropriate hostname for each repository without all the unpleasant business of mucking about with port numbers.
Also note that Ubuntu doesn't include an init script for svnserve, so you should put these commands in /etc/rc.local before the final line containing exit 0. Back to the list ****** LIRC forwarding Q:: Is there any software you know of that can receive an LIRC signal and then loop it back for resending, or would I have to write something that can do this myself? What I want to do is receive an LIRC input (from a remote) and resend this signal (via a transmitter). I currently have LIRC configured with options to receive and transmit, but I need a way to link the receiver to the transmitter. A:: For what you're attempting, it sounds like you need to make use of irexec, which will execute any command you require. This includes using irsend to send a command through your infrared transmitter. The commands to execute for each button pressed are defined in your .lircrc file, and they look something like this:
---
begin
prog = irexec
button = Record
repeat = 3
config = irsend SEND_ONCE remote CODE
end
,,,
The prog name must be set to irexec, while button = is the identifier of the button the command will respond to. To see the identifier each particular button on your remote sends, run irw in a terminal, press a few buttons and watch the output for each one. Meanwhile, the repeat = option prevents LIRC from calling the program multiple times if you hold down the button longer than the duration of one transmission. A value of three means that three consecutive signals will be treated as one. Finally, the config option contains the command you're going to run - we're using irsend in this case, but it could just as well be anything else. You must have irexec running for this to work. If you run it from a terminal, it'll default to loading its configuration from ~/.lircrc. To use another config file, you'll need to provide it on the command line. Now if you run irexec at startup, it won't be able to find your home directory, so it must be given the path to the config file.
You should also use the --daemon option in this case to run the program in the background. So, then, the complete command to run it would be: --- irexec --daemon /etc/lircrc ,,, assuming you keep the configuration in /etc/lircrc. Because irexec isn't being run from a user's shell in this case, it won't have access to your full environment, so we recommend you use the full path for any commands you specify in the .lircrc file. Back to the list ****** Acer Aspire One WiFi with Ubuntu Netbook Remix Q:: As a newbie to Linux, I've been reading your series on the Aspire One with great interest. I now want to install Ubuntu Netbook Remix (UNR), but I'm nervous that I won't be able to persuade the distro to find the Aspire One's built-in Wi-Fi card, which would be catastrophic for me. Could you also suggest where I can get a book or magazine that explains how to manage Linux in terminal and configure the operating system? A:: UNR can be run from a bootable USB stick that contains a live version of the distro. Live distros, whether on a USB stick or a CD, run directly and make no changes to the software already installed on the computer. Thus one of the (many) benefits of the system is that you can try the distro at no risk. If your wireless works with the live version, it will also work when you install it. If not, you have lost nothing but a few minutes of your time. When installing, depending on the amount of space you have available, you may be able to install UNR alongside your existing installation and choose which one to use at boot time. This is a great way of trying out distros, but it's also used by those who want to use Linux but need to retain Windows for some tasks. It will be offered as an option at the partitioning stage of the installation process. As for books and magazines, the best magazine is Linux Format, of course! For a good reference, try Rute. This can be bought as a book, or read online (http://rute.2038bug.com/index.html.gz). 
Keep an eye on our book reviews for any new gems that come along. Back to the list ****** Securely erasing hard drives Q:: Recently, I read the Which? article on erasing hard drives: www.theregister.co.uk/2009/01/08/hard_drive_hammer_destruction. I know that the Which? advice, which recommends taking a hammer to old drives, is a shade excessive, but what would your advice be? I'd normally use Darik's Boot And Nuke (DBAN), but lately I've started using dd to zero out a drive. The Great Zero Challenge information at http://16systems.com/zero.php explicitly says that using dd to zero the drive is pretty much a guarantee that the data isn't recoverable. After I found that out, I used dd to zero a drive and then ran the Ontrack disk recovery program for over two weeks without finding a single byte of data. Do you know if it's better to use /dev/urandom rather than /dev/zero for the input, though? A:: It's true that deleting files doesn't adequately remove their contents, but Which? magazine is also correct in stating the only way to guarantee that data won't be recovered is to destroy the drive. After all, TestDisk can easily recover the contents of deleted files and even blanking with zeros is considered insufficient defence against forensic data recovery equipment. That said, smashing your hard drive with a hammer isn't a good idea, however satisfying it may feel. Even if you don't get hit by flying shrapnel, destroying a drive in this manner is quite environmentally unsound. Ultimately, the lengths you need to go to will depend on the value of your data. No one's going to waste their time and use expensive forensic equipment to recover your holiday snaps (regardless of how good they are), but confidential company information is another matter. Again, though, the data isn't usually sensitive enough to justify physically shredding the drive. 
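Incidentally, the single-pass zeroing you describe is just dd reading from /dev/zero. It's demonstrated here on a small scratch file, because pointing of= at a real device such as /dev/sdX destroys its contents irreversibly:

```shell
# Overwrite 10MiB of scratch space with zeros; for a whole drive you would
# use of=/dev/sdX (destructive!) and omit the count= limit.
dd if=/dev/zero of=scratch.img bs=1M count=10
# /dev/urandom is used in exactly the same way, but is considerably slower:
dd if=/dev/urandom of=scratch2.img bs=1M count=10
```

Whether random data offers any practical advantage over zeros for a single pass is debatable; if in doubt, DBAN's multi-pass wipes are the more cautious option.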
There's another aspect to consider here as well: if you're storing personal data about others - be they co-workers, minors, dependents or any other person - you are liable for the security of that information under the Data Protection Act. You've mentioned Darik's Boot And Nuke and this is an excellent tool for completely and thoroughly erasing all data from a drive - far more so than a simple overwriting of zeros. Running DBAN over a hard disk may take longer, but that's just a sign of how thorough it is. You also get the confidence boost of trusting the details of your data erasure to people who understand what's really needed. So, it really depends on how important this process is to you and how much time you feel it deserves. A simple zeroing will stop roughly 99% of attempts to recover data - ask yourself, is that enough for you? If you're storing sensitive data, you should also consider using filesystem encryption in the first place. Not only does it protect your data in the event of the theft or loss of your computer (desktop computers get stolen as well as laptops, so they're worth encrypting too), but it adds an extra layer to the recovery process after erasure. This is due to the fact that any recovery attempt will only find encrypted data, leaving yet more work to do to get at the real data. You only need to make it more effort to access your data than the data is worth to effectively protect yourself and others, so the 100% certainty of a destroyed drive is unnecessary unless you work for the security services or a crime syndicate. Back to the list ****** Keeping passwords on a USB key Q:: I've been toying with the idea of keeping a file full of passwords and other valuable data on a USB key that I've encrypted with GnuPG. The idea is to be able to plug it into any PC and then decrypt and read the file. 
I would want to put GPG itself on to the key so that I can always do this without having to install GPG on the PC and thus without leaving any trace of my data in the machine's records. How can I do this, and is it even wise? A:: You'd need to include your private key on the USB stick too, so the only thing protecting your data would be the passphrase. In that case, you're effectively using a single passphrase and nothing else to protect your data. Whether this is an acceptable risk to take can only be your decision - but if you do decide to go ahead, make this password unique and secure. It's not impossible, though - you can build a statically linked GPG executable with all the libraries it needs included in the one program file. Download the GnuPG source code from www.gnupg.org, unpack it and cd to the directory this created. After that, run the following commands:
---
export CFLAGS="-static"
./configure --enable-static
make
,,,
You will need a compiler (GCC) and autotools installed to build from source (on Ubuntu, install the build-essential package). If the ./configure stage throws up errors about missing programs or libraries but they are installed, check your package manager for a -dev or -devel version of the relevant package. These contain header files that are not required to use the software, but are needed when you want to compile other software that uses it. You may well find that having these headers will solve the problem. There is no need to run make install since you don't want to install this version to your path. Instead your new GPG2 program is in the g10 directory, so check that it's been statically linked with: --- ldd g10/gpg2 ,,, which should tell you this isn't a dynamic executable. Now you can copy it to your USB stick, but once again we advise you use a strong passphrase on your key. Since this means carrying your private key on an easily lost or stolen device, you should also generate a separate pair of keys just for this purpose.
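If all you want to protect is a single file of passwords, it's also worth knowing about GPG's symmetric mode (-c), which needs no key pair at all - only the passphrase protects the file. Here's a sketch, with the file name and passphrase as placeholders, and the system gpg standing in for the ./gpg2 binary you just built:

```shell
# Encrypt the file with a passphrase alone - no private key to carry around.
echo 'mybank: s3cret' > passwords.txt
gpg --batch --yes --pinentry-mode loopback --passphrase 'correct horse' \
    -c passwords.txt               # writes passwords.txt.gpg
shred -u passwords.txt             # securely remove the plaintext copy
# Decrypt straight to the terminal, leaving nothing on disk:
gpg --batch --quiet --pinentry-mode loopback --passphrase 'correct horse' \
    -d passwords.txt.gpg
```

The --batch and --pinentry-mode options are only needed when scripting; run interactively, gpg simply prompts for the passphrase.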
Or you could try an alternative, such as Ccrypt (http://ccrypt.sourceforge.net). This uses 256-bit AES to encrypt and decrypt files, so it's secure enough, and it's linked to the libraries you will find on any Linux box. It contains the commands ccencrypt and ccdecrypt, both of which do what they say on the tin, along with ccat, which displays the contents of an encrypted file without writing an unencrypted version to disk. So, you could pull out just your banking password with --- ccat passwords.cpt | grep mybank ,,, leaving no trace of the unencrypted information on the device or, more importantly, the host computer. Back to the list ****** Acer Aspire One - programs too big for the screen Q:: I'm just starting to use an Aspire One with Ubuntu 8.10. However, I'm having a few problems with the screen shape. I was completely unable to configure Evolution mail because the second page's Continue button was off the bottom of the screen. After that, I gave up and installed Thunderbird. But that's not all - this correspondence was prompted by my inability to use Inkscape on the machine. Currently, the controls at the top and bottom of the screen just seem to move the page up or down. With all that in mind, how can I get programs' windows to match up to the available screen space? A:: Unfortunately, some programs are just not netbook-friendly, having default or minimum window sizes that won't fit onto a smaller screen. Thankfully, there are a number of steps you can take to alleviate this problem. Sometimes it's just the default window size that's too big and pressing the Maximise window button will rearrange the window to fit. The next step is to reduce the size of any menus or task bars on your screen, making more room for the program itself. You can do a similar job in individual programs - smaller fonts and icons will mean the program needs smaller windows to run effectively.
Some programs also have a full-screen mode, where the window's borders and gadgets are dispensed with entirely. We generally run Firefox like this on netbooks to give more space to the web content. Of course, all of this assumes that you're able to reach the various controls to change your settings, which takes us back to your original problem. Windows can be dragged partially off the screen, so you can drag a window upwards to get at the buttons at the bottom. You can't do this by dragging the title bar, because that's what you need off the screen and out of reach. Instead, hold down the Alt key and click and drag anywhere in the window's interior. With a combination of these methods, you should be able to get most programs to fit on the Aspire One's display. Back to the list ****** Ubuntu 8.04 upgrade problems - Gnome and Audacity Q:: After upgrading Ubuntu to 8.04 with kernel linux-2.6.24-23-generic and Gnome 2.22.3, I have experienced a few small problems on my Dell Inspiron 6400 laptop. The Gnome file browser gets stuck when I try to display my home directory. It works fine on all other directories on the filesystem. Audacity doesn't let me play recordings. I have tried every possible configuration of output device. When performing the upgrade to 8.04 I got a warning about the package manager being corrupt - something to do with not being able to configure hplip. A:: How long have you left the file browser to display the home directory? If you have file previews enabled it may be taking its time generating previews for one or more files, especially if any of them are on a network share and not the local machine. To test this, either turn off previews completely in the File Browser preferences by setting each file type to Never, or at least make sure they are set to Local Files Only with a small maximum size. If this makes a difference, create a temporary directory and move files and directories from your home directory into it until the delays stop. 
Then you'll know which file or files are causing the problem and be able to take appropriate action. Can you play audio files from other programs, or save an audio file from Audacity and then play it from another program? You need to try this to determine whether the problem lies with Audacity or your audio playback in general. Whatever program fails to play the audio, try running it from a terminal. That won't make it play the file, but you will be able to read any error messages from it. The error from the package manager probably came from a corrupt package, or one with an invalid signature. Refreshing the package list in Synaptic and installing any updated packages should clear this. Back to the list ****** Hiding an Apache directory listing Q:: I've just set up a new server with my corporate website running on Apache. I have a folder called /var/www/html/downloads with lots of files, which my customers or staff can download through various links in the website. If I type www.secretdomain.com/downloads into my browser it gives me a listing of all the files - not necessarily something I want. Is it possible to limit people so that they cannot list the whole directory? I looked at using .htaccess to limit this type of access but I don't particularly want to base access on passwords either. A:: The ability to show files in a directory as a series of links is a feature of Apache known as indexes. You can turn this on or off with the Options directive. Search your httpd.conf file for an Options line that includes the Indexes keyword, for example: --- Options FollowSymLinks Indexes ,,, Remove the word Indexes, save and restart Apache. This can be set in several places, either for an individual virtual host or globally, so be sure to search for all occurrences.
Besides now having Ubuntu's flexibility and updates at my fingertips, I am also in love with the NBR interface, which works surprisingly well after some minor tweaks (including a patched kernel). However, I have read some worrying posts about NBR (and other alternate distros) not being very easy on the Eee PC's built-in solid state disk. People suggest all kinds of precautions to be taken when installing such a distro, including: 1 - Never choose to use a journalling file system on the SSD partitions. 2 - Never use a swap partition on the SSD. 3 - Always edit your new installation's fstab to mount the SSD partitions with noatime. 4 - Never log messages or errors to the SSD. There are other suggestions too, concerning the behaviour of certain applications, like Firefox's cache. So, can you tell me if any (or all) of these suggestions make sense? What seems weird to me is that a default install of NBR on the 900 gives me an ext3 filesystem, a swap partition and mounts with relatime. Don't they care about SSD lifespan at Canonical, or is the whole issue just rubbish? A:: These fears are all based on the fact that the SSD is essentially flash memory, which has a limited number of write cycles. But it is not used in the same way as flash memory is used in memory cards and USB sticks. The problem with flash memory is that each cell can only handle a limited number of write operations; USB devices and memory cards are normally rated for between 100,000 and a million writes, depending on the quality of the individual device. This sounds like a lot, but some parts of a disk are written to very frequently, like FAT tables and filesystem journals. A damaged journal can be worked around, but a corrupt file allocation table on a FAT filesystem is close to terminal, and this is the filesystem used by removable flash devices. SSDs are different for a number of reasons.
They generally use higher-quality components, netbooks don't use FAT filesystems and, most importantly, SSDs incorporate wear levelling. This means that the load is spread across the 'disk' and writes are not repeatedly made to the same sector. I have been using journalled filesystems (ext3 and xfs) and swap on my Eee PC 900 for over a year. It is used every day and runs Gentoo testing so packages are updated almost daily. Combine that with extensive email and web usage (mailer caches are written to as much as browser caches) and the only disk errors I've had were on the SD card, the only component that doesn't use wear levelling (and was of unknown make and quality). You will need a swap partition (or suitable file) if you want to hibernate your laptop. Bear in mind that the Eee comes with a two-year warranty, and Asus is hardly going to include technology likely to fail in that timeframe. Canonical isn't in the business of breaking hardware (nor is any other distro maker), although I would question the use of atime when mounting a filesystem, but that is for performance reasons more than reliability - I use noatime on hard disks too. Apart from that, I would be happy to use the NBR setup. Back to the list ****** Looking for a remote backup server Q:: I am looking at setting up a remote backup server to back up various Windows desktops and servers. I am looking for a backup package for the Linux Server as well as the client software to manage the backup from the client PCs. A:: I suggest you take a look at BackupPC (http://backuppc.sourceforge.net). It is an entirely server-based backup program. By that I mean that no special software is required on the client PCs and all backups are controlled from and initiated by the server. This means that you do not need to rely on users remembering to start backups, or set up Cron tasks on each computer. BackupPC has a web-based front-end, so you can browse and restore complete backups or individual files or directories.
Restoration can be directly back to the filesystem from which the files came, or the files to be restored can be downloaded as a tarball or zip archive. The web interface provides an overview of all the computers it manages, with details of all their backups. You even get email alerts if any backup fails, although BackupPC will normally continue when it can with no need for intervention (this occasionally happens to me when I shut down my laptop while it is being backed up). BackupPC communicates with the client computers using Samba, SSH, NFS or rsync, so you don't need any special software on the client computers - just ensure that BackupPC has permission to read shares or connect via SSH. As you are backing up a number of possibly similar computers, you'll be pleased to know that BackupPC saves on both disk space and backup time by storing multiple copies of files as hard links. If the same file is on 10 computers, the server stores only one copy. Back to the list ****** BT Voyager 105 USB modem not working Q:: I am using a computer with an AMD 1.6GHz processor. I have installed Ubuntu 8.10 in a separate partition from Windows XP. My internet provider is BT and the USB modem is a BT Voyager 105. This is where my problem starts, as BT tell me they don't recognise Ubuntu and cannot suggest how I can access the net. I downloaded some gobbledygook from the Ubuntu forums but that did not work. Surely there is a simpler, step-by-step way to get the internet in Ubuntu. By the way, my Lexmark printer would not work so I had to splash out on an HP 4100 printer, so don't ask me to buy too much software as I am a pensioner and at present skint! A:: I know you don't want to spend any money, but a few pounds spent on a decent modem will save you so much trouble. The best thing you can say about the free modems given away by ISPs is that they are reasonable value for money! A decent modem connects by Ethernet, not USB, and needs no special drivers or software on the computer. 
The standard networking stack and a web browser, which everything has by default, will do just fine. Most standalone modems also include a router and firewall, making your system more secure whichever operating system it uses. Because a proper modem handles the network protocols internally instead of offloading the work to the computer's CPU with a driver, both the network connection and your computer in general are more responsive. Such a modem can be bought for around £20. If you stick with the Voyager USB modem, you have to accept that it will work less efficiently and that you will have some work to get it set up. This is true on Windows too, but you do get automated driver installation there. To do this with Ubuntu 8.10, you need to download two files. As your connection is not working under Linux, do this in Windows or on a different computer. Go to http://eciadsl.flashtux.org/download.php and fetch the Ubuntu package, currently eciadsl-usermode_0.12-1_i386.deb. Then go to http://archive.ubuntu.com/ubuntu/pool/universe/r/rp-pppoe and get the latest i386 Deb file, which is currently pppoe_3.8-3_i386.deb. The version number in these files may change if updates were released after we wrote this. Copy these files to a USB stick and transfer it to your Ubuntu computer. After making sure the modem isn't plugged in, install each of the packages by double-clicking them, the pppoe file first. Now you have to configure the modem with the settings for your ISP (this is the part that Windows users get done for them). In your terminal, run "sudo eciadsl-config-tk" to open the graphical configuration program (if it fails to run use eciadsl-config-text). At the top of the window, set your username and password to those given to you by BT and set VPI and VCI to 0 and 38 respectively. Pick the correct modem from the list, set the PPP mode to VCM_RFC2364, click on the Remove Dabusb button (you can safely ignore any messages it gives you) followed by Create Config. 
This should open a window containing a number of messages ending in OK. Your modem is now installed and set up, and you can plug in the modem and connect to the internet by running --- eciadsl-start ,,, You can attach this command to a desktop icon by right-clicking on the desktop and selecting Create Launcher. Put eciadsl-start (or eciadsl-stop) in both the name and command boxes. You can also have the command run automatically when the desktop opens by going to System > Preferences > Sessions, clicking on the Add button and entering the program name. Back to the list ****** Creating a CD duplicator Q:: I'm trying to build a CD duplicator from an old Sempron-based machine with four IDE CD-RW drives and a SATA hard disk. I've installed Ubuntu 9.04 and tried GnomeBaker and K3b but they don't seem to support multiple CD burning. We need to produce about 200 CDs for various projects at the school where I work and I wondered if you had any ideas that a newbie could cope with. A:: I'd use a shell script for this, after creating the ISO image with whichever program you prefer. A script is far better suited to such a repetitive task than having to keep pressing GUI buttons. You could use something like this
---
#!/bin/sh
DEVICES="/dev/cdrom0 /dev/cdrom1 /dev/cdrom2 /dev/cdrom3"
for DEV in $DEVICES; do
  cdrecord -eject dev=$DEV "$1" &
done
,,,
Type this into a text editor, such as Gedit, list your CD writer devices in the DEVICES line, save the script somewhere in your path - say, /usr/local/bin/multiburn.sh or ~/bin/multiburn.sh - and make it executable. If you save it in the bin folder in your home directory, you can use your file manager to set the permissions: right-click on the file and select Properties.
Otherwise, set the permissions in a terminal with --- sudo chmod +x /usr/local/bin/multiburn.sh ,,, Assuming you have made the ISO image with K3b or whichever mastering program you prefer, put a blank CD in each of the drives and run the script, giving it the path to the ISO image. --- multiburn.sh /path/to/image.iso ,,, When all four discs have ejected, replace them and run the command again. You could modify the script to run again after a keypress. There are only two keypresses to rerun it (Up and Enter) so it's not a time saver, but it may be a good learning exercise if you're so minded. Back to the list ****** X Window System not working in Fedora 10 Q:: Whatever happened to the concept of portability in the Linux community? I know that different distributions will work on a variety of unique machines. Well, shouldn't the same concept be applied to the distribution itself when considering upgrades? I have an Acer Aspire 64-bit dual processor machine with a GeForce 8200 Graphics card, and everything works in Fedora 9. I must admit that I have to poke around with the screen resolution a bit to get what I want, but that's minor. When I try to upgrade to Fedora 10 64-bit, X Window System fails to start - there is no GUI, just a login prompt, and startx doesn't do anything. Why would X work in Fedora 9 but not Fedora 10? Shouldn't the same stuff work in the same operating system even if it's an upgrade? A:: The answer to your question is "yes" but I suspect you were hoping for a little more. I doubt that startx does nothing; it may not start X, but it will report to the terminal what has gone wrong. You can find further information in the log file at /var/log/Xorg.0.log. Errors are marked with "(EE)", and you can extract them from the general information in this file with --- grep EE /var/log/Xorg.0.log ,,, To some extent, distros are at the mercy of the various software developers. In this case, the changes to X.org are the most likely culprit. 
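As a self-contained illustration of what that grep gives you - the log lines below are invented placeholders, not output from the reader's machine - you can see how the (EE) marker separates errors from information (II) and warnings (WW):

```shell
# Create a small sample in the format of a real Xorg.0.log (contents invented):
cat > /tmp/xorg-sample.log <<'EOF'
(II) LoadModule: "glx"
(WW) Falling back to old probe method for vesa
(EE) NVIDIA(0): Failed to initialize the NVIDIA kernel module!
(EE) Screen(s) found, but none have a usable configuration.
EOF
# Extract only the error lines, as you would with the real /var/log/Xorg.0.log:
grep EE /tmp/xorg-sample.log
```

The (WW) warning lines are often harmless, but they are worth a read if the (EE) lines alone don't explain the failure.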
The X.org team are trying to move away from the cumbersome and sometimes cryptic xorg.conf file to a system based entirely on auto-detection, the idea being that any hardware you have should just work, which yours clearly does not. The quickest solution, if you still have your old Fedora 9 installation, or a backup of the important files, is to copy /etc/X11/xorg.conf from Fedora 9 to your Fedora 10 setup. X.org respects anything in this file in preference to auto-detected settings, so it should start working again. If you don't have an xorg.conf file, you can create one for your hardware by running the following as root: ---
yum install system-config-display
system-config-display
,,, This presents the same display setup GUI as you saw during installation of Fedora 9, and creates /etc/X11/xorg.conf based on your choices. If it still fails - because it cannot detect suitable settings for your hardware, for example - you can specify your choices on the command line, for example --- system-config-display --set-resolution=1024x768 --set-depth=24 --set-driver=nvidia ,,, Run system-config-display --help to see all the options. Back to the list ****** Problems with Bash script to install packages Q:: I'm writing a Bash script that, among other things, installs assorted packages. I'm using Zenity to make it look sexy in a GUI sort of way and running it on 64-bit Debian 5.0. Unfortunately, if one of the packages I want to install is available on the Lenny installation DVD, the thing just sits there: under the hood, it has said 'please insert DVD', but Zenity doesn't see that and doesn't relay that. So to the user, it just looks as if the thing has hung. The relevant bit of code is as follows: ---
(
echo "33"
aptitude -y install gcc
echo "66"
aptitude -y install sysstat
echo "99"
) | zenity --auto-close --progress --text="Fetching software..." --title="Installing Software" --width 300
,,, A number of questions arise.
First, is there a way to make Zenity 'event aware' so that it displays messages like 'please insert your installation DVD'? I don't think there is, but I could well be wrong. If so, that would be the ideal solution and you don't have to read this question to the end (just let me know the relevant piece of magic!). Alternatively, I could write code that says 'if they've got the installation DVD in their /etc/apt/sources.list, comment it out'. And 'if they haven't got http://volatile.debian.org/debian-volatile in their sources, add it'. Unfortunately, to analyse the textual contents of files, I can feel heavy doses of awk and sed coming on, neither of which makes me feel particularly clever, inspired or confident! In fact, I'm clueless about both, despite reading the man pages and Google articles about them until I'm cross-eyed. Would you therefore be able to suggest some Bash code that tests for the existence of the two sources in the sources.list file, removes the DVD one if present, adds the online one if it's missing and doesn't add it if it is already present?! A:: Is the "please insert" message being sent to standard error rather than standard output? If so, you should be able to capture it by adding 2>&1 to your aptitude call. Then you should look at the output for the relevant string before passing it to Zenity. You are quite correct in your assumption that modifying text files from the command line needs sed or awk - sed in this case - but these are tools well worth learning. They can be daunting at first, but are indispensable once you get the hang of them. To comment out any CD/DVD source from sources.list, you would use this command --- sed -i 's/^deb cdrom/# deb cdrom/' /etc/apt/sources.list ,,, To break this down into manageable chunks, the -i means 'replace the existing file with the modified contents'; the s means 'substitute anything matching the first string with the second'.
So this replaces any instance of deb cdrom at the start of a line with # deb cdrom. This is the same way that Synaptic modifies the file, so the source can be re-enabled in there should the need arise. Adding a source line is as simple as echoing it to the file, but messing with someone's sources file is not particularly friendly, so you could check whether you need to make the modifications with grep, back up the file before changing it and restore the backup when you've finished. ---
cp /etc/apt/sources.list /etc/apt/sources.list.$$
sed -i 's/^deb cdrom/# deb cdrom/' /etc/apt/sources.list
grep -q "^deb http://volatile.debian.org/debian-volatile" /etc/apt/sources.list || echo "repository line" >>/etc/apt/sources.list
# do your stuff here
if diff -q /etc/apt/sources.list /etc/apt/sources.list.$$
then rm /etc/apt/sources.list.$$
else mv /etc/apt/sources.list.$$ /etc/apt/sources.list
fi
,,, $$ is the current process number, giving a unique(ish) name for the backup file. The grep command means your repository is only added if missing, and the final part checks whether sources.list has changed: if not, the now-redundant backup is deleted; if it has, the original file is restored. Back to the list ****** Removing Firefox extension updates Q:: Is it possible to remove certain Firefox add-on updates? I ask this as I prefer Yahoo toolbar v1.5.2 over 1.6, as there are many options missing from the menu bar in the later version. All my other updates for my add-ons are fine. I use Firefox 3.0.10 on Ubuntu 9.04. A:: As with many Firefox questions, the answer lies deep in about:config. If you type about:config into Firefox's URL bar, you get a huge list of settings that should normally only be touched by wizards. Searching this list reveals an entry called extensions.update.enabled, which controls whether Firefox should check for updates to the installed extensions (extensions.update.interval sets the frequency of this check and defaults to 86,400 seconds, or one day to us humans).
Setting extensions.update.enabled to false (the default is true) turns off update checks for all extensions. This is not quite what you need, but there are even more hidden, secret options in about:config. You can override this setting for individual extensions using keys of the form extensions.{GUID}.update.enabled, where GUID is the Global Unique ID of the extension: either disable checking for updates of that extension by setting it to false, or leave the checks enabled but extend the period between checks. This means that you will still be notified when new versions come out, just not every day. For example, setting extensions.{GUID}.update.interval to 2592000 will cause this extension to be checked every 30 days, while everything else is still checked daily. When adding one of these settings, you need to create the correct type of setting, which will be a Boolean for update.enabled and an integer for update.interval. So how do you find the GUID for the extension you want to affect? Each extension's data is kept in a directory named after the GUID in the extensions directory of your Firefox configuration directory. The name of the configuration directory is contained in ~/.mozilla/firefox/profiles.ini in an entry that looks like this: --- Path=default.xyz ,,, Now you can find the GUID of an extension from its name with: --- grep -r "extension name" ~/.mozilla/firefox/default.xyz/extensions ,,, Create a suitable setting in about:config and your extension will either never update or only bother you infrequently. It is possible to get the GUID for an extension with a single command, should you ever need to script this, although you normally only need this occasionally and the above procedure is fine. For maximum geek points, use this one-liner.
--- grep -ril "extension name" .mozilla/firefox/$(awk -F = '/^Path=/ {print $2}' .mozilla/firefox/profiles.ini)/extensions/*/install.rdf | sed 's/.*{\(.*\)}.*/\1/' ,,, Who said the command line was cryptic? Back to the list ****** Best distros for VMware Q:: I want to do video capture, watch TV, audio and video editing. I definitely want to learn to program and if possible get into development. I also want a good gaming system and I want to improve my Linux sysadmin skills. It seems that there isn't one distro that will do it all well, so I'm thinking that running virtual machines will do the trick. Of all the flavours of Linux, which is the best for virtualisation with VMware; does it run well on a laptop and are there some distros that don't work well as a virtual machine? A:: Most distros on VMware play nicely together. However, there are some limitations to a virtual machine caused by the fact that the guest operating system does not have direct access to your real hardware, only the virtual hardware. As a result, gaming is often out of the question as the virtual graphics cards have no 3D acceleration. The same goes for video capture and watching TV, as your capture card is not available to the guest OS - although capture is possible from a camera with a USB connection. However, any general-purpose distro can do all of what you ask. Some may be better suited for certain tasks in their vanilla configuration, but all are the same Linux underneath. Since you want to learn more about administering Linux, I recommend you pick one distro and look at making it fit your needs, which brings us back to the "Which distro?" question. If you're serious about learning Linux administration, I would recommend Gentoo (www.gentoo.org) since it takes a 'kit car' approach to building a system. There is more work involved and you must be prepared to read the instructions before you do anything (as opposed to the usual approach of reading them after you have broken something).
Debian is also a good choice. It has a huge range of software available, is popular among server admins and has a large community and plenty of online help. Any distro can be used as a starting point to get where you want, but your interest in system administration makes these two the top choices. Back to the list ****** Sluggish wireless under Mepis Q:: I installed Mepis 8 on my ThinkPad T60 and wireless is extremely slow. The ThinkPad has an Atheros AR5212 802.11abg wireless card using the ath_pci driver. Mepis 7 and Mepis 8 use the same home partition, and I can't understand why Mepis 7 loads pages in under two seconds while Mepis 8 takes more than a minute to load the same pages using the same laptop with the same wireless card and graphics card. Other distros take under two seconds to load the same pages. I disabled IPv6 on Mepis 8 and it's still very slow, while IPv6 is not disabled on Mepis 7 and it is still very fast. I installed Mepis 8 on my tower, which uses wired Ethernet, and everything works just as quickly as Mepis 7. A:: First, check the output from dmesg and the system and network log files for error or warning messages by running these commands as root. ---
dmesg | grep ath
grep ath /var/log/messages
cat /var/log/mepis-network
,,, If you find an error, plug it into your favourite search engine or ask about it on the Linux Format forums. It may be that the madwifi driver supplied with Mepis 8 is not working well with your card, which would account for the wired connection on your other computer being fine. However, you do not have to use madwifi with Mepis 8. This has a recent enough kernel to include the ath5k driver, which is a native, in-kernel driver for Atheros wireless cards. By default, this driver is blacklisted in Mepis 8 so that madwifi is used as standard.
To reverse this situation, edit the file /etc/modprobe.d/madwifi, as root, to comment out the line blacklisting ath5k (by adding a # at the start of the line) and uncomment all the lines blacklisting ath and wlan modules. The resulting file should look like this ---
#blacklist ath5k
## madwifi (non-free)
blacklist ath_hal
blacklist ath_pci
blacklist ath_rate_amrr
blacklist ath_rate_onoe
blacklist ath_rate_sample
blacklist wlan
blacklist wlan_acl
blacklist wlan_ccmp
blacklist wlan_scan_ap
blacklist wlan_scan_sta
blacklist wlan_tkip
blacklist wlan_wep
blacklist wlan_xauth
,,, Save the file, reboot and you'll be using the newer ath5k driver instead of madwifi. Back to the list ****** Unable to empty waste basket in Evolution Q:: I am running SUSE 11.1, and for the last three days I have been unable to clear my waste basket in Evolution; it is slowly expanding. I have tried to delete and re-install the application but this doesn't solve the problem. My other problem is that since installing SUSE 11.1 it has always demanded a password before shutting down. How do I shut this off? A:: Are you using POP or IMAP to collect your mail? If you're using IMAP, Evolution may create two waste baskets. One is a local folder, while the other is the .Trash folder on the mail server. If you are having problems emptying the folder on the server you should take this up with your ISP or whoever provides your mailboxes. Before you do this, open the Debug logs window from the Help menu to see what messages you get back from the server. You should also try running Evolution from a terminal (just type its name) instead of the GUI, and then you get to see any error messages that it throws up. If the problem is with your local waste basket, or you are using POP for your mail, the likely cause is incorrect permissions, which should show up if you try to empty the waste basket when running Evolution from a terminal.
Fix this by running this command in a root terminal, replacing USER with your username. --- chown USER: ~USER/.evolution ,,, Another possibility is a corrupt file. You could try deleting the files from the Trash folder manually, while Evolution is not running. You'll find the folder somewhere like ~/.evolution/mail/accountname/servername/folders/Trash. Remove all the files starting with a number, and if you get no errors, restart Evolution. If you do get errors, you may have a problem with the filesystem and should boot from a live CD and run fsck over your home partition. The password request will be from either the KDE Wallet or the Gnome Password Manager, depending on which desktop you are using. These will ask for a password when a program you run wants a password from them. If you open the wallet or keyring-manager, you can see which programs it holds passwords for, and by a process of elimination you should be able to work out what is causing the problem. It is unusual for a program to require a password at shutdown, so this is something you should investigate as soon as you can. Is it asking for the root password? This would be because the shutdown command can only be run as root. Go into the KDE settings and select Login Manager under the Advanced tab. On the Shutdown tab, make sure that local shutdown is enabled for everybody. Back to the list ****** Managing domain trust between Windows and Linux Q:: At our company we are trying to migrate our intranet from a Windows/IIS solution to Apache 2 on SUSE Linux Enterprise Server 9. The main problem is integration with the rest of our network, which runs Windows (Win2K on the servers and WinXP Pro on the clients). Some of the intranet apps we have use NTLM mechanisms to get the user credentials and to provide personalised information as well as various degrees of access to different areas of the intranet. 
We were looking at replacing these by using apache2-mod_ntlm which - even though it is not directly available for SLES9 - we could compile and load. We found information on the web (www.hannesschmidt.de/drupal/node/12) that apache2-mod_ntlm would work fine in a situation in which you use it in one domain only. Unfortunately that's not the case with us - we have a main tree and sub-domains, in which case trust relationships between domains are used to provide authentication and access. Our Win2K servers are using Active Directory to authenticate users. Can you confirm that the information we found in the online article is correct? What sort of thing would you recommend? Would a minimal subset of Samba help? A:: Samba can be used to replicate information from an Active Directory server, which can then provide information to the mod_ntlm system under Apache. It looks like other people have had problems with multiple-domain mod_ntlm, judging by the open bugs on the SourceForge project page - mod_ntlm doesn't appear to be that well maintained any more. It's worth remembering that Active Directory implements LDAP, so mod_ldap can be used to access the directory information. There's more on this at www.wlug.org.nz/ActiveDirectoryAuthenticationNotes, which suggests some success in using Active Directory with Apache. Back to the list ****** Dell and 54g Q:: I have a Dell laptop running FC2 with a Netgear WG511 card. I've downloaded and installed the Prism54 driver and firmware, and worked through the Linux Unwired book, setting all the config files, etc. I've also worked through Negus' Linux Bible for FC2, and trawled the net looking for help. The card is set not to come up on boot but on PCMCIA services start. On startup everything works, the eth0 interface comes up, the green light comes on the card and the yellow light flashes encouragingly every now and then.
When I start up Mozilla the icon to the bottom right tells me I'm online, but I still can't open any pages - I get an error message saying "not available, check spelling". I'm very happy with my Linux system and am now a Microsoft-free zone, but this is driving me nuts! A:: Linux support for 54g wifi networking is still somewhat nascent but, that said, your card should be supported by some of the current driver schemes. (Beware, though: wifi manufacturers are well known for changing chipsets even on the same model of card.) If indeed the lights are coming on, particularly the data light, it would seem that at least the card has been detected and has been prompted to communicate, but what you need to do is fathom what exactly the problem is before you attempt to solve it. There are various possibilities - the driver may not be working, the driver may be working but be misconfigured, or the card is configured but the network isn't set up properly, and so on. The wireless tools are very useful for determining if the card is working or not. Just type: --- iwconfig ,,, in a terminal. This will list all the known networking devices and either report "no wireless extensions" or give a list of wireless parameters for the card. If your device isn't listed here, then either the correct module isn't loaded (presumably you are using the ndiswrapper method) or it isn't working - probably a driver problem. If the card is listed, you need to check that the parameters are correct. Try looking at the man page for iwconfig for the settings. At the very least you will need to have the ESSID set up, and any encryption key configured to connect to the access point. Back to the list ****** Mounting Mac HFS disks Q:: I'm running Mac OS X (10.4 and 10.5) and Linux (Ubuntu and Fedora) systems at my house and I have external hard drives.
My problems arise when backing up a system or files; there is plenty of information on Windows/Linux or Windows/Mac filesystem compatibility, but not on Linux/Mac options. I have a few 1TB external hard drives and need to store files over the 4GB limit of FAT32. I tried using NTFS, which works, but MacOS will start giving me an 'error 36' message. Linux used to have support for Apple filesystems but I do not see any support for Mac OS Extended (journalled) filesystems. Also Linux used to have terrible problems with Mac filesystem after the 30th mount and would then consider the drive dirty and unusable. Any recommendations on what format I should use for external drives between OSes? A:: The Linux kernel does support the newer HFS+ MacOS filesystem, with some limitations. The main one is that it does not support writing to a journalled filesystem. This used to be a potential problem with earlier kernels, because it was still possible to mount a journalled filesystem read/write, write to it and cause inconsistencies between the filesystem and its journal. Newer kernels, since around 2.6.16, will mount a journalled HFS+ filesystem read-only, avoiding any potential corruption. If you want to be able to share an external drive between Linux and Mac OS X, you should disable the filesystem's journal in OS X by selecting the device, pressing the Option key and clicking File > Disable Journalling. Another potential problem is that Apple tries to be clever on some devices and wrap the HFS+ filesystem inside an HFS filesystem that displays a warning if you try to use the device with an older Mac OS that does not support HFS+. This can confuse the mount command's filesystem detection, causing it to recognise the drive as HFS and show you the warning message. The answer is to explicitly specify the filesystem when you mount it. 
--- mount -t hfsplus /dev/sdb1 /mnt/external ,,, You could also try adding the hfs module to your distro's modules blacklist file, which you will find in /etc (the exact name can vary). This may cause the system to recognise it only as an HFS+ disk, although we did not have an opportunity to test this. You would do no harm by trying this, as the drive will either mount correctly or not at all - the hfs module is not used to access HFS+ filesystems, so blacklisting it cannot affect the drive contents. The other, admittedly minor, drawback is that HFS+ filesystem tools are not easily available for Linux (it is possible to recompile the Darwin tools) so formatting and similar tasks are best done on a Mac. The warnings after 30 mounts without checking the drive should also disappear if you routinely use and check it on a Mac. There is another option, using the ext2 filesystem. There is a Mac driver for ext2 available from http://sourceforge.net/projects/ext2fsx. We haven't tried it, so test it on a drive that doesn't hold the only copy of something important, but using ext2 as a common filesystem would avoid the problems associated with HFS+, and there is a Windows ext2 driver should you ever need to share the drives with a Windows system. Back to the list ****** Expanding the spellchecking lists used by Ispell or Aspell Q:: I wish to expand the spellchecking lists used by Ispell or Aspell into a plain list of words in alphabetical order. I want to know how the compression algorithms work, but I can't seem to find any information explaining this. If I knew this I could write a program to decompress the lists - or perhaps use an existing program if there is one included with Ispell or Aspell. A:: This can be done with both Ispell and Aspell, although the task is much easier with Aspell, where you use the Aspell program itself, like this: --- aspell dump master ,,, to output the master word list, one word per line.
If you want them in alphabetical order, pipe the output through sort --- aspell dump master | sort ,,, The output from aspell dump can be used as the input for aspell create, so you can dump the word list to a file, edit it then feed it back to aspell create to build a new dictionary. --- aspell --lang=LANG create master ./mydict <mywordlist ,,, Although it is possible to manipulate Ispell dictionaries in a broadly similar manner, the process is more involved, so I'd stick with Aspell for this unless you have a particular reason for using Ispell. Back to the list ****** Output video from Linux box to a TV Q:: I'm trying to output video from a Linux box to a TV for my son. I've tried Dreamlinux, Macpuppy, GeeXbox Live and Ubuntu 9.04. Only Ubuntu worked, and that needed the Nvidia driver, and that meant the resolution dropped to 800x600. Even then I had to tweak xorg.conf to get it working. Two of my laptops output to composite and one to S-Video. The Ubuntu success was on an old Toshiba Satellite PIII 1.6GHz output to composite. Is there an easier way? I do hope so, as it's embarrassing to hear my son say 'but Windows just does it'. A:: Using the composite or S-Video output from a graphics card usually requires that you enable this, in xorg.conf, the BIOS, or both. The BIOS option, if present, sets which video outputs are available, while the xorg.conf setting determines which is actually used. With a composite or S-Video output, you often have little or no control over the resolution beyond an option to use NTSC or PAL values. As composite is output only, it has no way of getting information from the display in the way that X.org does with monitors, so you will need an xorg.conf to set this up, unless the BIOS has a setting to force all output to the composite output (the VIA Mini-ITX boards do this, with output being sent to TV, VGA or both, according to the BIOS setting). You define the settings in a Device section of xorg.conf. 
The settings can be different for each driver (see the man page of your particular driver for details) but for an Nvidia card you would use: ---
Section "Device"
Identifier "TV-Composite"
Driver "nvidia"
Option "TVStandard" "PAL-I"
Option "UseDisplayDevice" "TV"
Option "TVOutFormat" "COMPOSITE"
Screen 1
EndSection
,,, The TVStandard setting is country-specific; PAL-I is the correct value for the UK. The Nvidia driver documentation lists the various options. The TVOutFormat setting should be set to SVIDEO if appropriate. You should also add 'Screen 0' to your existing Device section. Now you have two screens, one on the VGA output and one on the TV output. Add a new Screen section to xorg.conf: ---
Section "Screen"
Device "TV-Composite"
Identifier "TVscreen"
Monitor "TV-monitor"
EndSection
,,, You also need to create a new Monitor section for your TV, calling it TV-monitor (or whatever you used in the second Screen section). Now edit the ServerLayout section to contain ---
Screen 0 "Screen0" # or whatever your original screen is called
Screen 1 "TVscreen" "Clone"
,,, This will duplicate the primary screen on to the TV screen, so everything should be repeated on the TV too. With an Nvidia card, much of this work can be done with the nvidia-settings program, although you will need to run it as root if you want it to be able to rewrite your xorg.conf for you. As always, you should make a backup copy of xorg.conf before you do any of the above. Back to the list ****** Custom Ubuntu install images with SquashFS Q:: I'm creating my own distribution based on Ubuntu Intrepid (8.10) and am getting along nicely except for one little problem. I've removed Evolution using apt-get, but there is still a launcher for it in the top panel. That launcher doesn't do anything, but it is still present and I'd like to get rid of it. Also, I would like to add a launcher for a Bash terminal to the top panel.
(Maybe I can just change the configuration of the Evolution launcher to make it launch a terminal instead.) Finally, I would like to get rid of the 'Install' launcher on the desktop. I've managed to do everything else I want with this distro except manipulate the launchers. How would I do that? A:: Have you created your distro by installing Intrepid to a hard drive, modifying it and then recreating the compressed filesystem for installation? If so, it's only a matter of making the changes to the installed system before recreating the installer package. You have already done some of this when you used apt-get to remove files. To edit the top menu, right-click on the Ubuntu icon at the left end of it and select Edit Menus. Add and remove what you want, then proceed as before. Remove the desktop icons by deleting the corresponding .desktop file from the Desktop directory. If you are manipulating the ISO image at a lower level, by extracting the squash filesystem, unpacking it and working with a chroot, you can still manipulate the menus. When you have unpacked the filesystem with the following command, remove the desktop files relating to the menu entries you no longer need: --- sudo unsquashfs -d tempdir casper/filesystem.squashfs ,,, For Evolution, the file is /usr/share/applications/evolution-mail.desktop. You can remove other files here, and add more of your own. The format of a desktop file is documented at www.freedesktop.org/wiki/Howto_desktop_files, or you could add a menu entry for your installed system, then copy the desktop file that this creates to the unpacked squashfs files. The Install icon is created by /usr/share/applications/ubiquity-gtkui.desktop, so remove that file too. Finally, re-create the squash filesystem with --- sudo mksquashfs tempdir filesystem.squashfs ,,, Now rebuild the ISO image as before, but using the new filesystem.squashfs file.
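On the subject of adding a launcher for a Bash terminal, a desktop file is just a short text file. A minimal sketch follows - the Name, Icon and Exec values here are only examples, so point Exec at whichever terminal emulator you prefer:

```ini
[Desktop Entry]
Type=Application
Name=Terminal
Comment=Open a Bash terminal
Exec=gnome-terminal
Icon=utilities-terminal
Terminal=false
Categories=GNOME;GTK;System;TerminalEmulator;
```

Drop a file like this into /usr/share/applications in the unpacked filesystem and it will appear in the menus of the installed system.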
Back to the list ****** Allow normal user accounts to install packages Q:: I sysadmin a number of machines with various distros installed on them. For network security reasons, we tend not to give users privileged accounts (neither root nor sudo). However, this can be a pain if they require a new package - they have to request that IT support (me and my small team) install it for them. Is there a way, perhaps an option in dpkg or similar, that would allow a non-privileged user to install a package (RPM or Deb) into their local userspace tree without requiring rights elevation? A:: While it is possible to tell RPM to install elsewhere by using the --prefix argument, this is not without problems. Firstly, you still need access to the RPM database, which requires root access. You could get around that by setting up a wrapper script that runs rpm --prefix=/home/user/local --otheroptions and adding an entry to /etc/sudoers to allow that script to be run with sudo. That way users can run your script but not RPM directly. However, this leads to a more serious problem, as many packages are not relocatable and have to be installed to the path that was given when they were compiled. At best, RPM will refuse to use --prefix with such packages. Similar problems arise with Debs or any package containing files that have been compiled to run from a particular location and, more importantly, look for libraries and configuration files in particular locations. One option is for your users to build the programs they want from source. This sounds like more work than it is, as you could have an installation script that handles the vast majority of cases. All it needs to do is unpack the tarball, cd to the working directory, run configure with the correct options, then run make and make install.
Something like this:
---
#!/bin/sh
tar xf "$1"
cd $(tar tf "$1" 2>/dev/null | head -n 1)
./configure --prefix $HOME/local && make && make install
,,,
The cd line may look odd, but it's just listing the tarball to grab the first item, which is the directory the rest of the tarball unpacks into. Then it changes to this directory and runs the usual autotools commands, but with the installation prefix set to $HOME/local. All the files will be installed in directories under here, so you'll need to add ~/local/bin to the users' paths. Alternatively, use --prefix=$HOME, which will install to directories like bin, lib and share in the user's home directory. If you only allow users to install certain programs, you could create your own RPM and Deb packages, which would involve getting the source packages, changing the spec files to install to a non-privileged directory and rebuilding. Unfortunately, however you approach this, there will be some administrative and support work involved. Alternatively, if you are comfortable with users installing to the system directories, you could give them restricted sudo access - say to allow them to install software but not remove it. Back to the list ****** Dial-up internet in Ubuntu 9.04 with Gnome-PPP Q:: I'm trying to get on the internet with a dial-up connection in Ubuntu 9.04. After searching the forums, I tried to install Gnome-PPP and Wvdial using the package manager, but this asks for an additional six files to be downloaded from the Ubuntu archive. So - catch 22. No files, no internet: no internet, no files! I've downloaded the files manually using another computer, but where should they be located in Ubuntu so that they will be found by the package manager? A:: There are two ways to deal with this, one of which you've partially done already. Instead of trying to identify and download the files manually, select the packages you need to install in Synaptic and then select Generate Package Download Script from the File menu. 
This creates a shell script that you can run on another Linux computer. Save the script as, say, gnome-ppp.sh on a USB stick, plug that stick into another computer, cd to the stick's directory in a terminal and run: --- sh gnome-ppp.sh ,,, This will download all the files you need on to the stick. If you then plug it back into the original computer, select Add Downloaded Packages from Synaptic's File menu and give it the directory containing the files on the USB stick, Synaptic will install them all for you. This is the only option when you have no other internet connection on the computer, but your dial-up connection will work from the basic Ubuntu installation because it includes the standard PPP command line software - Gnome-PPP is just a GUI front-end to this. Open a terminal and run: --- sudo pppconfig ,,, to see a configuration program that you can use to add an internet connection. Use the cursor keys to move between choices, Space to make a selection and Tab to move to the OK and Cancel buttons. Also ensure you select the option to write files on the last screen. Once the program's created a profile with a provider name, you can connect and disconnect using the pon and poff commands, like so: --- sudo pon myisp ,,, It's not as pretty as Gnome-PPP, but it works in exactly the same way and you can use it to install Gnome-PPP if you'd prefer to use the GUI on a day-to-day basis. Back to the list ****** Disable automount for specific USB drives Q:: Is there any way that I can disable automount for a specified external USB drive on Ubuntu 9.04? I'd still like all the other USB drives on the system to mount normally. A:: This used to be done by creating udev rules to give that particular device a specific name and then telling your volume manager or automounter to ignore it. Now, however, it's a single-stage process of telling HAL to ignore the device. This means a device node will still be created by udev, but HAL won't do anything with this information. 
This has the advantage of being a single solution that works with all systems, whichever volume manager they use. However, if you thought udev rules files were a pain, you've never tried to edit HAL policy files. The files are kept in /etc/hal/fdi/policy and you can either add to the existing preferences.fdi file or create a separate file for this task. The easiest way to differentiate your drive from all others is to use its UUID (Universally Unique Identifier). If you've looked in an Ubuntu /etc/fstab file, you'll have seen these before. First, we have to find the HAL details for your drive, so plug it in and see which device name it receives - you can run the mount command in a terminal and look at the last line. If you have a single internal drive, it'll probably be /dev/sdb1. Now run this command: --- hal-find-by-property --key block.device --string /dev/sdb1 ,,, This returns the HAL identifiers of all devices containing the key block.device with the value /dev/sdb1. There really should be only one, something like /org/freedesktop/Hal/devices/volume_uuid_623C_6219_0. Now plug this identifier into lshal, which will show all the HAL properties of the device. The one we're interested in is volume.uuid, so run the following: --- lshal --show /org/freedesktop/Hal/devices/volume_uuid_623C_6219_0 | grep volume.uuid ,,, and put the resulting value into an FDI file in /etc/hal/fdi/policy:
---
<?xml version="1.0" encoding="UTF-8"?>
<deviceinfo version="0.2">
  <device>
    <match key="volume.uuid" string="623C_6219">
      <merge key="volume.ignore" type="bool">true</merge>
    </match>
  </device>
</deviceinfo>
,,,
If you're adding to an existing preferences.fdi file, just use the part from <device> to </device> and put it before </deviceinfo>. This looks for a match on volume.uuid against the string given. If it's positive, it adds (merges) a new key of volume.ignore, set to true. 
In plain English that equates to: if the UUID has the value you want to bypass, ignore this volume so that nothing from it is then passed on to the volume manager. You need to restart HAL to have it pick up the new configuration. You could do this from the Service Manager, but since you already have the terminal open, just type: --- sudo /etc/init.d/hal restart ,,, Hopefully, when DeviceKit replaces HAL, it will make this whole process far less cryptic. Back to the list ****** Automatically rip DVDs in Linux Q:: I'm currently using DVD::Rip to back up a load of DVDs on to my server at home. This takes about 2 hours 20 minutes for each DVD, but it's the setup that I find annoying. Is there a way to run a script to do it automatically? A:: DVD::Rip can do this in a roundabout way. First, go through the process for the first DVD until you're ready to hit Transcode. Then select Show Transcode Commands from the Debug menu. Right-click and choose Select All, then copy and paste this into your preferred text editor. The two sections you want are the Rip and Transcode commands. Each should be a single line, so adjust them if they wrap over several in your editor. Then add eject as the third command, save this file as, say, ripdvd.sh and run it with: --- sh ripdvd.sh ,,, You may need to tweak this for each DVD, but it's quicker than trawling through the GUI. However, you may find AcidRip better suited to your needs. This has similar options to DVD::Rip, although it uses Mencoder to do the transcoding. Most importantly, it has an option to queue conversions and can export that queue as a shell script. Provided that you have the spare disk space, it's more efficient to rip each of the tracks you want direct to disk in plain DVD format (MPEG2) and then use AcidRip to batch convert all of those files. The easiest way to rip a single track from a DVD is with MPlayer, and: --- mplayer dvd://1 -dumpstream -dumpfile firsttrack.mpg ,,, will rip the first track of a DVD to a file. 
Repeat this for each track that you wish to transcode. Then select each track in turn in AcidRip, set the parameters - which shouldn't need changing after the first DVD - and press the Queue button. When you've added them, you can either press the Start button and leave the computer to process them overnight, or press Export to write the commands to a script at ~/acidrip.sh. Run this to process the files. Back to the list ****** Generate OpenOffice.org quick help guides Q:: I suffer from short term memory loss and would like to be able to print out some OpenOffice.org instructions, such as how to replace row and column headings with labels. Is there an easy way to do this and make pages from just the information I need? A:: The OpenOffice.org documentation is available through the Help menu. While there are ways to get the documentation in other formats - which we'll mention shortly - this may be a good option for you. It has a search facility, which you can't replicate with printed documentation, and individual pages can be printed. What's probably more important for your needs is that you can bookmark individual pages or sections and thus keep a list of the pages you need to reference. The Bookmarks tab effectively becomes your personal table of contents. There's also documentation in other formats available from the OpenOffice.org site, http://documentation.openoffice.org, although it isn't as complete as the built-in help. You will find PDFs and wiki pages covering the various parts of the suite, along with a number of FAQs and tutorials, although the most recent Calc PDF is an incomplete guide for OpenOffice.org 2.0. The most flexible alternative is to build your own PDF manual using the Collections facility of the wiki described at http://wiki.services.openoffice.org/wiki/Help:Collections. Go to http://wiki.services.openoffice.org/wiki/Documentation and find the documents you want to read. 
When reading a page, there's a Create A Book section in the left-hand menu. Click Add Wiki Page to add the current page to your collection. Once you've added at least one page, a Show Collection link appears, which takes you to a page where you can manage your collection. Here you can reorder pages, organise them into chapters and download a PDF or Open Document file of the book you've created. If you download in Open Document format, you can also edit the layout in Writer before printing. Back to the list ****** Setting up a minimal mail server with Exim 4 Q:: I'm trying to set up a remote host so that I can send logs via Mail, Mailx or Mutt using Cron jobs on, say, a weekly or monthly basis. Setting up the Cron jobs isn't an issue, it's the mail setup that I need help with. Currently, I have installed Exim4 on Ubuntu 9.04, but I require the configuration side so I can send mail via my ISP. Any help would be great, because all the docs I have read are conflicting. All I want to do is send emails from a script command or shell. A:: You don't need to use a full-blown mail transfer agent if all you want to do is relay mails through your ISP's mail server. There are a couple of minimal mail relays designed to do just this. They don't handle local delivery or receive mail from other servers - all they do is pass your mail on to a proper mail server, such as the one provided by your ISP. Each of these relay agents has its own set of advantages and drawbacks. First up is SSMTP, which doesn't appear to be under active development right now. Its other disadvantage is that it doesn't handle failure well - it can report success when the mail wasn't really sent. On the other hand, it supports sending over SSL/TLS encrypted connections, which can be important if you're using a wireless laptop and don't want anyone sniffing out your mail login and password. 
Nullmailer performs a similar task, is still in development and provides better information about the result of a mail attempt, but it doesn't support encrypted connections. Each program provides a Sendmail replacement, and Nullmailer also runs a mailer daemon. Both are available in the Jaunty repositories to be installed via Synaptic. To use SSMTP, just edit the /etc/ssmtp/ssmtp.conf file. There are only two settings you really need; Mailhub should be set to the address of your ISP's mail server and Root should be set to an address to receive mail addressed to the root user (the kind of mail sent by Cron tasks, for example). If your ISP's server uses a port other than 25, add this to the Mailhub setting, for instance:
---
mailhub=mail.myisp.net:465
root=me@myisp.net
,,,
Nullmailer uses several files in /etc/nullmailer, each containing one setting. These are as follows: 1. Adminaddr - Contains the address that root mails are received at. 2. Defaultdomain - Contains a domain that's added to addresses only containing a host. 3. Me - The fully qualified host name of the computer running Nullmailer. 4. Remotes - Contains one or more servers used to send the mail. The format of the remotes file is: --- address protocol options ,,, The address can be a domain or IP address, the protocol is almost always SMTP and the options can be used to set a different port or add user authentication. All of these are valid:
---
mail.example.com smtp
mail.example.com smtp --port 465
mail.example.com smtp --user=michael --pass=peekaboo
,,,
With Nullmailer you also need to set the service to run at boot. Now you can use Mailx or Mutt to your heart's content and let your ISP take care of delivering the mails. Back to the list ****** Hiding SMTP error messages Q:: I've recently been finding a lot of messages like the following in /var/log/maillog: --- NOQUEUE: server.domain.com [192.168.1.39] (may be forged) did not issue MAIL/EXPN/VRFY/ETRN during connection to MTA ,,, 
Can you tell me if this message is something meaningful? And if there is anything I can do to get rid of it? A:: Sadly, I cannot tell much from this message alone. It basically means that someone or something has connected to the SMTP port but has not sent a message and then broken the connection (or been disconnected by the server). Maybe you have a spam blacklist configured and it will not allow this sender through, or it may be a probe to check what mail daemon software you are using. It could be as simple as a dropped connection during a mail send. You will probably find that there is another entry in your logs just before this one, which will tell you more as to why this is happening. Back to the list ****** SD card not being recognised Q:: I have an Asus Eee PC 900, which is quite happily running Mandriva 2008.1. To give myself some extra storage space, I decided to purchase a 16GB SD card. This solution works well, but when I try a different operating system on the Eee or use my desktop, which is running Mandriva 2009.1, the card isn't recognised. I find I have to reinstall Mandriva 2008.1 to access the SD card. Do you know why this is, and is there any way you can help me with this problem? A:: The problem stems from the fact that your 16GB card isn't really an SD card - it's an SDHC card instead. These formats look identical from the outside, but they use different standards. SDHC is the high-capacity variant and can reach capacities of up to 32GB, while standard SD cards are limited to 4GB of storage space. Unfortunately, not all SD devices can handle SDHC cards, and this is particularly true of many card readers. This means that the inability of your desktop computer to read this card could well be down to your card reader. It's possible that it's a design that's simply too old to handle the newer format. As for reading the card in the Eee PC 900 when trying a different OS, you've already determined that it works. 
There's a 16GB card permanently mounted in our Eee PC 900 as well, so the hardware is certainly capable of reading it. This means the issue probably boils down to whether the kernel of your chosen distro has SDHC support. You don't say which other distros you've tried on your Eee PC, but most of them should recognise an SDHC card, especially any distros that are designed for use on netbooks. However, we've found that recognising an SDHC card can take quite a few seconds on some distros. It may be that you just need to be patient for a bit longer and you'll find the operating systems will eventually do what you want. Back to the list ****** Grepping by date Q:: I've set up a Cron job that runs a Bash script to check a certain server's health - its disk space, load and so on - then email the results to me. The script uses the grep command to find a log file and cat to output lines containing a certain string. All of this is working well, but I only want grep to tell me about the last few weeks or days of the log file that contains my string, not everything from its creation. Can you tell me if there's any way of doing this? A:: You can use awk to extract the date from the start of each line and convert it to a Unix timestamp that can be compared with your start point. This is expensive in processing time, so run this after you have used grep to save resources:
---
#!/bin/bash
DAYS=7
FROM=$(($(date +%s) - 86400 * $DAYS))
grep whatever /var/log/messages | while read LINE; do
  DATE=$(date -d "$(echo $LINE | awk '{print $1, $2, $3}')" +%s)
  if [[ $DATE -gt $FROM ]]; then
    dowhatyouwantwith $LINE
  fi
done
,,,
What this does is set a cutoff date based on the value you set for DAYS (86,400 is the number of seconds in a day). Then it reads each line of the grep output and uses awk to grab the first three space-separated items of the line. 
The standard syslog line format is --- Jun 26 12:30:37 zaphod dhcpcd[4037]: eth0: renewing lease of 192.168.1.1 ,,, so the awk command returns Jun 26 12:30:37. The echo | awk section is enclosed in $(...), which runs the command between the brackets and substitutes its output before running the command containing it. With this example line, that command would become: --- DATE=$(date -d "Jun 26 12:30:37" +%s) ,,, The %s tells date to return the date as standard Unix time, seconds since 1 Jan 1970, and the outer $(...) set means this value is passed to DATE=, which is set to 1246015837 here. Now we simply compare this with the FROM value to see if this log entry is more recent and process it if it is. You can also use backticks instead of $(...) for command substitution, but we use $(...) for two reasons. Firstly, it's more readable. Secondly, you can nest it, which you can't do with backticks. You could make this slightly more readable (at the expense of making it marginally slower) by separating the two commands like so:
---
DATESTR=$(echo $LINE | awk '{print $1, $2, $3}')
DATENUM=$(date -d "$DATESTR" +%s)
if [[ $DATENUM -gt $FROM ]]; then
,,,
Pick whichever suits you best. Back to the list ****** Puppy Linux Broadcom and Realtek networking not working Q:: I have an HP Pavilion laptop with a Broadcom 802.11b/g WLAN and a Realtek RTL8139/810X Family Fast Ethernet NIC. I've fallen for Puppy Linux, but when I run the connection wizard and install a module, which I'm told is installed correctly, it asks me to try a new one when I try to connect. What do I do? A:: Realtek cards use either the 8139too or 8139cp module, while Broadcom wireless cards usually use the b43 module. There are a couple of tests you can do to see which is running - all these commands should be run as the root user, so type su first and give your root password. 
Then run:
---
modprobe -l | grep 8139
modprobe -l | grep b43
lsmod | grep 8139
lsmod | grep b43
,,,
The modprobe -l command lists all the modules available on your system and lsmod lists all the modules that are currently loaded. Both are filtered by grep. Ideally, the b43 module should show up in both lists. If it's available but it isn't loaded, run this: --- modprobe -v b43 ,,, followed by: --- ifconfig -a ,,, to see your network interfaces. If wlan0 shows up, proceed with the network config. If you don't have the b43 module available, or it doesn't work with your wireless card, you may have to use NdisWrapper instead. This is included with Puppy; you just need to set it up. Find the driver INF file on the CD that came with your computer or wireless adaptor, and then run the following:
---
ndiswrapper -i /path/to/driver.inf
ndiswrapper -l
,,,
The first command adds the driver, the second lists all installed drivers, and the one you just added should appear here. Now tell the system not to use the b43 driver by adding "blacklist b43" to /etc/modprobe.conf with: --- echo "blacklist b43" >> /etc/modprobe.conf ,,, Add "alias wlan0 ndiswrapper" in the same way. Your wireless card should now be called wlan0, and the changes you made to modprobe.conf mean it will stay this way after a reboot. Back to the list ****** How do I install and use Wine? Q:: I'm a radio ham and use several programs that only work in Windows. However, I wish to use these programs in Linux, Ubuntu being my preferred OS. I'm aware of the Wine program, but I can't seem to obtain it. My request is simple. Where can I download a copy of Wine? A:: The Wine project's home page is at www.winehq.com, but this isn't the first place to look for it. Ubuntu, in common with other Linux distros, has large repositories of software packages pre-configured for the OS and ready to install at the click of a mouse. 
Run Synaptic Package Manager from the System > Administration menu and press the Reload button to make sure it has the most up-to-date list of available packages. Then type wine in the search box to see a list of Wine variants. Click the checkbox to the left of the main Wine entry, select Mark for Installation and press the Apply button to install it. Not only is this an easier way of installing the program, but you'll be automatically notified when there are newer versions available. Once Wine is installed, you'll see a new item in the Applications menu, although the only program available so far is Notepad. If you download a Windows program as an EXE file and double-click it, Wine will start and run the program. If this is an installer - as is usual for a downloaded Windows program - it will install the program to Wine's C: drive, which is the .wine/drive_c folder in your home directory. Once installed, it will appear in the Applications > Wine > Programs menu. Wine works by emulating the Windows programming interface, so the programs think they are running on Windows. This is an imperfect science, so not every program will work perfectly, but every release of Wine adds better support for more programs. The packages in the Ubuntu repository are recent, stable versions that have been thoroughly tested with that distro release. There may be a more recent or beta version available from the project's website. If your program doesn't work with the supplied version of Wine, go to www.winehq.org/download/deb and follow the instructions for adding their repository to Ubuntu's list of software sources. You don't say exactly which programs you're using for your ham radio activities, but there are several Linux programs in this field. All else being equal, a native program is a better option than running one on an emulation layer. 
It would be worth asking a few questions on some of the popular ham radio websites, forums and mailing lists to see what other Linux users recommend. Switching to Linux is usually easier if you start by dual-booting. Rather than trying to switch to Linux for everything in one go, install your chosen distribution alongside Windows and you can make a more gradual transition. Most distros have an option to do this, enabling you to choose which operating system you run each time you boot. Alternatively, if you only need one or two Windows programs and they don't work in Wine, consider installing VirtualBox and running Windows in a virtual machine for those programs. Back to the list ****** Getting started with TightVNC Q:: I need your help with setting up two TightVNC servers: Windows XP and Mandrake 10.1. I also need help using the client, as I have never used this tool before. I used to have a KVM switch so I could switch between them both, but that's now shuffled off this mortal coil. I need desktop connections to both my PCs and have been told that TightVNC was the best application to do the job. I have installed it in Linux but have no idea how to configure it or start the service running. A:: You can run TightVNC on Linux with vncserver, which will create a new X server instance on the system for you. Usually this will be :1, which runs on port 5901 for VNC; you can then access this using a VNC client on the Windows system. A selection of basic X applications will be started, through which you can get a terminal going. Back to the list ****** Fixing Apache virtual host problems Q:: In Apache, virtual host aliases don't work when the RewriteMap is a PHP program, though they do work with a plain txt map. 
Here's a part of my httpd.conf:
---
RewriteMap lowercase int:tolower
RewriteMap host-map prg:/var/www/hosts.php
RewriteEngine On
RewriteRule ^/icons/(.+) /var/www/icons/$1 [L]
RewriteRule ^(.+) ${lowercase:%{HTTP_HOST}}$1 [C]
RewriteRule ^(www\.)?([^/]+)/cgi-bin/(.*) /var/www/users/${host-map:$2|$2}/cgi-bin/$3 [T=application/xhttpd-cgi,L,E=VHOST:$2]
RewriteRule ^(www\.)?([^/]+)/(.*) /var/www/users/${host-map:$2|$2}/$3 [E=VHOST:$2]
,,,
Here's the hosts.php:
---
#!/usr/local/bin/php -q
<?php
mysql_connect("localhost","iglou","Frosties");
$fdin=fopen("php://stdin","r");
$fdout=fopen("php://stdout","w");
set_file_buffer($fdout,0);
while($l=fgets($fdin,256)) {
    fputs($fdout,key_lookup($l)."\n");
}
function key_lookup($key) {
    $res=mysql_query("SELECT dir FROM iglou.web_aliases WHERE alias='$key' LIMIT 1");
    if(@mysql_num_rows($res)) {
        return @mysql_result($res,0,0);
    } else {
        return $key;
    }
}
,,,
And the MySQL 'web_aliases' table in the database 'iglou' looks like this:
---
CREATE TABLE `web_aliases` (
  `id` int(6) NOT NULL auto_increment,
  `ws_id` int(6) NOT NULL default '0',
  `dir` varchar(50) NOT NULL default '',
  `alias` varchar(50) NOT NULL default '',
  PRIMARY KEY (`id`)
) TYPE=MyISAM AUTO_INCREMENT=2;
INSERT INTO `web_aliases` VALUES (1, 0, 'original-domain.net', 'alias-domain.net');
,,,
The problem is that when I try to access alias-domain.net it points to /var/www/users. What do you think is the problem? A:: The first step for you to take is to run the PHP script from the command line and pass it a domain name, which you can then use to verify that it is sending the correct information back to Apache. It may be that there is a hiccup with connecting to the database, or that the script is bailing out at some point. The fact that it returns nothing, rather than the original entry, suggests to me that it's not very happy with something in the database. We tested your script, and it worked. 
However, as we were building the database config from scratch, it's likely that we missed a problem existing in your configuration. Back to the list ****** Chain loading Grub with two hard disks Q:: I have two hard drives in my PC, labelled as hda and hdb. Hda contains MDK 10.1 and Win2K. I put in hdb yesterday and installed Ubuntu on it, hoping that the boot loader (Grub) would detect the operating systems on hda. It did so - however, when I came to try to boot any of the options it couldn't find the image - or something like that. I installed Mepis over Ubuntu and got the same problem. Do I need to change the boot loader of hda instead of hdb? A:: If you installed Ubuntu on to hdb, you will need to tell your BIOS to boot off the second disk, or install a chainloader entry in Grub on the first disk to jump to Grub on the second. If you want to temporarily boot off the hdb Ubuntu drive without making the changes permanent, most BIOSes have a 'select boot device' option available by pressing F11 or F12. Back to the list ****** Accepting domain literals on a mail server Q:: I have a Red Hat 8.0 server with one primary domain. A friend of mine recommended I check out www.DNSreport.com, which performs a variety of useful tests on the DNS records as well as the server itself. Everything went through fairly well but my domain failed on one test. The following is from DNSreport.com: --- ERROR: One or more of your mailservers does not accept mail in the domain literal format (user@[0.0.0.0]). Mailservers are required RFC1123 5.2.17 to accept mail to domain literals for any of its IP addresses. ,,, I'm not sure how to go about fixing this problem - or even if it's worth fixing. A:: RFC 1123 requires the ability to use domain literals (ie using [s and ]s) to specify the IP address of a mail server, and thus bypass normal DNS mechanisms. For security and spam-prevention reasons, not all mail servers are configured with it enabled by default. 
If you would like to have your Sendmail daemon accept mail sent to it in this way, you can add a line containing only [10.10.10.10] to /etc/mail/local-host-names, where 10.10.10.10 is the IP address you would like Sendmail to listen to. Back to the list ****** Using fdisk to repartition a drive Q:: I have an 80GB hard drive, which is configured as follows:
---
hda1 12.9GB Vfat (Win98 SE) 0-27045
hda2 776MB Linux Swap 53789-55366
hda3 16.7GB Linux 55367-90269
hda4 32.9GB Linux 90270-158800
,,,
Using YaST I managed to reduce the size of hda1, thus leaving a space at 27046-53788. Unfortunately fdisk does not see the spare space and I must first delete a partition. Is there any way I can use fdisk to delete hda1 and reconfigure it as two partitions without destroying the data on hda1? If I reconfigure hda1 as 0-27045 and the new partition as hda5, would hda1 be reformatted and the data destroyed? A:: As you already have hda1-4, you can't create any other partitions without deleting one and creating an extended partition. The swap filesystem is an ideal candidate for this, since you can dump it without losing any important data. You can create hda2 as an extended partition spanning 27046-55366, then create hda5 and hda6 as logical partitions within hda2. Being limited to four primary partitions is a 20-year-old legacy issue, which unfortunately is still sticking around. Usually it's a good idea to avoid creating more than one primary partition, and just use extended partitions for everything else, so that if additional partitions have to be created it's easy enough to do so without deleting anything. Each disk can have a single extended partition, which can contain as many logical partitions as you like. Back to the list ****** Best virus scanner for Linux? Q:: I have been looking at (free) antivirus software for Linux and the two that have come to my attention are F-Prot and Panda. 
I have installed the F-Prot RPM but note that it's command line-only, and try as I may I can't seem to actually get it to load. Panda also seems to be a command line scanner so I suspect that every time I want to scan a file I have to do it manually. Is there such a thing as a GUI-based antivirus scanner for Linux that will just keep running in the background? A:: There really is limited demand for a virus scanner under Linux. Most virus scanners for Linux are built for use on mail servers, which filter email destined for Windows systems to protect the users. As there are so few viruses that target Linux applications, there is no need to run a dedicated virus scanner. However, you may want to ensure that you regularly run chkrootkit to verify that the system has not been exploited through a vulnerable service, or an exploited binary. If you're set on a scanner, one of the best we've used is ClamAV, available from http://clamav.sf.net/. Back to the list ****** Using fdisk to verify a partition Q:: Last night I installed Mandrake on my daughter's PC to run an MP3 player not supported by Win98. Initially I tried Partition Magic to create the partition on the C drive. Needless to say it crashed, as did Windows, and a reset was needed, so I used the Mandrake partitioner. So far, so good. The problem we have is the D drive, which I made no attempt to touch. Windows can't read it, though Mandrake could last night. I decided to copy the contents of the D drive on to C so that the D drive can be formatted. However, this morning Mandrake can't see the files. There's about 4GB of data still there comprising program files for installation and MP3 files (life-threatening). I strongly suspect you will say there's nothing that can be done other than to bite the bullet, but anything more positive from you will be much appreciated. A:: You can use fdisk to verify which partitions exist on which disks, by running fdisk -l /dev/hda, and so forth. 
You didn't indicate if drive D is on a second physical disk, or a partition on the first disk. It may be that the partition structure became corrupted during the resizing process. You will have to manually mount the disk in Mandrake by running: --- $ mount /dev/hda2 /mnt/dos ,,, replacing /dev/hda2 with the partition containing the files. Back to the list ****** Permission denied reading CDs Q:: I want to give myself, as user, permission to use my CD-ROM, CD writer, floppy and so on, and avoid the 'Permission Denied' error message when using programs such as CD Player. If I go into a terminal as su - or log in as root - and type chmod 666 /dev/hdc, all is then OK for the duration of the session. Unfortunately, when I reboot I lose the new settings, and am back with Permission Denied! What am I doing wrong? Is there any way of getting Linux (Fedora) to remember the settings? This seems such a basic problem and must affect many newbies, yet it never seems to be covered in articles or textbooks (which I have studied by the score). A:: You can solve your permissions problems by adding your user to the cdrom group, by adding the username to the line starting 'cdrom' in /etc/group. Unix is traditionally rather restrictive about users accessing physical devices, so everything is split up into multiple groups in order for granular access to be provided. For other devices, check out the file in /dev and see which user and group it is set up as. Back to the list ****** Solving file permissions problems with Gallery Q:: Before we begin let me tell you I'm not some super computer-user and all I know about Linux is that my website runs on it. I have Gallery installed on my website so that my family and friends can check out my holiday photos. I run the site on web space I was given on the server at work, but they have recently upgraded their system and apparently Gallery has been upgraded from version 1.3 to 1.4. 
When I try to move my albums back into Gallery it tells me that I need to upgrade all my albums. This is fine by me, but when I click Upgrade it tells me that it cannot access the album.dat file - Permission Denied. The guy who runs the server said he set all the ownerships to my user account to try to overcome this. He can't really help me much more as he is already doing me a favour by doing all this during working hours. What could I recommend to him to try to resolve this quickly? A:: Gallery is extremely fussy about the permissions and ownership of its files. I think it's more than likely that the data he copied off the old server has some inconsistencies with the data on the newer system. Ask him to make sure that all the file ownerships are the same as before, rather than setting them all to your user account, as there are some files that need to be readable and writable by, for example, the Apache user. The best thing to do would be to take the exact error that you're getting to the Gallery website (http://gallery.sourceforge.net/) and see if they have it listed in their extensive documentation section. If your error is not listed anywhere it might be worth posting a comment in the forums. Back to the list ****** Via graphics drivers Q:: Is there a Linux driver for the built-in graphics in the KM400/A? A:: I like short and to the point questions. The answer is yes, but it depends what distro you are using. Go to www.viaarena.com/default.aspx?pageID=2&Type=3 to access the drivers for Fedora, Mandrake, Red Flag, Red Hat, SUSE and some non-specific options. Back to the list ****** Making passive FTP work Q:: Unfortunately, I have recently had my system security compromised and I am now running iptables to filter traffic into my Fedora server. The server runs a corporate website and anonymous FTP for software downloads. I am allowing incoming traffic to port 80 and ports 20 and 21. 
However, I am unable to download files from the server via FTP when connecting using passive transfers - the connection just times out. Can you help me? A:: The reason passive FTP is not working is that in passive transfer mode the server tells the client to open a new connection to it on an arbitrary high port (above 1024). However, the firewall needs to be configured to accept traffic on this port in particular, as if you open up all the high ports then essentially your server will be wide open again. This was a classic problem with older software firewalls and packet filters (such as ipchains and ipfwadm). However, iptables can do stateful connection tracking. First you'll need to verify that you have ip_conntrack and ip_conntrack_ftp compiled into the kernel or compiled as a loadable module (this should be the case with a stock Fedora kernel). With this done you can add the following iptables rules: --- # The following two rules allow the inbound FTP control connection iptables -A INPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT iptables -A OUTPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT # The next two rules allow active FTP connections, where the server connects out from port 20 iptables -A OUTPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A INPUT -p tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT # These last two rules allow passive transfers, which conntrack marks as RELATED iptables -A INPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT iptables -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED -j ACCEPT ,,, In the active FTP transfer rules above the client sends the server a high port to connect to and the server connects to this port from port 20 to initiate the transfer. However, if the client is also behind a firewall that isn't stateful then this will not work, and passive transfers will be required.
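Before adding the rules, it's worth confirming that the connection-tracking helper is actually loaded - a quick check (needs root; module names as found in stock 2.4/2.6 kernels):

```shell
# Load the FTP conntrack helper, then confirm both modules registered
modprobe ip_conntrack_ftp
lsmod | grep ip_conntrack
```

If the modules are built into the kernel rather than as modules, lsmod will show nothing, but the rules will still work.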
Back to the list ****** How to read root mail Q:: In my daily activities I run my system as a normal user, not as root. So far, so good. What I want to know is if it's possible to read the mail of root while logged in as a normal user. I run SUSE 8.2 and there I have the possibility of running a root console, but when I enter root and the password for root and then enter mail, I only get the mail of the logged-in user. I would like to see the mail of the root user but to do that I have to log out and log back in as root; and after reading the mail, I have to log out and then log back in again. There has to be a better way to read the mail of the root user without having to go through the whole logout and login process. Can you help? A:: You can configure the mail routing for root through /etc/aliases, and mail can be delivered to any system user rather than the standard root user. Depending upon the mail software, you may have to run newaliases to rebuild the database used by the MTA (mail transfer agent). Back to the list ****** Making Conexant modems work on Linux Q:: I have just bought Linux SUSE 9.2 Professional. The problem I am having is with my internal modem. Written on the modem is CONEXANT CX06834-11. This modem runs well with Windows XP but will not run with SUSE 9.2. Can you tell me what modem (internal) will run with Linux, and where it is available? I have tried linmodems.org and linuxant.com but had no luck. A:: Conexant do not write a Linux driver for any of their modems. However, Linuxant offer Linux support for your modem, though they do charge a fee for this driver. They previously made a free beta driver, which they have since taken down, but you may still find it somewhere. The paid-for version costs $15 and entitles you to a year of upgrades on this driver. SUSE 9.2 is fully supported. You can download a free version which limits you to 14.4Kbps from www.linuxant.com/drivers/hcf/full/downloads.php so that you can verify this before buying.
Back to the list ****** Solving Bluetooth connection problems Q:: A while ago I took the plunge and installed SUSE 8.2 with a Centrino wireless card on my Acer laptop. I've since upgraded to 9.2 and have 9.1 Professional installed on my desktop. I expected all sorts of problems with the laptop, but they didn't materialise, except for one major issue - I can't get it to talk to anything. That's not quite true - it will connect to the internet if I cable it into my Linksys router, but... Neither of my Linux boxes can see one another (I set the Samba server up on the desktop using YaST, and the laptop up as a Samba client). The wireless card won't connect to the internet or see the other computer. I can't get a connection via bluetooth to my T610 phone. Infrared works according to YaST's Test button, but won't do anything beyond this. The inbuilt modem is recognised and dials telephone numbers, but beyond that PPPd crashes - no connection again. I'm on the verge of stripping SUSE off the laptop because it's using up so much of my work time just trying to get connected to deliver work to clients. Why does it have to be so difficult? I'm not a techie but I'm a very competent Windows user. If I could find someone to talk me through this, I'd feel different, I expect. As it is, I only have limited time to trawl the internet and then start tailoring dangerous-looking configuration files. I like Linux and I support open source software and make donations but it's just getting to be too much. If you can help me get wireless and bluetooth operating on my laptop and make my two Linux boxes 'see' one another, I'd be mighty pleased. Otherwise, I guess it'll be back to Windows but using as much open source as I can. A:: The simplest way to solve this problem is to begin at a low level and try to ping hosts on the network. If you can ping the other system by its IP address, then the chances are that the basic network is OK. 
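For example, from the laptop (the address below is only an illustration - substitute the desktop's real IP):

```shell
# Send three echo requests to the desktop; replies mean the
# low-level network between the two machines is working
ping -c 3 192.168.1.10
```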
There are situations where pinging works and file transfers do not, but these are few and far between, and are generally limited to complex network configurations. You can verify the IP configuration and routing on the laptop using ifconfig -a and route -n. Your on-board Ethernet will be eth0, and your wireless will be eth1 or wlan0, depending on how the distribution handles wireless access. If you can access the wireless router, but can't get out onto the Internet, then the fault is likely to be a routing issue on the device; either because a default route is missing, or the system is trying to send all traffic out of the wired Ethernet interface. Without information on specific configuration options, and the current state of the system, it's difficult to put my finger on an individual cause of your network problems. Samba on each host will need to be configured via the /etc/smb.conf file, so that they both belong to the same workgroup. Even without this change, you'll be able to access shares permitted in /etc/smb.conf by specifying the IP address of the host in the Samba client. Rather than using Samba, file sharing on Linux is better done using NFS, which can be configured using the SUSE system configuration tools, or by editing /etc/exports. With all laptops, it's a good idea to look at www.linux-laptops.net, and see what success others have had with Linux and the specific configuration options used. Laptops are, unfortunately, rather strange beasts, and it can be difficult for developers to get their hands on every single variant out there. You may want to give a distribution such as Mandrake or Fedora a try and see if you have any more success. Often, different Linux distributions have kernel patches installed, which resolve problems interacting with various hardware devices. Back to the list ****** Best server hardware for Linux compatibility? Q:: I've got a good one for you.
I'm not sure that this can be categorised as a technical question but the only other places I can find answers are bound to be biased. I'm a Linux business user. Most of our back-end servers and services are Linux-based. Our users don't care whether we use Red Hat, SUSE, Microsoft Windows or Baron Samedi-style voodoo - they all have Windows desktops and essentially just want to browse the net and get their mail and files. We're upgrading our server hardware, which is extremely dated and is about to fall out of warranty (it's already been End of Life for some time). If fortune favours us we'll be able to do this upgrade one server at a time, so we're not under immense pressure to get the entire network done in one go. We have a decent budget but can't go on a complete shopping spree. Now for the questions: what's the best server hardware to go for if we're looking for Linux compatibility? We'd like the vendor to have official Linux support - not just some guy on the net who's got some source for BSD we can try to cross-compile with mixed results. Secondly, is it really worth going for one of the paid-for Linux distros? All our current servers use Red Hat 8, which works pretty well. A Red Hat-based distro would seem an obvious choice but is Red Hat's Enterprise Linux the best option, or would we be better off with Fedora? Having said that, if we're going to be paying money for this, would SUSE be better? Thanks for your help. A:: Wow, an IT department with a budget, fantastic start! Before I give you my view and trigger an onslaught of hate mail please remember that this is only the opinion of one simple man trying to make his way in the universe, based on my own experience with server hardware and Linux distributions. Most of the hardware vendors out there are really very good. I'd say there are two main categories to choose from here. The first is the top-tier hardware vendors - Dell, HP, IBM and the like.
These guys make phenomenal hardware - it's their business to, but many of them only support Linux as an afterthought. From my own experience Dell have it covered on their rack-dense servers. They can offer you Red Hat Enterprise or SUSE preinstalled at the factory, which means you can be confident of having good driver support. Another big company taking bold strides is IBM. IBM has always been a favourite of mine, and with the millions of dollars they're pumping into open source they'd be a safe bet. At the other end of the spectrum you get the true grass roots Linux companies that make their own servers mostly out of commodity clone hardware. There are loads of such companies around, and most of them are small and so give a more personalised service than the big hitters. These companies are built on Linux so providing a product built with Linux in mind is what makes them tick. When it comes to picking a vendor for your software it gets more blurred. Here are the main reasons I would be willing to pay for a Linux distribution: If a company provides a Linux package they're obliged to keep it running securely. There's someone to call, even if it may cost a little money. Different levels of support are available for different budgets. Often the people paying the cheques like to know that there is somebody they can hold accountable for a failure in service, either of the product or any of the ancillary services. I'll focus on Red Hat in particular as I have no real experience with Novell/SUSE's commercial offering. Red Hat will give you the actual operating system license as well as a subscription to their up2date service for patches. Also, if they release a newer version (such as the upcoming RHEL4) you'll be able to download and install that too. For the approximately £500 standard package they'll answer an unlimited number of queries within four hours during business hours. This level of service can be upgraded all the way to one-hour response times, 24/7.
SUSE's free product will also work very well for you, but don't expect anything more than Google for help; you really do get what you pay for when you're talking support. Having said that, if you've been using the free Red Hat product for some time, you can probably support yourself quite adequately whatever distro you go for. Back to the list ****** Fix slow USB ports on Linux Q:: I have a D845WN Intel motherboard, which (according to the Intel website) has Hi-Speed USB 2.0 ports. Unfortunately I've not been able to attain high speeds, even after installing all the relevant software for my motherboard from the Intel site. I'm running Fedora on my machine and have an external USB 2.0 hard disk. Using it on USB 1.1 ports is extremely frustrating. Can you tell me how I can enable Hi-Speed USB 2.0 speeds? A:: USB 2.0 under Linux requires a supported USB 2.0 controller and the use of the EHCI module to access the USB subsystem. You can verify which USB modules your system is loading by using dmesg, which displays kernel information from system boot time. However, what you describe may be related to a problem with the EHCI module itself. Some 2.6 kernel releases included EHCI driver software which seems to confuse some controller cards (to be fair, the EHCI drivers are still rather experimental). This can cause quite a few problems. The easiest way to fix your woes is to change your kernel. It is possible to go back to an earlier version, but before you try that, check out http://download.fedora.redhat.com/pub/fedora/linux/core/updates/3/ for updates to the Fedora kernel. Back to the list ****** Can't get bzip2 to work Q:: I'm having problems trying to copy the Gambas application from a magazine CD. Using find /mnt/cdrom2 gambas-1.0.1.tar.bz2 -print results in /mnt/cdrom2/Magazine/Gambas/gambas-1.0.1.tar.bz2; but adding this pathname to the command tar xvf --bzip2 generates the error message 'tar: --bzip2: Cannot open: No such file or directory'.
Replacing 1.0.1 with 2.1.0 in the filename produces the same message. What am I doing wrong? A:: The error message gives the game away: with tar xvf --bzip2, tar takes --bzip2 to be the name of the archive to extract, which is why it complains that it cannot open it. The f flag expects the archive filename immediately after it, so the option has to come first - try this: --- tar --bzip2 -xvf /mnt/cdrom2/Magazine/Gambas/gambas-1.0.1.tar.bz2 ,,, You can also use the slightly shorter "tar xvfj [filename]" with most versions of tar. Back to the list ****** Samba on SUSE Q:: I'm having trouble getting Samba to work on my SUSE 9.2 box. I use a Belkin 10/100 Ethernet card on my Linux box; a Via Rinefire onboard 10/100 Ethernet card on my Windows box; and a five-port Belkin Ethernet switch. The Belkin NIC is detected and configured by SUSE, but I can't figure out how to get Samba to work. It worked under my old system (Fedora) but I don't want to go back to this, as I want to listen to MP3s and use the software (like Scribus and KMyMoney) that comes with SUSE. Any help would be appreciated. A:: You'll need to verify that Samba is running and that the firewall is turned off - use YaST for this. If the firewall is enabled, remote systems will be unable to access the Samba service. As an alternative, you can open up specific ports on the firewall to permit access to Samba: --- TCP: 137, 138, 139, 445 UDP: 137, 138 ,,, Put the following lines in the /etc/sysconfig/SuSEfirewall2 configuration file: --- FW_SERVICES_EXT_TCP="microsoft-ds netbios-dgm netbios-ns netbios-ssn" FW_SERVICES_EXT_UDP="netbios-dgm netbios-ns" ,,, You will also need to enable broadcast packets on the firewall: --- FW_ALLOW_FW_BROADCAST="yes" ,,, Et voila! Hope this works. Back to the list ****** Modem configuration across Linux and Windows Q:: I have just installed SUSE 9.2 as a dual boot alongside Windows XP Home, which I still use. I am completely new to Linux, and the installation couldn't have been easier. There are, however, some questions I can't easily find answers for on the net.
YaST has recognised most of my hardware, graphics, sound etc, including the fact that I have a USB ADSL modem. But it doesn't recognise the modem itself, just the fact that I have one. When I click on the modem entry in the hardware list, the Configure button stays greyed out. The modem is a Sagem Fast 800/840, which I have connected via USB rather than Ethernet card. There are instructions for Linux on the modem's install disc, but I'm afraid it's all a bit over my head. Is there an easy way to install this modem on SUSE? Are there simple instructions in newbie terms? More importantly, if I configure the modem for SUSE, will it still work when I switch to Windows to go online from there? Can it be used on both OSs without problems? My ISP is Tiscali, and I connect on a 512k broadband connection. Are there any problems from Tiscali's side if I connect with both Linux and Windows? Also, where do I find the options for setting up an internet connection in SUSE? (ie is it as simple as it is in Windows?) And is email set-up similarly pain-free? With anticipation of some help for a helpless Linux new boy, thanks very much. A:: We were able to find some documentation on the configuration of this modem with Linux, although it is fairly complex - and in French. You can find it at http://lea-linux.org/hardware/sagem.html?v=t. You'll be able to use the DSL modem from both Windows and SUSE as you dual-boot, although your ISP probably won't support the connection for Linux - if it doesn't work, don't expect them to help you. Configuration for internet connectivity via supported devices in SUSE is performed through YaST, so if you have any internal Ethernet connections, or a dial-up modem, you can set them up this way. You have quite a choice of mail clients - we would recommend Thunderbird from www.mozilla.org, which is a great client with an easy-to-use interface. Back to the list ****** Best ADSL modem for Linux users? Q:: I've just installed SUSE 9.2 from your latest DVD.
I religiously installed each of the main distros as you published them, hoping against hope that I would eventually have a Linux platform which would allow me to connect to the internet. I have a broadband connection via Wanadoo using an Alcatel SpeedTouch USB modem, which looks rather like a green, limbless crab. I was able to connect with this modem back in the days of Mandrake 8, but have been unable to connect since upgrading. I've tried Mandrake, SUSE, Fedora and Red Hat, all to no avail. Can you please help me, or (if I need to purchase a different modem) recommend one that SUSE will recognise? I would be forever in your debt - as would be my barber once I stop tearing my hair out. A:: Lots of information on the Alcatel SpeedTouch USB modem (otherwise known as 'the frog') can be found at http://linux-usb.sourceforge.net/SpeedTouch/. This includes open source versions of the drivers, as well as setup documentation to get you onto the internet using the modem. As you are running SUSE 9.2, you can follow the instructions at http://linux-usb.sourceforge.net/SpeedTouch/suse/index.html to get it up and running. Wanadoo gives you the option of using either PPP over Ethernet (PPPoE) or PPP over ATM (PPPoA), but the SpeedTouch USB documentation suggests that using PPPoA is a better option. In either case, you'll need to follow the specific instructions for the PPP method used to connect to your ISP. Back to the list ****** AMD vs Intel support for Linux Q:: I have finally gained the intestinal fortitude to have a go at using Linux. I have been interested in the concept of Open Software for some time and I use Firefox, Thunderbird and Open Office as my main programs of choice. While I like to think that I am reasonably computer literate (I am 69 years of age) and I assemble my own computers, I am a bit perplexed on one point.
I use mainly AMD processors for my computers and I have found that almost all of the Linux distros I am interested in - Fedora or Red Hat - seem to call for an Intel-based computer. My question is - can I install Linux on my AMD-based computers or would I require an Intel processor? A:: All of the distros you mention will work quite happily on AMD processors, as should any others. The requirements quoted are often confusing in several respects, and when 'Intel' is specified, it usually means that the processor must follow the Intel architecture, IA32. This is the case for AMD and many of the other drop-in replacements for Intel processors, such as those made by VIA and others. There are very few differences in the capabilities of these processors, which are normally limited to multimedia extensions such as MMX or 3DNow! These will make very little difference to all but a few applications. Back to the list ****** Installing unsupported software on RHEL Q:: I would like to upgrade from MySQL 3.23 to MySQL 4, but my Red Hat Enterprise Linux ES server does not have the relevant package available. I can see that a package is available from the MySQL site but I'm worried that it will break my server. Can you give me any advice on this job? I know how to do the actual install of the RPM - I just don't know what the consequences will be. A:: The upgrade itself should pose no problems. However, please bear in mind that the RPM from MySQL will probably have a different username to those used by the Red Hat version, as well as some different paths. Any third-party programs you have that link against the MySQL 3.23 libraries may also need to be updated. The table structure between 3.23 and 4.x is totally compatible, but the privilege tables in the mysql database have a few extra columns that will need to be added. There is a script included in MySQL called mysql_fix_privilege_tables which should resolve any issues with this.
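As a rough sketch, the post-upgrade step looks like this, assuming the script is on your path and your MySQL root password is set (the password shown is a placeholder):

```shell
# Bring the 3.23 grant tables up to the 4.x layout;
# the 4.0-era script takes the root password as its argument
mysql_fix_privilege_tables your_root_password
```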
One thing to think about before you go through with the upgrade is that Red Hat does not officially support MySQL 4 - so you'll lose all support for this aspect of your operating system. I've seen this combination work many times, but if you decide to go ahead don't forget to add MySQL to the up2date ignore list, or you will automatically downgrade to 3.23 next time up2date runs. Back to the list ****** Which routers are Linux-compatible? Q:: I've been running Mandrake Linux 8.2 with Windows 98 SE on my PC. After five months of running both, I've decided to get rid of Windows and the partitions, and use Linux full-time. I have just bought your Complete Linux Handbook 2, and intend to install Mandrake 9.2 from the DVD. I've ordered 2MB broadband (without tech support) from Madasafish, who cater for Linux, and I want to use a wireless connection to my PC as it will not be staying where it is. Can you advise me on a wireless modem/router? Would I be better off with two separate units, and will I need some sort of a card in my PC? I'm not having any luck finding something suitable on my own (not knowing what I'm looking at doesn't help). The products need to be reasonably simple for someone as ignorant as me to set up. Any help you can give me would be gratefully appreciated. A:: We would recommend you start out by installing a recent distribution of Linux, such as Mandrake 10.1 or Fedora, rather than trying to fight with something a year or two old. You can find a list of wireless devices that work with Linux from www.prism54.org, and you should probably pick a DSL router from the vendor that you're purchasing your wireless adaptor from. As you have a LUG nearby [Malvern], you may want to join their mailing lists and find out what success others have had with specific devices. You can either purchase a wireless bridge, which provides wired Ethernet access to your device, or a PCI card which has a wireless adaptor built in.
Many manufacturers (including D-Link and Netgear) make DSL and wireless devices, so you have quite a selection to pick from. There are also a number of low-cost vendors - we recommend that you avoid these, otherwise trying to find support from a LUG or on the internet is going to be quite a trial. Back to the list ****** Sound problems with VIA chipsets Q:: I'm a newbie with regards to Linux, but with the offer on your cover of SUSE 9.2 I thought I'd give it a try, and set my machine up to dual-boot both Windows 98 SE and SUSE 9.2. I must say I'm very impressed. The install was a lot easier than Windows' and I'm thinking of doing away with Windows altogether. The only thing stopping me is the inability to get my onboard sound working. My PC specifications are: AMD Duron processor running at 1,600MHz, 512MB DDR RAM, ASRock K7VT2 motherboard with onboard sound, LAN, USB 2.0, etc, Maxtor 40GB HDD, Bearpaw 1200Cu scanner, Epson 810 Colour Stylus photo printer, Compaq Presario 1425 monitor. A:: The ASRock K7VT2 motherboard uses a VIA chipset, which has onboard AC97 compatible audio. If you are running a 2.6 kernel you can add the following to your /etc/modprobe.conf file (/etc/modules.conf on a 2.4 kernel): --- #--- START ALSA ---# #--- ALSA ---# alias char-major-116 snd alias snd-card-0 snd-via82xx # (sound-card-0 is probably not needed, but just in case) alias sound-card-0 snd-card-0 #--- OSS ---# alias char-major-14 soundcore alias sound-slot-0 snd-card-0 #--- ALSA - CARD ---# options snd cards_limit=1 #--- ALSA - OSS ---# alias sound-service-0-0 snd-mixer-oss alias sound-service-0-1 snd-seq-oss alias sound-service-0-3 snd-pcm-oss alias sound-service-0-8 snd-seq-oss alias sound-service-0-12 snd-pcm-oss #--- ALSA - /dev (OSS) ---# alias /dev/sequencer* snd-seq-oss alias /dev/dsp* snd-pcm-oss alias /dev/mixer* snd-mixer-oss alias /dev/midi* snd-seq-oss #--- END ALSA ---# ,,, Once the audio device is accessed, it will automatically load the modules for you.
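To check the result without rebooting, you can load the driver by hand and see whether ALSA registers the card (needs root; assumes ALSA's /proc interface is available):

```shell
# Load the VIA AC97 driver manually, then list the detected cards
modprobe snd-via82xx
cat /proc/asound/cards
```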
Back to the list ****** SpeedTouch USB modems on Linux Q:: I finally have a broadband connection thanks to a USB SpeedTouch 330 modem, which, according to a multitude of pages on the internet, can be used with Linux. Here is the problem: they all mention that I need to download firmware and perform several steps with the firmware in order to get the modem working. My understanding of the meaning of firmware is that it is the software that sits on the modem itself; so if I carry out the instructions as spelled out on http://linux-usb.sourceforge.net/SpeedTouch/fedora/index.html for Fedora I should end up with a working modem for my Fedora system. If I have to update the software on the modem to get it to work with Linux, will it stop the modem from working with my existing Windows XP installation? I really don't want to proceed any further until I find this out as flashing things like BIOS/firmware scare the living daylights out of me! A:: You can relax: despite the name, nothing is permanently flashed onto a SpeedTouch 330. The modem holds its firmware in memory, and the driver - under Windows just as under Linux - uploads the firmware image to the device each time it is initialised. The steps on the page you mention simply make that image available to the Linux driver, so following them cannot affect your existing Windows XP installation; next time you boot into Windows, its own driver will load the firmware just as it always has. If you're still nervous, the nice people in SpeedTouch's technical support department should be happy to confirm this before you start. Back to the list ****** Configuring RAID arrays Q:: I have been experimenting with Linux for the past two years and would consider myself to be an enthusiast - if only at quite a basic level.
I recently purchased a new computer from MESH and decided to opt for an AMD 3200 Athlon 64-bit processor on an ASUS K8VSE Deluxe motherboard with the intention of installing my favourite distribution, SUSE Professional 9.2, in dual boot mode with the pre-installed Windows XP. This is where the problems started. The motherboard has an on-board Promise FastTrak 378 controller, which the 200GB SATA hard drive was configured to use in a RAID 1+0 array. When I tried to install SUSE Professional 9.2, having made space on the hard drive using Partition Magic from the Windows XP OS, the installation procedure advised me to disable the hardware RAID 1+0 array and to create a software RAID 1+0 array within SUSE using YaST. I was concerned that if I did this I would not be able to use the Windows XP OS installed and therefore have not been able to install the SUSE distribution. The ironic thing is that I do not need to have the computer configured to use the RAID 1+0 array as I only have one hard drive installed. I would like to know whether it is possible to install SUSE Professional 9.2 in dual boot mode with the pre-installed Windows XP OS or whether I have to re-build the computer from scratch not using the Promise drivers during the installation and not configuring a RAID 1+0 array? I have also installed a separate 40GB ATA hard drive connected to one of the motherboard's IDE connectors to see whether I could install SUSE onto this drive but was not successful. I would be grateful for any advice you could give me. A:: We're rather confused as to why the Promise FastTrak controller would let you create a RAID device with a single disk, much less a RAID 1+0 array, which requires at least four disks. You can try to disable any RAID capabilities in the FastTrak BIOS, and as you've only got a single disk, the BIOS should boot from it quite happily. SUSE will detect the RAID array as a device, and allow you to partition and write information to it.
As a test, you can boot using a Knoppix 3.7 CD, or attempt to install Mandrake 10.1 or Fedora, which may have better support for the SATA controller on your board. Many boards that provide SATA only recognise certain ATA controller ports. If you can't install SUSE onto an ATA disk, there is probably a misconfiguration within the BIOS. You can try to turn off 'Legacy Mode', to allow both SATA and ATA to work on their own: Legacy Mode is designed for older operating systems that get confused when SATA is available. Back to the list ****** How to make Iomega zip disks work Q:: I've installed SUSE 9.2. However, I've had to revert to 9.1 as I couldn't get my Zip drive to run on 9.2 - the Iomega Zip drive wasn't even identified. I give below the entries I tried in /etc/fstab, with the alternatives I cycled through shown in brackets: --- /dev/hdb [or /dev/hdb4] /media/zip subfs [or auto] noauto,fs=floppyfss [or nothing],procuid,exec,user [or nouser],dev [or nodev],rw ,,, I think you'll agree I tried all reasonable combinations. Some of them merely echoed SUSE's entries for /dev/fd0 (floppy disk). I liked everything else about 9.2 and so am disappointed not to be able to use it, but my Zip disks are my main archive at the moment, and contain a lot of data. Thanks for your time and attention! I do hope you can help. A:: You should start by verifying that the Zip drive actually exists by running dmesg. This will output a whole slew of information, which should hopefully include the IDE devices located during the boot process. Assuming the device really exists on /dev/hdb, you need to mount /dev/hdb4, which can be done manually with the following: --- mount -t vfat /dev/hdb4 /media/zip ,,, If this fails to mount the Zip drive, the error output should indicate what causes the problem fairly quickly. Should it work, you can add it to fstab with the following: --- /dev/hdb4 /media/zip auto noauto,user 0 0 ,,, You can then manually mount the device with the command: --- mount /media/zip ,,, Good luck!
Back to the list ****** Setting up VNC for friends Q:: I have been trying to set up a Linux box running Fedora for my friends to play with. They're mostly Windows guys and don't have a lot of command-line experience, so I'm trying to help them into the wonderful world of open source by setting up VNC on the Linux box so they can log in and play around in a separate X session. I have created a script called vnclogon to start a VNC server session but am having a lot of trouble making it work. The script is as follows: --- #!/bin/bash echo "Hello There "$USER echo "You are about to run the VNC server service." echo --n "Do you want to continue...(Y/N)" read Decision if [ "$Decision" = "Y" ]; then echo "Starting your VNC Session." echo "Please wait..." vncserver :1 -name $USER >/dev/null 2&>1 echo "VNC Session Loaded!" else echo "Then why did you run the script?" fi ,,, The strange thing is that the script seems to execute correctly, but when someone tries to connect using a VNC viewer they get the following error: 'Unable to connect to host: Connection refused (10061)'. Even stranger is the fact that when I try to kill the VNC session by using the vncserver -kill :1 command, I get the following error: 'Killing Xvnc process ID 4790, Kill 4790: No such process'. Yet when I run the VNC server manually, I manage to connect. Please help me make this work. A:: You seem to have a nice script going and I admit that I was a bit puzzled by the error you got when you tried to kill the VNC session generated by the script. It appears that you've tried to suppress the output generated by VNC when a server session is launched by redirecting it to /dev/null. The mistake is in the syntax of the command - instead of >/dev/null 2&>1, you should have typed >/dev/null 2>&1. The 2>&1 is actually a neat piece of code which is used to send standard error to the same place as the standard output.
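The difference is easy to demonstrate at a shell prompt. This is just a sketch - any command that writes to standard error will do, and the path here is deliberately one that doesn't exist:

```shell
# With the redirections in the right order, stdout goes to
# /dev/null first, then stderr (2) is pointed at the same place
# stdout (1) currently points - so both streams are silenced.
ls /nonexistent-path >/dev/null 2>&1

# With 2>&1 first, stderr is pointed at wherever stdout points
# at that moment (the terminal), and only then is stdout
# silenced - so the error message still appears.
ls /nonexistent-path 2>&1 >/dev/null
```

Applied to the script, the vncserver line should therefore end with >/dev/null 2>&1.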
You've sent your standard output (1) to /dev/null, and so standard error (2) also goes to /dev/null. The & in 2>&1 simply tells the shell that the 1 is a file descriptor (standard output) rather than a file named '1' - it has nothing to do with putting a job in the background. Note, too, that the order matters: the redirection to /dev/null must come before the 2>&1. All in all the script seems quite good, and I believe that this correction should help solve your problem and allow your friends to make better acquaintance with Fedora's X front-end. Back to the list ****** How to encrypt a Linux filesystem Q:: I am keen to test Linux and eventually would like to migrate to it from Windows XP. Before I do that though, I really need to know if Linux has the following security features, which the Windows XP program DriveCrypt has. I am currently able to encrypt (AES 256-bit) the entire Windows XP operating system before it is booted, entering my password at MBR boot stage. This uses the DriveCrypt Plus Pack program at www.drivecrypt.com, and requires a two-line password. I can encrypt a separate data partition with 1,344-bit Triple Blowfish encryption, and in addition to four-line password entry, I can use a fingerprint sensor to keep my data secure (I'm especially keen to keep this feature as it is so cool). Lastly, I am able to image my Windows OS regularly using Acronis's True Image software. Would I be able to do all of the above with Linux, using separate open source programs to achieve the same end results? A:: Encrypted filesystems for Linux do exist, including CFS and TCFS, both of which provide an encrypted layer for any block device. These systems are designed mainly to encrypt specific filesystems running under Linux; however, the 2.6 kernel supports cryptoloop filesystems, which allow any cipher known to the kernel to encrypt the filesystem. You may want to review the documentation at http://linuxfromscratch.org/~devine/erfs-howto.html. CFS/TCFS will not work with external sensors such as your fingerprint reader, but you can generate an encryption seed of any length. There is a wide range of algorithms to choose from, although AES is probably the best choice. As for imaging, the dd command or a tool such as partimage will duplicate whole partitions in much the same way as True Image.
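As a sketch of what a cryptoloop setup looks like on a 2.6 kernel - the partition, loop device and mount point here are purely illustrative, the commands need root, and mkfs will destroy any existing data on the partition:

```
# load the cryptoloop layer and the AES cipher module
modprobe cryptoloop
modprobe aes
# attach the partition to a loop device with AES encryption;
# losetup will prompt for a passphrase
losetup -e aes /dev/loop0 /dev/hda3
# create a filesystem on the encrypted loop device and mount it
mkfs -t ext3 /dev/loop0
mount /dev/loop0 /mnt/secure
# when finished: umount /mnt/secure && losetup -d /dev/loop0
```

The same passphrase must be supplied to losetup every time the partition is attached; get it wrong and the mount will simply fail with a garbage filesystem.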
Back to the list ****** Video mode problem: blank screen and mouse cursor Q:: I wonder if you could point me in the right direction to seek appropriate support? I've just bought a new Evesham PC, and wanted to explore whether I could move away from Windows, so I ran SUSE 9.2 from your DVD (having initially partitioned my HD and created a Linux partition and swap partition). I followed the guidance for installing the 32-bit version and have now tried to install it perhaps a dozen times. Most times it freezes completely at some point; but even when it went the whole way I got no display (just a blank screen and the X mouse cursor). I think I've narrowed this down to a problem with the monitor - a Viewsonic VX912. A friend advised me to try installing in text mode by choosing runlevel 3. This has worked, but isn't very glamorous! Any ideas about how I can resolve this problem or where I'd go for support? A:: If the system locks up during install, it is likely to be because of a hardware conflict or a kernel issue with the specification of your system. I'd recommend installing an alternative distro as a test, to verify whether the problem is specific to SUSE or is triggered by other hardware features, such as SATA, USB2 or expansion cards in your system. The monitor should not cause problems, as all it will do is advertise its model information to the system so that the appropriate hsync/vsync values can be configured. The monitor definitely won't make the system lock up or otherwise refuse to install. It may be an issue with your video card, although you didn't say what card you have. You may also want to try Knoppix, which will boot directly from CD-ROM and will attempt to automatically detect all of your hardware devices. This should be a good indicator of anything that may have problems with other Linux distributions.
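If the installer keeps freezing, it is also worth trying the usual kernel workarounds at the SUSE boot prompt. These are generic kernel options (disabling ACPI and the IO-APIC respectively), not a guaranteed fix, so try them one at a time:

```
linux acpi=off
linux noapic
```

If one of these gets you through a clean install, add the same option to your boot loader configuration afterwards.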
Of course, we're assuming that Windows XP ran stably on the same hardware - otherwise you'll want to get in touch with Evesham and find out if there is a hardware problem with your box. Back to the list ****** X.org configuration is wrong - virtual screen size problems Q:: I am having a very hard time with the new X.org, where I previously had none. I have installed Slackware 10 and BSD 5.3 with an NVIDIA card on the same machine and am having the same problems with both. One of these problems is with xvidtune. I started it up in a small X screen in a terminal and adjusted the screen sizes, but it wouldn't write the changes anywhere that I could find them. I finally got the modes and screen sizes right by writing them down and editing the xorg.conf file. For some reason the xvidtune changes work, but the resolution starts up wrong, and I have to change it once KDE is up and running to get rid of that blasted virtual screen size. startx still doesn't work in either system. I can start kdm in BSD and select a few different desktops, but Slackware only goes to a pre-chosen default. In my older Slackware I could choose which desktop I wanted right from startx. Right now with Slackware - installed from the store-bought disks - the KDE splash screen and a few announcements about 'no sound' come up, but no toolbar or anything that works. I got Gnome up but the file manager doesn't work in it. I installed everything to a disk which had plenty of space available, and it seemed to go OK. So where are all the possible configuration files for X.org, and what should they look like? How do I get xvidtune to work, and is there any way to get rid of that irritating sliding screen and pointer thing so I can change resolutions and keep the whole desktop on the monitor? A:: Your X.org configuration should live in /etc/X11/xorg.conf, and will include a section defining how to handle your screen.
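A typical Screen section looks something like this - the identifiers, depth and mode list are illustrative, so match them to what is already in your file:

```
Section "Screen"
    Identifier   "Screen0"
    Device       "Card0"
    Monitor      "Monitor0"
    DefaultDepth 24
    SubSection "Display"
        Depth 24
        # the first mode is used at startup; the largest mode
        # sets the virtual desktop size unless overridden
        Modes "1280x1024" "1024x768" "800x600"
    EndSubSection
EndSection
```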
Within this, there will be a list of possible resolutions that your monitor and video card can handle. X will set the root window to be as large as the maximum resolution, but will use the first one in the list, which may be smaller. You can use Ctrl+Alt+(number pad)+ and Ctrl+Alt+(number pad)- to switch resolutions without restarting X. By changing the order of the resolutions, or simply taking out the ones you don't want, you should be able to end up with a desktop that doesn't scroll around any more. If startx fails, you will see a dump of all the log output from the X server; this should give you a clue as to what is going on, and in turn a good indicator of what your problem is. Slackware is a good distribution, but for a newer Linux user something like Mandriva or Fedora would be a preferred option. These should install and get everything up and running for you without having to fuss around with manually editing your X configuration. Back to the list ****** Putting Linux on an old laptop Q:: Hope you can help! I have a laptop which is not being used, but I would like to replace Win98 with Linux. It is a Sony PCG-745 with a Pentium 266MHz with MMX, 128MB of memory and a 3GB drive. I have unzipped SUSE 9.1. I'm unable to find out how to get rid of Win98 altogether and totally replace it with Linux. The laptop has a CD drive and floppy drive, but no DVD drive. Any help with this would be most welcome. A:: With such a configuration you may need to be a bit careful about what software you install. It isn't a huge amount of memory, and not exactly a cavernous hard drive, so you may not want to install all of KDE 3.3, OpenOffice.org et al. Having said that, though, pretty much all the current distros will install on that equipment. The trick to removing Windows is simply to delete all the partitions on the disk during the Linux install.
Some of the new distros may not let you do this by default as a precaution, but there will usually be an 'Expert' or 'Custom' option when partitioning the hard disk. Simply delete all the partitions before creating new Linux partitions. On this limited drive, I would recommend a 256MB swap partition and the rest formatted as one '/' partition. Back to the list