Linux Gazette... making Linux just a little more fun! Copyright © 1996-98 Specialized Systems Consultants, Inc. _________________________________________________________________ Welcome to Linux Gazette! (tm) _________________________________________________________________ Published by: Linux Journal _________________________________________________________________ Sponsored by: InfoMagic S.u.S.E. Red Hat LinuxMall Linux Resources Mozilla cyclades Our sponsors make financial contributions toward the costs of publishing Linux Gazette. If you would like to become a sponsor of LG, e-mail us at sponsor@ssc.com. Linux Gazette is a non-commercial, freely available publication and will remain that way. Show your support by using the products of our sponsors and publisher. _________________________________________________________________ Table of Contents September 1998 Issue #32 _________________________________________________________________ * The Front Page * The MailBag + Help Wanted + General Mail * More 2 Cent Tips + 2 Cent Tip from the 'Muse + Tips and Tricks: Keeping track of your config files + 2 cent tip: Cross platform text conversion. + XFree86 and the S3ViRGE GX2 chipset + Clearing the Screen + Re: Shell Scripting Resources Re: Recognising the AMD K5-PR166 Your atapi CDROM Tips: simulataneous kernel versions Creating man pages made easy! 2c Tip Re: Cross-platform Text Conversions Un-tar as you download megaraid drivers Re: simultaneous versions of Kernels + News Bytes o News in General o Software Announcements + The Answer Guy, by James T. Dennis + A Convenient and Practical Approach to Backing Up Your Data, by Vincent Stemen + Graphics Muse, by Michael J. Hammel + Installing StarOffice 4.0 on Red Hat 5.1, by William Henning + An Interview with Linus Torvalds, by Alessandro Rubini + It Takes Its Toll, by Martin Vermeer + Java and Linux, by Shay Rojansky + Linux Installation Primer, by Ron Jenkins + Linux Kernel Compilation Benchmark, by William Henning + Linux Kernel Installation, by David A. Bandel + New Release Reviews, by Larry Ayers o Patch For Beginners o A Simple Typing Tutor + Open Source Developer Day, by Phil Hughes + Paradigm Shift, by Joe Barr + Running Remote X Sessions on Windows 95/98/NT/Mac/PPC Clients, by Ron Jenkins + Searching a Web Site with Linux, by Branden Williams + The Standard C Library for Linux, Part 3, by James M. Rogers + The Back Page o About This Month's Authors o Not Linux The Answer Guy _________________________________________________________________ TWDT 1 (text) TWDT 2 (HTML) are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version. _________________________________________________________________ Got any great ideas for improvements? Send your comments, criticisms, suggestions and ideas. _________________________________________________________________ This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ The Mailbag! 
Write the Gazette at gazette@ssc.com Contents: * Help Wanted -- Article Ideas * General Mail _________________________________________________________________ Help Wanted -- Article Ideas _________________________________________________________________ Date: Sat, 1 Aug 1998 09:50:38 -0500 From: The Wonus House, wonus@w-link.net Subject: Accessing Microsoft SqlServer vi DB-lib or CT-lib Do you have additional information sources on connecting to MS SqlServer via the Sybase CT/DB libraries? I am most interseted in how this could be done from a Solaris client machine. Any info is greatly appreciated (thanks), Kevin Wonus _________________________________________________________________ Date: Sun, 2 Aug 1998 21:55:03 -0400 (EDT) From: Paul 'Tok' Kiela, tok@gemini.physics.mcmaster.ca Subject: R2000 Mips 2030 I just recently came into a used R2000 "Mips 2030" desktop slab. Aside from opening the box and finding that it is indeed running an R2000 CPU, I know nothing else about the computer -- literally. I have found absolutely zero information about any computer bearing the markings 'MIPS 2030'. To make matters worse, I don't have a proper BNC monitor to actually use the box yet, but I'm searching. My question, where can I find information about the R2000 port of Linux? I have visited the Linux/MIPS page, but the only mention of the R2000/3000 CPU port is an URL which points at SGI's statistics on the R3000 CPU. I was hoping I could pop Linux on on the box, and happily run it alongside the little army of Linux boxen I have now. Any help would be very appreciated. Thanks. Paul. _________________________________________________________________ Date: Thu, 06 Aug 1998 12:57:21 +0000 From: Gulf Resources Co, grc2000@kuwait.net Subject: Some Ideas Anyone there who is dreaming of running Delphi in Linux? _________________________________________________________________ Date: Wed, 12 Aug 1998 09:21:12 +0200 From: Jesus A. Muqoz, jesus.munozh@mad.sener.es Subject: LILO Problems I installed Linux in a secondary IDE hard disk booting from a floppy disk. Then I tried to install LILO in the MBR of the primary IDE hard disk and I did it. My idea was to maintain Windows 95 in the primary disk. I configured LILO to be able to start Windows 95, but after installing LILO the primary disk cannot be seen either by DOS nor by Linux. If I run msdos-fdisk it says that the disk is active but in the row where it should appear FAT16 puts unknown or something like that. Can I recover the information of the hard disk ? _________________________________________________________________ Date: Tue, 11 Aug 1998 09:17:54 +0300 From: Mehmet Mersinligil, memo@tr-net.net.tr Subject: Matrox Productiva G100 8M AGP !!??? Is there a way to configure my Matrox Productiva G100 8MB AGP under X? Except buying a a new accelerated X server for 125$ from http://www.xig.com ? What should I do? _________________________________________________________________ Date: Sat, 08 Aug 1998 18:21:54 +0000 From: Alexander I. Butenko, alexb@megastyle.com Subject: Some questions to be published 1. I wonder can I use the EPSON Stylus Color 400 printer with Linux... The interesting thing is that my buggy GIMP beta says to be supporting it but can't really print anything.... 2. Has anybody encountered such a bug? JavaICQ doesn't run properly under KDE (when I open the send or reply or even preferences window this window closes immediately). This problem is only under KDE... 3. 
I can't use this Real Player 5.0, because it reports the compression errors even with files obtained from www.real.com or that file that was installed with it on my hard drive... _________________________________________________________________ Date: Thu, 06 Aug 1998 15:33:37 -0400 From: Bob Brinkmann, Bob.Brinkmann@mindspring.com Subject: Being new to the Linux community I'm in the process of developing a secure, encrypted tunnel for access to my company's enterprise network. The clients on the outside dialing into the system will be of a Windows 95, 98, NT 4.0 and probably 5.0 when it decides to rear its ugly head. My question is this, are there solutions on the terminating server side written in Linux to handle clients' tunnel access and also provide for IPSEC level encryption? A while back I played with Red Hat's 2.0 release of the software and I just purchased Red Hat's 5.1 (Manhattan) version utilizing 2.0.34 kernel and find it to run quite nicely on both a desktop and several Toshiba portables. Thanks for any advice or information you can provide. Bob Brinkmann _________________________________________________________________ Date: Wed, 12 Aug 1998 13:07:22 -0500 From: Dennis Lambert, opk@worldnet.att.net Subject: Help Wanted : newbie I recently purchased Red Hat 5.1 and got it running. Evidently I was lucky in that I have a fairly full FAT 32 Win 98 drive and kind of stumbled through the defrag / fips / boot to CD / repartition / full install with LILO process. Everything worked, but I'm a little nonplussed. A few topics I'd absolutely love to get feedback on... * Turns out I have a lousy WinModem. I can see the feedback now, (Run it over with your car) * I have grown fat and lazy with Win 98 and find myself looking for "Display Properties" and such. I'm very familiar with C and such and am not afraid of hacking scripts or the like, but my problem is thus: Where is a (succinct) list of what gets run when, from where, and why. I'd love to tweak everything if only I could find it. * I have something called an "Ensoniq Audio PCI" sound card with "legacy emulation" I don't even know how to begin to get this thing working. What are the first steps in enabling hardware? * Where do I get information on mounting drives (FAT 32 especially) * I think my printer works (at least text does), but how do I print things (man pages) I'm not an idiot, not even a "dummy", but what is a good book to answer the basic questions? I have "Linux in a Nutshell" and it has a very good command reference and a few other things, but doesn't help in tweaking things. I don't really expect anyone to answer all of these concerns, but any little help would be greatly appreciated. Dennis Lambert _________________________________________________________________ Date: Mon, 17 Aug 1998 16:54:20 +0100 From: Fabrice_NORKA_-_SAPHIR@PECHINEY.COM Subject: Deb to RPM translator I changed from a Debian distribution to a Red Hat 5.0 lately and was wandering if there were a tool like 'alien' to convert Debian packages to Red Hat packages. My personal e-mail is NORKAF@AOL.com Thank you and God save Linux community :-) _________________________________________________________________ Date: Mon, 17 Aug 1998 13:18:31 -0400 From: Chris Bruner, cbruner@compulife.com Subject: Idea's for improvments and articals An idea for an article. (You may have already done this but I couldn't find a search engine to look up past articles). I have yet to get my Red Hat 5.1 to connect to the Internet. (Their support is GREATLY overstated.) 
I'm consquently using Win95 to do my Internet work. The reason for this is that my modem, network adapter and sound card are all Plug and Play (PnP). I would like to see an article detailing step by step, for a Linux beginner, how to install Tom Lee's PnP Package. This would involve recompiling the kernel which I'm not afraid of, but have no idea how to go about it. The more step by step the better. I'm from the DOS world and any assumed knowledge that I have might be wrong. Thanks for a great magazine. Chris Bruner _________________________________________________________________ Date: Tue, 18 Aug 1998 21:23:27 +1200 From: Andrew Gates, andrewga@fcf.co.nz Subject: Help wanted for a (Cheap) COBOL combiler for Linux I have a friend who is doing a refresher course in Cobol in a Unix environment. I have suggested that she run Linux, and pick up a cheap / shareware copy of a Cobol compiler for Linux from somewhere. Knowing absolutely nothing about either Linux or Cobol, am I dreaming, or is there a realistic alternative to the compilers I have seen retailing for ~$1,500 US? I'd really appreciate any help/advice anyone can offer. Andrew Gates _________________________________________________________________ Date: Wed, 19 Aug 1998 18:37:34 +0200 From: ppali@friko6.onet.pl Subject: RadioAktiv radio tuner I am one of those Linux users, who are not experts, even after a year or more of working with the OS. I like very much discovering by myself various aspects of Linux, trying out the many programs and help tips. What is important is that it works well and that I can use it for most of the common computer tasks (after a bit of tinkering). Now I have decided for the first time to post a following question: After trying many radio tuners available on the net and failing to make my RadioAktiv radio card work under Linux I am stuck. Maybe someone would give me a few tips (or one TIP)? _________________________________________________________________ Date: Tue, 25 Aug 1998 15:59:21 -0500 From: Hilton, Bradley D. (Brad), HiltonBD@bv.com Subject: Trident 985 AGP Is it possible to get X running on a Trident 985 AGP video card? What server would I use? Thanks, Brad Hilton _________________________________________________________________ Date: Mon, 24 Aug 1998 17:19:21 -0700 From: dk smith, dks@MediaWeb.com Subject: IDE disks If I could only find a definitive reference on setting up IDE disks, SCSI disks, and partitioning issue for running with Linux, NT, and LILO. I am new to this stuff. The docs at Red Hat, although extensive, were not enough for me. -dk _________________________________________________________________ Date: Thu, 27 Aug 1998 16:05:15 +1200 From: Mark Inder, mark@tts.co.nz Subject: Help Wanted: Looking for an Xwin Server software that runs under win95/nt We use a Red Hat 4.2 machine in our office as a communications server. This is running well with the facility of telnet connections for maintenance, diald for PPP dial up - internet and email, and uucp for incoming mail. I would like to run an X server on my windows PC to be able to use X client software on the Linux PC over the local Ethernet. Does anyone know of a shareware for freeware version which is available. TIA Mark _________________________________________________________________ Date: Thu, 27 Aug 1998 00:02:28 -0500 From: Todd Thalken, tdthalk@megavision.com Subject: Looking Into Linux For PPP Server I am interested in implementing Linux in our office network. 
Specifically, we would like to set up a Linux box as a dial-up PPP server so that remote users can access the office intranet. Could you explain what hardware (multiport controllers) works best with Linux, and explain the steps necessary to set the Linux box up as a PPP server. Most of our client computers will be using Windows 95/98 dial-up networking. We would like to have the server assign IP addresses dynamically. This seems like it would be a relatively common question, so if there is already good information available please let me know where I can find it. I have read a lot about Linux, but still consider myself a green "newbie". Thanks! Todd Thalken _________________________________________________________________ General Mail _________________________________________________________________ Date: Sun, 02 Aug 1998 18:25:16 +0000 From: Gulf Resources Co, grc2000@kuwait.net Subject: Delphi for Linux I am a Delphi Developer. I am also a big fan of Linux and GNU Softwares. Anybody there who wants to join me in knocking the doors of Inprise Corp (Borland ) to convince them to port C++ Builder and Delphi to X Window. If these things happen, Microsoft will be very upset. What Linux needs is an innovative company like Borland or Symantec. _________________________________________________________________ Date: Mon, 03 Aug 1998 02:16:54 +0100 (BST) From: Hugo Rabson, hugo@rabson.force9.co.uk Subject: response to Ruth Milne I tell you, ..... Are you familiar with Nietzsche's description of the ordinary man's journey from man to superman? ...how he "goes down" into the abyss and comes up the other side? Moving from Windows to Linux is a bit like that. ;) My adventure started in late April. I was sick and tired of Windows NT bluescreening. I read an article saying how stable Linux was in comparison. I looked into GUIs & found KDE to be to my liking. In the end, I vaped NT because I needed the hard disk space. ;-P It is now August. I have had ro reinstall almost a dozen times because I am still getting used to "The Linux Way". I have been using computers since i was 6; PCs since I was 16; Windows since I was 18. Linux is very stable indeed but it is eccentric & definitely not user-friendly, unless your definition of user differs wildly from mine. I have written a "HOWTO" so that I can recover quickly if I have to reinstall the entire OS and GUI. It is currently 3500 words long, and tells me how to install RedHat, compile a new kernel, compile&install KDE 1.0, install the BackUPS software, configure dial-up networking & autodial, install AutoRPM, and .. umm... that's it, so far. Don't get me wrong: Linux _is_ a wonderful thing. It's just ... It's _such_ a leap from Windows! I am convinced my primary client (with a dozen Windows machines) could function very well with Linux & Applixware instead of Windows & Office, just so long as they have someone competent to maintain their systems. Of course, they'll need much less maintenance under Linux than under Windows ;) Linux requires a lot of competence & intelligence (and downloads!) if you're going to set it up. Windows doesn't. On the other hand, it seems much less prone to these embarrassing GPFs. :) Hugo Rabson _________________________________________________________________ Date: Thu, 6 Aug 1998 13:01:42 +0200 (CEST) From: Hugo van der Kooij, hvdkooij@caiw.nl Subject: Linux Gazette should not use abusive language! This is my final note to you about this subject. I have not heard nor seen a single response in the past regarding this issue. 
I will however request mirror sites to stop mirroring unless you remove your abusive language from the Linux Gazette. The following text should be removed from ALL issue's: The Whole Damn Thing 1 (text) The Whole Damn Thing 2 (HTML) I presume I am not the only person that find this text not at all suited for a Linux publication. It is in effect offensive and could easily be removed Hugo van der Kooij Actually, I have answered you at least twice about this issue. I don't find the word Damn either abusive or offensive and have had no objections from anyone else. So, why don't we put it to a vote? Okay, you guys out there, let me know your feelings about this. Should I remove the "Damn" from "The Whole Damn Thing" or not? I will abide by the majority. --Editor) _________________________________________________________________ Date: Sun, 09 Aug 1998 15:48:34 -0600 From: Mark Bolzern, Mark@LinuxMall.com Subject: Some History and Other Things LG #31 http://www.linuxgazette.com/issue31/richardson.html Marjorie, Neat issue of the Gazette, thanks for all the hard work. I'm proud to be a sponsor, Just sent another $1K. One little teensie issue of fact though: First the quote: The first two issues of Linux Journal were published by Robert Young. After the second issue, Robert decided to start up Red Hat Software, and Specialized Systems Consultants took over as publisher. Also with the third issue, Michael Johnson took on the role of Editor and continued in that role through the September 1996 issue. I became Editor on February 1, 1997 and began work on the May issue. And the correction: Actually Bob (Robert) started a Linux catalog within the ACC Bookstores. It wasn't until quite a bit later when he met Marc Ewing that he folded ACC into Marc's Red Hat Software. I wuz there ;-> Thanks Mark _________________________________________________________________ Date: Sat, 8 Aug 1998 15:06:37 -0700 (PDT) From: Heather Stern, star@starshine.org Subject: Re: those crazy links On http://www.linuxgazette.com/issue31/tag_95slow.html, the link pointing back to the table of contents points to lg_toc30.html instead of lg_toc31.html. No, wait... all of the issue 31 answer guy pages seem to be mislinked (except the main one.) Also, the "previous section" button on the pages mentioned above seem to be mislinked as well... This isn't really important since most normal people like me use the back button like a religion, but it always helps to be consistent and have links pointing where they should, doesn't it? :) Yes, actually, it *is* important to me, and the base files are mostly generated by a script (making it hard to get wrong). But, I broke some stuff in the footer logic, so I did a proper footer by hand and propoagted it into the tag_ files myself. So, as I go look at the template I used... Dad-blammit, you're right!! All of the 30's in there should be 31's. (Although the copyright notice is correct.) Mea culpa! Thanks for your time... --Charles Ulrich. p.s. May be worth the effort to try one of those link checker bots that seem ever so popular on the web these days... Maybe, but it's being worked on in a private Linux network. Most of those "web bots" only access external sites properly. I should have run the command 'lynx -traversal' at the top of it, so I'd have a badlink report, but I was in a last-minute rush. I've done so now, and found another error that you missed. One of the beautiful things about the web, is that a minor misprint can actually be undone, unlike the world of print. 
I've submitted a corrected packet to our editor. Thanks for mentioning it. -*- Heather Stern -*- _________________________________________________________________ Date: Fri, 7 Aug 1998 11:13:47 +0000 From: kengu@credo.ie Subject: news from Irish LUG Hello, I'm involved with the Irish Linux Users Group website and was wondering if you would please mention that we are currently compiling a list of people in Ireland that would be interested in getting the 'Linux Journal' - details are available at our website http://www.linux.ie/. thanks Ken Guest _________________________________________________________________ Date: Thu, 06 Aug 1998 17:30:55 +0100 From: James Mitchell, james-t.mitchell@sbil.co.uk Subject: Re: The other side of the story (or, on the other, other hand) Just before I launch into the meat of this email, I'd like to say that the Linux Gazette is excellent. good articles, and good tips and comments. I'm writing about the mail in the August issue "The Other Side of the Story", in which Antony Chesser compares the Windows GUI to the shell prompt, especially the line "When Linux finishes installing, you're left with a # prompt. When WIN95 finishes installing, you've a fairly intuitive GUI that allows you to quickly and easily install and run programs, connect to the net, and **apply updates without re-compiling the kernel**" My quibble is with the underlying assumption that a GUI (and here I assume that includes Mac, and X, as well as Windows) is more intuitive then a command line. I argue that for a complete novice one is as bad as the other, neither a command line nor a screen full of little coloured icons and a START button are instantly comprehensible to a complete computer novice. (Before you write me off as insane - remember that a GUI is supposed to shorten the time it takes to learn how to operate the computer, they don't eliminate the time altogether.) Do you remember the scene in the Star Trek movie (the one with the whales...) where Scotty tries to use a Mac? He talks to it, and nothing happens... the operator says "You need to use this [the mouse]", so Scotty picks up the mouse and uses it like a microphone - "Good morning computer." Can you see where I'm going? Until someone teaches the "complete novice" the relationship between the pointer and the mouse, and what happens when you click, double-click, or drag with the mouse, they will be just as lost as a novice sitting in front of a command line. Actually, they may be worse off... we have had typewriters for a lot longer then mice, and people will grasp the concept of typing faster then clicking on pictures. So, in summary, I think that a complete novice will have a learning curve to cope with whether they use a GUI, or a command line; and the rest of us should remember that there is a difference between "ease of use", and "what I'm used to". Cheers, James _________________________________________________________________ Date: Thu, 06 Aug 1998 09:22:46 EDT From: Roger Dingledine, arma@seul.org Subject: Linux News Standardization/Distribution Project We've been making progress on our proposal to standardize the format and distribution of Linux news. Our design uses the NNTP protocol to create a network of servers that will quickly and robustly share news that is interesting to the Linux community. This will allow websites like Freshmeat and Slashdot, as well as lists like Threepoint's and linux-announce, to reduce duplication of effort while still customizing their presentation. 
In addition, this will provide a single easy method of submitting an item of news, whether it's an announcement about a new software release, or a description of the latest article in Forbes magazine. The end goal of organizing the Linux announcements and news articles is to encourage smaller ISVs to port to Linux, since they will see advertising their software to a wide audience as less of an obstacle. Other important benefits include greater robustness (from multiple news servers), less work for the moderators (messages will be presorted and people can specialize in their favorite type of news, resulting in faster throughput), and a uniform comprehensive archiving system allowing people to search old articles more effectively. We are currently at the point where we are designing the standard format for a news item. We want to make it rich enough that it provides all the information that each site wants, but simple enough that we can require submissions to include all fields. At the same time we're sorting out how the NNTP-based connections between the servers should work. We've got Freshmeat and Threepoint in on it, and other groups like Debian and LinuxMall are interested. We need more news sites to provide input and feedback, to make sure everybody will want to use the system once it's ready. If you're interested, please check out our webpage at http://linuxunited.org/projects/news/ and subscribe to the mailing list (send mail to majordomo@linuxunited.org with body 'subscribe lu-news'). Thanks for your time (this is the last mail I will send directly about this), Roger (SEUL sysarch) _________________________________________________________________ Date: Fri, 14 Aug 1998 23:49:36 +0200 From: Martin Møller, martin_moeller@technologist.com Subject: Linux Gazette to be featured on Alt Om Data's CD-ROM monthly. This is just to inform you that some of our readers have pointed out that we ought to distribute your magazine on our Cover CD, and after having read through the lisence, I believe this will be no problem. I have, just to be safe, saved a copy of the copy lisence together with the archives and plan on distributing the new issues as the show up. Keep up the good work! Martin Moeller. _________________________________________________________________ Date: Fri, 21 Aug 1998 20:52:14 -0400 (EDT) From: Timothy D. Gray, timgray@lambdanet.com Subject: Linux reality letter In LG#31 Michael Rasmusson wrote: "the majority of Linux users are IT professionals in some way" and alluded to the fact that Linux will be slow to be accepted due to this fact. This is very untrue. Most Linux users are in fact College and high school students. These forward thinking young minds aren't tied down by archaic IT department policy (many of which were penned in the 70's when IT was called the Processing/programming/systems/data-processing department) Linux will explode, it will do so violently. In fact it will explode so fast and vast that Microsoft will say "What happened?" The local Linux Users groups are all populated by 90% college and high school students. What do you think will happen when these students hit the computer departments at large corporations? They will install Linux, they will use Linux, and they will recommend Linux. The "explosion" has already started. Many large companies have already abandoned NOVELL and Microsoft for their servers. 
(The makers of the CG effects in the movie Titanic are far from small.)
_________________________________________________________________

Date: Wed, 26 Aug 1998 03:37:22 -0400 (EDT)
From: Paul Anderson, paul@geeky1.ebtech.net
Subject: Linux and new users

I've been reading the LG mailbag... A lot of people think Linux should be made easier to use. I don't think that's quite right -- the idea, IMHO, should be to make it so that Linux can be used by someone who's new to computers, BUT they should have to learn to use its full power. With power, knowledge must come or disaster will follow instead. The goal, in the end, is that the person becomes a self-sufficient user, capable of sorting out most difficulties without needing help. TTYL!

Paul Anderson
_________________________________________________________________

Published in Linux Gazette Issue 32, September 1998
_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Next

This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.
_________________________________________________________________

"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________

More 2¢ Tips!

Send Linux Tips and Tricks to gazette@ssc.com
_________________________________________________________________

Contents:
* 2 Cent Tip from the 'Muse
* Tips and Tricks: Keeping track of your config files
* 2 cent tip: Cross platform text conversion.
* XFree86 and the S3ViRGE GX2 chipset
* Clearing the Screen
* Re: Shell Scripting Resources
* Re: Recognising the AMD K5-PR166
* Your atapi CDROM
* Tips: simultaneous kernel versions
* Creating man pages made easy!
* 2c Tip Re: Cross-platform Text Conversions
* Un-tar as you download
* megaraid drivers
* Re: simultaneous versions of Kernels
_____________________________________________________________

2 Cent Tip from the 'Muse

Date: Fri, 28 Aug 1998 00:13:07 -0600 (MDT)
From: Michael J. Hammel, mjhammel@fastlane.net

You know, I don't think anyone's mentioned it before in the Gazette, but there is this little program that is handy as all get out: units. You give it the units you have and specify what you want it converted to, and voila! It converts it for you! It won't do Celsius/Fahrenheit conversions, but handles grams/pounds conversions just fine. And for all those Linux cooks out there, it converts cups to quarts, teaspoons to tablespoons and cups to tablespoons. It's the units freak's Swiss Army knife. No hacker forced to make his own Thai curries should be without it.

Michael J. Hammel
_____________________________________________________________

Tips and Tricks: Keeping track of your config files

Date: Mon, 03 Aug 1998 11:00:16 +1200
From: Ryurick M. Hristev, physrmh@phys.canterbury.ac.nz

This is my trick for keeping track of the many config files you find on a Linux/Unix system. Most config files are in the /etc directory. However, particularly on a home machine, you won't change them all, and sometimes you want to save (e.g. on a floppy) only the files you have changed. Besides, you don't want to have to remember the exact location of every one. So here's what I do:

+ create a /root/config directory
+ each changed config file for whatever program gets a symlink in /root/config

Then every time I want to change something I go directly to /root/config. If I want to back up my system configuration, I just copy the files by dereferencing the symlinks, etc. ...
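For instance, a quick sketch of that workflow (the particular files and the floppy mount point are only examples -- use whatever you have actually changed):

mkdir /root/config
ln -s /etc/lilo.conf /etc/fstab /etc/X11/XF86Config /root/config

# later, to back up the real files (cp follows the symlinks by default):
cp /root/config/* /mnt/floppy

# or keep a dereferenced archive instead; tar's -h option follows symlinks:
tar -chzf /root/config-backup.tar.gz -C /root config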
Cheers,
Ryurick M. Hristev
_____________________________________________________________

2 cent tip: Cross platform text conversion.

Date: Thu, 06 Aug 1998 13:07:59 -0400
From: Jim Hefferon, jim@joshua.smcvt.edu

To convert to a DOS text file, mount a DOS floppy and copy the text file.

$ su
(you are prompted for a password)
# mount /dev/fd0 -t msdos /mnt/floppy
(the # says that you are root -- BE CAREFUL!)
# cp myfile.tex /mnt/floppy
# exit
$

For instance, after these, I can use SAMBA to get myfile.tex to an NT network printer (Z:> copy \\mymachinename\mnt\floppy\myfile.tex lpt2). If you do this often, it makes sense to have a DOS disk always mounted, but if you mount as above, remember to umount before you try, say, mounting a different floppy. I find this easier than a solution with the tr command, because I always forget how to do such solutions, but I can remember how to copy.

Jim Hefferon
_____________________________________________________________

XFree86 and the S3ViRGE GX2 chipset

Date: Wed, 05 Aug 1998 16:51:53 -0500
From: Ti Leggett, tlegget@mailhost.tcs.tulane.edu

At work, we just got in a whole slew of computers that use the S3ViRGE GX2 chipset. Upon trying to install X on these things, I found that the default Red Hat 5.0 XFree86 doesn't cut it. This is how I've been able to fix the XFree86 problems with the S3V GX/2 chipset.

First, do not use the S3V server despite what Xconfigurator says. The GX/2 chipset is not supported by that server. You must use the SVGA server (besides, it's accelerated and supports DPMS). Currently, these are the modes supported as of XFree86-3.3.2pl3:

8bpp:     640x480 works, 800x600 works, 1024x768 works, 1280x1024 works
15/16bpp: 640x480 works, 800x600 works, 1024x768 works, 1280x1024 works
24bpp:    640x480 works, 800x600 works, 1024x768 works, 1280x1024 works (very picky about monitor modelines though)
32bpp:    640x480 works, 800x600 works, 1024x768 does not work, 1280x1024 does not work

The card I'm using to test this is a #9 9FX Reality 334 w/8MB RAM. Also, I cannot verify that this works on any version less than XFree86-3.3.2pl2. pl2 actually has fewer mode/depth combinations that work -- for example, no 16-bit depths work and 1280x1024 doesn't work in almost all depths. I suggest upgrading to XFree86-3.3.2pl3. Now onto the fix.

Step 1. Make sure you're using the SVGA server (ls -l /etc/X11/X for RH users, maybe the same on other distros). It should point to /usr/X11R6/bin/XF86_SVGA. If it's not, link it to it (ln -sf /usr/X11R6/bin/XF86_SVGA /etc/X11/X).
Step 2. Open your /etc/X11/XF86Config file for editing.
Step 3. Find the Graphics Device section.
Step 4. Find the device that is the standard VGA device (it usually has the line: Identifier "Generic VGA").
Step 5. Remove the line that says: Chipset "generic"
Step 6. Uncomment the line that says: VideoRam 256 and change it to the amount of RAM your card has, in kilobytes: VideoRam 8192 (for 8MB of RAM).
Step 7. Add the following line (*CRUCIAL*): Option "xaa_no_color_exp" -- this turns off one of the acceleration options that gives trouble.
Step 8. Add whatever other options you want (for a list see the man pages for XF86Config, XF86_SVGA, and XF86_S3V).
Step 9. Change the bit depth and resolution to whatever you want.
Step 10. Save and close the file and (re)start X.

Note: I do not claim this will work for all cards using the GX2 chipset. I can only verify it for the video card I'm using. I'm interested to hear how other video cards handle it. Hope that helps everyone involved.
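For reference, here is a rough sketch of what the relevant Device section might look like after steps 4 through 7. This is only an illustration (the VendorName/BoardName values are placeholders for whatever your file already contains), and 8192 corresponds to the 8MB card mentioned above:

Section "Device"
    Identifier "Generic VGA"
    VendorName "Unknown"
    BoardName  "Unknown"
    # Chipset "generic"    <- this line removed (step 5)
    VideoRam   8192
    Option     "xaa_no_color_exp"
EndSection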
I've heard from people on Usenet that it works perfectly, and from others that it doesn't.

Ti Leggett
_____________________________________________________________

Clearing the Screen

Date: Wed, 05 Aug 1998 12:59:13 -0400
From: Allan Peda, allan@interport.net

A few days ago a classmate "accidentally" cat'ed a file to the screen. He asked me what he could do to reset his confused vt100, as "clear" wasn't sufficient. At first I figured he would need to close and re-open the connection, but then I realized that there are codes to reset a vt100. Here is some C code that resets and clears the screen. Save it as vt.C, then run "make vt". Place the executable in your path, and when the screen looks like hieroglyphics, type (blindly at this point) "vt". That should clear it up.

/*
** Small program to reset a confused vt100 after
** `cat'ing a binary file.
*/
#include <stdio.h>

int main(void)
{
    printf("\033c");     /* reset terminal */
    printf("\033[2J");   /* clear screen   */
    return 0;
}

/*
** For more info, see the following URLs:
**   www.mhri.edu.au/~pdb/dataformats/vt100.html
**   www.sdsu.edu/doc/texi/screen_10.html
**   www.cs.utk.edu/~shuford/terminal/ansi_x3.64.html
** They have more vt100 escape codes.
*/
_____________________________________________________________

Re: Shell Scripting Resources

Date: Wed, 5 Aug 1998 17:34:50 +0100 (BST)
From: Sean Kelly, S.Kelly@newcastle.ac.uk

In issue 31 it was mentioned that someone had been looking for some shell scripting help. Take a look at http://www.oase-shareware.org/shell/ as I have heard many people mention this site in response to shell scripting queries.

Sean.
_____________________________________________________________

Re: Recognising the AMD K5-PR166

Date: Wed, 05 Aug 1998 11:22:43 -0400
From: Shane Kerr, kerr@wizard.net

I'm wondering whether any other readers have used the AMD K5-PR166 with Linux. It's just that my system seems to think it's a K5-PR133 and states that it's running at 100MHz. Also, the BogoMips value indicates that the processor is running at 100MHz. Anyone any advice?

I'm running a K5 P133+ on one of my systems -- it actually is running at 100 MHz; that's why it's a "P133+". It's like the Cyrix processors: the name is basically a lie for marketing. I wouldn't put too much stock in the BogoMips value -- it is bogus, after all! My system clocks in as equivalent to a 112 MHz system when I run the distributed.net client -- the reason AMD claims a higher clock value is probably because some instructions run faster, and those may just not happen to be the instructions used in the BogoMips loop.

As for your system thinking your K5-PR166 is a K5-PR133, it's probably because you have the motherboard jumpered wrong and/or the BIOS configured wrong. Are you sure that your motherboard and BIOS support the chip?

Shani
_____________________________________________________________

Your atapi CDROM

Date: Thu, 06 Aug 1998 16:50:04 -0500
From: Ian and Iris, brooke@mail.jump.net

Your /dev directory is the culprit. Current installs use:

/dev/hda /dev/hdb /dev/hdc /dev/hdd (/dev/hde) (/dev/hdf)

for the master and slave devices on the first, second (and third) IDE interfaces. Older installs had the /dev directory written a little differently. You would have the old standard, which was /dev/hdnx, where n was the interface and x was a/b for master/slave. The only difference is in the names of the files. If you rename them, you will be in compliance. Alternatively, you could run makedev from a recent kernel, though I do not pretend to know all the details of that.
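For what it's worth, here is a quick sketch of one way to create the newer-style names, assuming the CD-ROM is the master drive on the second IDE interface (hdc), using either the MAKEDEV script that most recent distributions put in /dev, or plain mknod with the standard IDE block-device major/minor numbers:

cd /dev
./MAKEDEV hdc              # easiest, if the MAKEDEV script is installed
# or create the device nodes by hand:
mknod /dev/hdc b 22 0      # second IDE interface, master
mknod /dev/hdd b 22 64     # second IDE interface, slave
ln -s hdc cdrom            # optional convenience symlink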
________________________________________________________________________

Tips: simultaneous kernel versions

Date: Fri, 14 Aug 1998 17:35:14 +0200
From: Frodo Looijaard, frodol@dds.nl

From: Renato Weiner, reweiner@yahoo.com
Recently I was looking at the Gazette and I think I have a good suggestion for an article that will be very useful for the Linux community. I have had some technical difficulties having two simultaneous versions of the kernel on my system -- I mean a stable one and a development one. I searched the net looking for information on how to make both coexist, but it's completely fragmented. If somebody more experienced could put all this information together, it would certainly help a lot of people, from kernel developers to end-users.

This may come a bit late, but I am in the process of writing a (mini)HOWTO on this subject. It is not quite trivial, especially with modules lying around, or if you want several kernels with the same version number. Check out http://huizen.dds.nl/~frodol/howto.html for now. I am still in the process of getting it approved as an official mini-HOWTO.

Frodo
________________________________________________________________________

Creating man pages made easy!!!

Date: Sun, 16 Aug 1998 16:14:34 +1000
From: Steven K.H. Siew, ksiew@tig.com.au

Below is something I wrote to help lay people create their own man pages easily.
--------------------------------------------------------------------------------

If you have ever written a program using gcc in Linux, you may have come across this problem. You have just finished your wonderful little program which is of great use to you, and you need a man page for it. Of course, you have absolutely no idea how to write a man page. Don't you need to know how to use troff? Or is it nroff you use to write a man page?

Luckily there is a much easier way to write a man page. Here I shall describe an easy and quick (and dirty) way of writing a man page without learning troff or nroff. In order to do so, you must have Perl version 5.004 (or higher) installed on your Linux box. There is a man page, among the various Perl man pages, on the creation of a man page using the Perl util "pod2man". It is called "perlpod.1". Below is a step by step guide to finding the man page and the util.

ksiew> su
password:
#|/root>locate perlpod.1
/usr/man/man1/perlpod.1
#|/root>locate pod2man
/usr/bin/pod2man

Now, to write your own man pages, you must first read the perlpod.1 man page. You can do this with "man perlpod". However, to read the pod2man man page, you must first create it by using pod2man itself.

#|/root>pod2man /usr/bin/pod2man > pod2man.1
#|/root>ls -al pod2man.1
-rw-r--r-- 1 root root 13444 Aug 16 12:12 pod2man.1
#|/root>mv pod2man.1 /usr/man/man1/pod2man.1

Okay, now you can read the pod2man man page you have just created by using the command "man pod2man". After reading it, you can create your own man pages. As an example, I shall describe a simple man page for one of my own C programs called "addline". I first create a textfile called "addline.pod" and then turn it into a man page using 'pod2man --center="Addline program manpage" addline.pod > addline.1'. Finally, I move the addline man page into its proper place using "mv addline.1 /usr/man/man1/addline.1". There; creating your own man page is simple, isn't it?
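(An aside, not part of Steven's original tip: if you want to proof-read the formatted result before moving it into /usr/man/man1, you can run the generated page through nroff by hand and page through it, for example:

nroff -man addline.1 | less

This uses only the standard man macro package, so it should work on any system that already displays man pages correctly.)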
Below is a sample addline.pod file.

-------------------Cut here and do not include this line---------------------
=head1 NAME

addline - add line numbers to textfiles

=head1 SYNOPSIS

B<addline> [ B<-c> ] [ B<-v> ] [ B<-3> ] [ B<--colon> ] I<textfile>

=head1 DESCRIPTION

B<addline> inserts line numbers into textfiles. It was written to automate the insertion of numbers into a data file of results from a neural network program.

=head1 OPTIONS

=over 8

=item -c

Ignores comment lines. A comment line is any line that starts with a '#'. This makes it easier to insert comments in the textfile without messing up the line numbers.

=item -v

Displays the version number of addline.

=item -3

Uses 3 digits for the line numbers even if the number requires fewer than 3 digits. For example, 013 instead of 13. The default is to use as few digits for the line number as possible.

=item --colon

Separates the line number from the rest of the line with a ':' character.

=back

=head1 EXAMPLES

addline textfile

addline -c textfile

addline -c --colon textfile

=head1 NOTES

Addline is written in C and compiled using gcc version 2.7.8. It uses the standard C library and is designed to be fast and efficient.

=head1 RESTRICTIONS

Never ever use addline on a binary file.

=head1 BUGS

There are no bugs in addline, there are just some undocumented features.

=head1 AUTHORS

Original prototype by Steven Siew, but so massively hacked over by his sister that Steven Siew probably doesn't recognize it anymore.

-------------------Cut here and do not include this line---------------------
________________________________________________________________________

2c Tip Re: Cross-platform Text Conversions

Date: Sun, 16 Aug 1998 07:52:17 -0500 (CDT)
From: Peter Samuelson, psamuels@sampo.creighton.edu

In LG31 you published a 2c tip for a unix2dos replacement written in Tcl. The author asserts that "It turned out to be really easy to do this in Tcl." Even easier in Perl, I say. Symlink the following code to the same names (d2u, u2d, m2d, m2u, u2m, d2m) Matt used. Make sure this file has execute permission, of course. Also, if you just want Perl to edit the input files in place, change the "perl -wp" to something like "perl -wpi.orig"....

Peter Samuelson

#!/usr/bin/perl -wp
#
# Simpler unix2dos2mac utility for 2-cent tip, mainly because Tcl is ugly.
# No comments that Perl is ugly too, please.
#
# Usage: a standard Unix filter:
#   input:  filename(s) or stdin
#   output: stdout
# Buglet: u2m leaves a lone CR at the end of file if it didn't end in LF
#   (Fixing it would use more memory.)

BEGIN {
    $_ = $0 =~ s|.*/||;                     # strip the directory from $0
    $pcmd = 's/$/\r/'       if ($0 eq 'u2d');
    $pcmd = 's/\r$//'       if ($0 eq 'd2u');
    $pcmd = 's/$/\r/;chop'  if ($0 eq 'u2m');
    $pcmd = 's/\r/\n/g'     if ($0 eq 'm2u');
    $pcmd = 'chomp'         if ($0 eq 'd2m');
    $pcmd = 's/\r/\r\n/g'   if ($0 eq 'm2d');
    unless ($pcmd) {
        print STDERR "This script must be invoked under one of the names:\n",
                     "    u2d, d2u, u2m, m2u, d2m, m2d\n";
        exit 1;
    }
}
eval $pcmd;
________________________________________________________________________

Un-tar as you download

Date: Wed, 19 Aug 1998 13:08:52 -0500
From: scgmille@indiana.edu

It's time for fun with pipes. Recently, when downloading the latest kernel over a ridiculously slow connection, I wanted to see where the download was by checking which file in the tarball was being received. After pondering the pipes and GNU utils, this thought came to mind: you can decompress and un-tar your files as they download, sort of a "streaming decompressor", if you will.
From the command line:

tail -f --bytes=1m file-being-downloaded.tar.gz | tar -zxv

Tail will display the already-downloaded portion of the file, then remain open, displaying bytes as they come. Make sure the 1m (1 megabyte in this case) is LARGER than what you have already downloaded. The piped output of tail goes to tar and the rest is history. Similarly for bz2 files:

tail -f --bytes=1m file.tar.bz2 | bunzip2 - | tar -xv

Enjoy!
________________________________________________________________________

megaraid drivers

Date: Thu, 20 Aug 1998 18:34:32 -0400
From: Michael Burns2, rburns@shaw.wave.ca

Hi, It's been a long fight to get AMI to produce this patch and the install documentation.

Mike Burns
________________________________________________________________________

Re: Suggestion for Article, simultaneous versions of Kernels

Date: Sat, 29 Aug 1998 21:35:27 -0400 (EDT)
From: R Garth Wood, rgwood@itactics.itactics.com

I think Hans-Georg is talking about having a stable Linux kernel version X and a development version X (i.e. not 2.0.34 and 2.1.101, but 2.0.34 and 2.0.34). I assume that when you issue:

# make modules_install

it tramples your old stable modules and gives you errors when you use your stable version X. This is not as trivial a problem as it first seems. However, there is a solution. Have a look at the make-kpkg docs (Debian distro), specifically the "flavour" option. This will solve your problem. It won't be easy, though. Have a look at /etc/conf.modules to see what I mean.

R Garth Wood
________________________________________________________________________

Published in Linux Gazette Issue 32, September 1998
________________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next
________________________________________________________________________

This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1998 Specialized Systems Consultants, Inc.

(?) The Answer Guy (!)

By James T. Dennis, answerguy@ssc.com
Starshine Technical Services, http://www.starshine.org/
________________________________________________________________________

Contents:
(!)Greetings From Jim Dennis
(?)phreaking
(?)ISP Abandons User in Move to NT
(?)Driving Terminals w/Java --or-- Java Telnet/Terminal
(?)Finding BBS Software for Linux
(?)The Five Flaws of the Unix System
(?)XFree86 Installation in DOSLinux
(?)resume on AS/400 --or-- Resume Spam
(?)Linux Port of SoftWindows
(?)Connecting Linux to Win '95 via Null Modem --or-- A Convert!
(?)MS FrontPage for Linux/Apache
(?)Virtual System Emulator for Linux and Why NOT to Use Them
(?)FoxPlus for Linux ?
(?)More on Distribution Preferences
(?)IP Masquerading/Proxy?
(?)PPP --or-- The "Difficulty" is in Disabling the Services
(?)How to read DVI files?
(?)Bad Super-block on Filesystem
(?)Multiple processes sharing one serial port --or-- Multiplexing the Computer -- ISDN Modem Connection
(?)Permission to Set up a Linux Server
(?)Detaching and Re-attaching to Interactive Background Processes
(?)[announce] Cdrdao 1.0 - Disc-at-once writing of audio CD-Rs
(?)High Speed Serial (RS422) under Linux
(?)ANOTHER MODEM PROB Plus, More on Grammar
(?)/usr/bin/open command not found
(?)Tuning X to work with your Monitor
(?)The last Linux C library version 5, 5.4.46, is released. --or-- The End of libc5: A Mini-Interview with H.J Lu
(?)Linux System Administration.
--or-- Where to put 'insmod' and 'modprobe' Commands for Start-up (?)The BIOS Clock, Y2K, Linux and Everything (?)Online Status Detector --or-- Failover and High Availability for Web Servers : Conditional Execution Based on Host Availability (?)SysAdmin: User Administration: Disabling Accounts (?)Thank you --or-- Articles on LILO Saves Life? (?)Netware NDS Client --or-- NDS (Netware Directory Services) for Linux: Clients and Servers (?)More 'Win '95 Hesitates After Box Has Run Linux?' (?)Bad Clusters on Hard Drive --or-- Another Non-Linux Question! (?)Help with C/C++ Environment Program --or-- Integrated Programming Environments for Linux (?)Web Server Clustering Project (?)wu-ftpd guest account on a Linux Box --or-- WU-FTP guestgroup problems ______________________________ (!)Greetings From Jim Dennis Linux as a Home Users System We're all getting used to the idea that Linux can attract corporate users, for deployment as web, ftp, file (SMB and NFS), print and even database servers; and we're getting used to seeing it used for routers, mail, and DNS. We're even getting used to the idea that corporate user put Linux on their desktops (in places where they might have spent a small fortune on a workstation). But, what about the home/personal user? Most of us consider this to be an impossible dream. Even those few enthusiasts in the Linux community who dare to hope for it --- have been saying that it will take years to gain any percentage of that market. However, I'm starting to wonder about that. I've seen a number of trade rag articles naysaying Linux on the desktop. Ironically, when a reporter or columnist explains why Linux isn't suitable for the desktop --- it actually raises the possibility that it is suitable for that role. A denial or refutation tells us that the question has come up! What prevents the average IT manager from deploying Linux on their desktop today? In most cases it's fear. The users are used to MS Word, MS Excel, and MS PowerPoint. Any user who uses any of these is forcing all of the rest to do so as well (since these applications all use proprietary, non-portable, file formats). Everyone who uses Office has to use a PC or a Mac (and many of them switched away from Macs due to lags in upgrades and subtle file compatibility problems between the Mac and PC versions of these applications). Why do Mac users run VirtualPC --- to deal with the occasional .DOC, .XLS, or .PPT file that they get --- or some other proprietary file format (like some of those irritating CD-ROM encyclopedia) which is only accessible through one application. However, these proprietary formats are not secret codes. Linux and other Open Source (tm) hackers will turn their attention to them and crack their formats wide open. This will allow us to have filters and converters. 'catdoc', LAOLA, and MSWordView are already showing some progress on this area (for one of these formats). Microsoft will undoubtedly counter by releasing a new version of their suite which will carefully break the latest third-party viewers and utilities (free or otherwise). They may even apply the most even perversion of intellectual property law yet devised: the software patent. However. I think that the public, after a decade of following along with this game, is finally starting to wise up. The next release that egregiously breaks file format compatibility may be the end of that ploy (for awhile at least). But what about the home user. How do home users choose their software? What is important to them? 
Most of them don't choose their software --- they use what came on the system and only add things later. When they go out to buy additional software, home users are the most price conscious of all buyers. Commercial, government, and other institutional buyers can make a business case to justify their purchases. Home users just look in their wallet. The other common influences on the novice home user include the retail store clerks and their kids. That's one reason why the school and University markets were always so crucial to Apple's success. I noticed that the Win '98 upgrade was going for $89. I couldn't find a "non-upgrade" box anywhere in that store (CompUSA). People are starting to hear that for half that price they can get this other OS that includes enough games and applications to fill a 2Gb hard drive. I think MS is actually starting to price itself out of the market. (It seems that my MS-DOS 5.0 upgrade was only about $35 or $40). If MS Office weren't bundled with so many new systems, there probably would be about a tenth the legal copies in home use. With a little more work on LyX and KLyX and a few of its bretheren --- and a bit more polishing on the installation/configuration scripts for the various distributions I think we'll see a much more rapid growth in the home market than anyone currently believes. I think we may be at 15 to 20 per cent of the home market by sometime in the year 2000. So, what home applications do we really need to make that happen. I like the "Linux Software Wishlist" (http://www.linuxresources.com/wish/) ... because it gives all of us a place to vote on what we would buy. One class of packages that remember used to be very popular was the "greeting card" and "banner/sign" packages: PrintShop, PrintMaster, and Bannermania. Those used to have the cheesiest clipart/graphics and a fairly limited range of layouts. Limited enough to make any TeXnician scream with frustration. However, they were incredibly popular precisely because of those constraints. Having a few dozen to a couple hundred choices to pick from is far less intimidating to home users than all the power and flexibility you get with TeX, LaTeX, and the GIMP. I would dearly love to see a set of pre-designed greeting cards, certificates ("John Doe has Successfully Completed the Yoyodyne Tiddly Winks Seminar" --- with the lacy border --- you know the kind!), etc. all done in TeX or PS or whatever. This and a front end chooser and forms dialog to fill in the text would be a really killer home app. (Bannermania was geared to creating large banners, either on fanfold paper or as multiple sheets to be cut and pasted together on to a backing board (piece of cardboard). I think that a new Linux implementation of this sort of app built over the existing software (TeX, GhostScript, etc) would end up being vastly better than anything that was possible under the old PrintShop --- and still be as simple. I'm sure most of us have that one old DOS, Windows, Mac, or other application or game that we'd like to see re-done for Linux. So, dig out the publisher's address or phone number (assuming they still exist) and let them know what you want. Then post your request to the wishlist. Even these trivial bits of action can make Linux the choice of home users. I say this because I think it's about time that they had a choice. _________________________________________________________ (?) 
phreaking From an00997 on 30 Jul 1998 Hi,I'm Dodo.I'm just finishing a course of computer operations and I would like to know about phreak,hacking... Can you tell me about it ??? Tips or news ??? Thanks... (!) My first thought is that this is some sort of troll (message intended to generate flames --- often forged to appear from an unsuspecting party so as to harass the apparent sender). That doesn't make sense in this case since getting one flame from "The Answer Guy" is hardly worth the trouble. There for I have to assume that you have chosen an unusually apt handle for yourself. So, you're finishing a course in computer operations. That's nice and productive. You should also considering taking a course in basic composition and grammar. (Hints: commas and periods are normally followed by spaces; question marks normally are used in sequences of one (the "triple question mark" is for emphasis); and you finish courses that are "in" or "on" topics, not "of" them). Normally I don't flame people on their spelling grammar, or punctuation. However, there doesn't seem to be much else to say to you. You want to know about phreaking and "hacking." The first think to know about phreaking (the study of practical phone fraud techniques) is that use of most of the techniques used by phreaks is illegal in just about any jurisdiction. In many places you can a) go to jail and b) insure that you can never work in the computer industry again by getting convicted of crimes involving telephone and computer fraud. The term "hacking" as applied to techniques for bypassing system security and gaining unauthorized access (or privileges on) them is highly controversial. It is accepted practice among computer enthusiasts to use the term "cracking" to discuss those activities and "hacking" to discuss the lawful and legitimate pursuit of their hobby. In this latter sense we call Linus, Alan Cox, and others "Kernel hackers." The media prefers to use the term "hacker" in the former sense. This is one of many reasons that "hackers" and "crackers" alike tend to be disgusted by the media. (As a regular contributor to LG I'm considered by some to be in "the media" and thus worthy of suspicion and disgust. Others have other opinions --- some of those are even less complimentary). I personally find the whole "phreak mystique" to be disgusting. There is a tendency to romanticize phreaks and crackers ---- to create a mythology of the "uberhacker" (a Nietsche-an reference that very few of them understand). That whole subculture is permeated with a smug "superiority" that tries to say: "we know something you don't." Of course, to them I'm a nobody. I've never cracked into anyone's system. I've never written any "warez" or "sploits" and I don't even know all the buzz words and jargon to participate in their conversations. I'm a "lam3r" that's not even good enough to be a "wannabe." In other words, I'm not an "3l33t d000d." It also tends to be quite juvenile. They seem to have an inordinate fondness for bad grammar and intentionally crazy spelling. I suppose it's part of the general affectation of 'tude --- the rebellious aversion to authority and convention, even the conventions of language itself. Trite! Now this is not to say that you have no business learning about phreaking and cracking. There's nothing wrong with learning about these things, nor even anything inherently wrong with experimentation and research. However, there is a major problem if you conduct your "research" on "subjects" without their informed consent. 
As a sysadmin's (and sometimes security) consultant I study these things as much as my time allows. Most of my information comes from mailing lists like Bugtraq, and from web sites like rootshell and the l0pht. I'll let you find those on your own. You can also subscribe to 2600 and Phrack magazines (printed) to learn more. Phrack is also available online. All of the real "cracker" socializing seems to be done via IRC (Internet Relay Chat). This has tended to give the whole IRC system a bit of a bad rap. The popularity of IRC for this stems from at least two factors: it is immediate and interactive (instant gratification is very important in these circles), and it allows for direct client-to-client communications (DCC), which makes it easy for participants to exchange "warez" and other files. From what I gather the old-fashioned BBS is also still pretty popular in that crowd. These seem to be "by invitation only" --- so you'll have to curry favor and do some horse trading to get the phone numbers for any of them. Naturally it is important for these crackers and phreakers to maintain their elite status by locking out the lamers and wannabes. So anything published about them is wrong, or will be right after they read it. I suspect that this message by itself will probably get me flamed and possibly attract some cracks on my systems (d00dz, don't bother; it's not sporting --- I don't do anything special to protect my servers, honest! My web and ftp servers are just virtual hosts on some poor ISP, no challenge at all). Meanwhile my best advice to you, Dodo, is to cut your moniker in half and just "do" something constructive. If you want to make a serious study of "cracking" and "phreaking" then the Linux Gazette Answer Guy is a pretty lame place to start. In short: get a life! _________________________________________________ (?) ISP Abandons User in Move to NT From Tsyplakov Maxim V. on 28 Jul 1998 Regard! Help to solve a problem: Beside my ISP installed by Windows NT. Can will Not be connected with him through dial-up from Linux. Scenario waits login: and password:, but from there does not come no lines. Links Linux-Linux, Linux-Free BSD, worked without any problems. Who known what is wrong? CCL: Invalid command (c). (!) It sounds like you should try talking to your ISP. If they won't co-operate or help, switch to another one. If there aren't any others in your area, find some friends and form your own --- either a co-op or a commercial venture. Welcome to the free market! They are probably using some proprietary MS RAS (remote access service) or they've done something weird with their terminal servers or PPP software --- though the fact that it connects and prompts for a login and password doesn't sound like an MS NT RAS or PPTP sort of symptom --- those prompts are text, and MS prefers to use proprietary binary formats in its protocols and documents. In any event I don't have nearly enough information to help. _________________________________________________ (?) Java Telnet/Terminal: From Spencer T. Kittelson on 28 Jul 1998 They're out there but not all there. We have some old code that runs on terminals that we would like to drive with a Java based server. We are looking for the reverse equivalent of a terminal emulator, i.e. a Java toolkit that multiplexes serial/network character streams and provides support for character based devices. In particular we are looking for the Java equiv. of the C curses library. Any ideas if and where such a thing exists? Spencer (!)
The canonical resource for finding Java applets and applications on the web is at "developer.com" (formerly known as Gamelan). Here's a URL that will provide you with a list of some telnet and terminal emulators written in Java: http://www-c.developer.com/directories/find.cgi?search=Java+Telnet&num=50&sp=sp (Be sure to cut and paste that as one line, without any linebreak.) There are a number of these listed there and I haven't tried any of them (well, I tried WebTerm awhile back and I did look at the online demos of JXterm and Crosstie). I've played with SCO's Tarantella --- which seems to be more of an X Window session in a Java frame --- and also provides support to access NT desktops through a Java frame. Alas, that seems to be a proprietary technology and it seems to require an SCO OpenServer to host part of it. (I suspect that means that it doesn't qualify as a "Pure Java" solution --- though the client side of it might be "pure Java"). WebTerm seems to be available for non-commercial use --- but doesn't define the term (do they mean you can't use it in your business environment, or just that you can't sell copies of it?). JXTerm and Crosstie seem to be commercial products. One limitation of most of the Java implementations in existing web browsers is that the Java applets can normally only open connections to the same address from which they were fetched. This means that your host would have to run a web server with some HTML pages that contained the required applet markup. You could also distribute these to your systems along with an installation of the JRE (Java Runtime Environment) and a copy of 'appletviewer' --- that would allow you to run these without the common browser restrictions. Another problem with these is that they are not trivial to install and run via the 'appletviewer'. WebTerm 2.0 gave me grief about "missing resources" while the same copy of appletviewer was perfectly content to run the various other demos that I had laying around. I'm sure that it's some fussing with the CLASSPATH variable or some other thing that I don't have configured to its liking. Frankly I haven't worried about it much. I presume that your clients are PCs or NCs rather than Linux boxes. Otherwise I presume you'd just configure your browsers with 'telnet' as a helper app and just embed URLs of the form: <A HREF="telnet://yourhost">Telnet to our Application Server</A> (where "yourhost" is whatever your application server is called) ... and be done with it. Naturally you can do that on your Windows boxes as well --- just install some sort of telnet utility and configure the browsers to use it. I personally like C-Kermit for telnet --- so you might consider using K'95 (the Win32 Kermit from Columbia University). That would give you a consistent scripting, telnet and file transfer environment across your systems. That approach (using helper apps) is likely to be much faster, more robust and probably cheaper than trying to do this with Java applets. The usual telnet utilities have had years to mature and are written to the clients' native APIs --- so there's no fussing about that. _________________________________________________ (?) Finding BBS Software for Linux From Li-cheng Hsu in the comp.os.linux.development.apps newsgroup on 29 Jul 1998 Greetings, I used to be a sysop of FidoNet using Maximus as my BBS system in DESQview environment. Now I have switched from DOS/Windows to Linux as my major working platform. The question is, is there any BBS system that is recommended to run in Unix ?
It should be able to handle both dial-up & TCP/IP, of course. :) (!) There's a tree of directories at the master Linux archive: http://sunsite.unc.edu/pub/Linux/system/bbs There are several packages there --- including a number of utilities for ifmail (the Internet to FIDO gateway). Most of these are free or shareware. There are also Linux ports of MajorBBS and MMB Teammate (a couple of major commercial BBS packages --- which are pretty expensive). I haven't used any of them so I can't offer specific suggestions. However, I've crossposted to the two newsgroups that are most likely to have interested and informed participants (alt.bbs.unixbbs and comp.bbs.misc). There are about 50 newsgroups devoted to BBS's (including various specific BBS packages like TBBS, MajorBBS, Citadel, etc). (I've been a sysop on two large corporate systems, for Symantec and for McAfee --- so I used to subscribe to some of these. However, I've never run a small hobbyist system so I just haven't kept up in the field). Thanks in advance. :) Linux can handle dial-up as well as console login and give a remote user normal shell. But you probably want to restrict access for BBS users. You can set them up with simple shell script (or perl, or tcl) which would emulate maximus as close as you wish, but I think that better approach is to use text-based web-browser lynx for their shell. These are likely to pose severe security problems unless you are a phenomenally good (and careful) programmer. I did play with a configuration that ran lynx in a chroot jail; that was to prototype a "dial-in kiosk." One sticking point for my application was that I wanted a replacement 'getty' that could auto-detect ANSI PC emulation (which many BBS's can do with some sort of magic escape code) and bypass the Unix login command --- I think I replaced /bin/login in the jail with an SUID "nobody" copy of lynx, and put in an /etc/issue that just said: "Hit any key to ...." Thus you set up normal Web site instead of BBS, solving problem with TCP/IP instantly, and let dial-in users to view it in lynx. Lynx includes provisions to download/upload files using Z-Modem and Xmodem (by calling external programs sz and rz) and allows to restrict users almost as much as you wish. However, those various restrictions may not be foolproof. There have been exploits that bypassed lynx restrictions before. So, if security is an issue, you definitely want to lock this in a jail with no shell and take some other precautions. You are right that this is an interesting way to provide "kiosk" style dial-up using stock HTML/web pages and off-the-'net freeware. That was the whole point of my prototype (which took all of about three hours one afternoon). _________________________________________________ (?) The Five Flaws of the Unix System From Ashley G. on 29 Jul 1998 JIM, HI I WAS WONDERING IF YOU CAN SEND ME SOME INFO.ALL I NEED IS THE NAMES OF THE 5 FLAWS IN THE UNIX SYSTEM,JUST THE NAMES. IF YOU CAN SEND THEM TO ME I WOULD GREATLY APPRECIATE IT> (!) I think some flames are in order here: 1. This is not the "We do your homework for you" service. 2. I volunteer many hours per month answering questions about Linux. There are others out there who can answer your questions about other forms of Unix. I frequently answer questions about how to interoperate between Linux and other OS's, including many forms of Unix. Most of what I say about Linux applies to most other forms of Unix. However, the distinction is important. 3.
You need to learn where your [Caps Lock] key is and keep it turned off if you plan to get any co-operation or respect from anybody on the 'net (in Usenet netnews or on any technical mailing lists). 4. There is no such thing as "the Unix System" --- there are many different versions of Unix, and there have been for almost thirty years. 5. If someone told you that there were "five" specific "flaws" in Unix they were suffering from horrible misconceptions. 6. Just as likely, you've critically misunderstood someone. Now to answer your question: There is no list of generally held "flaws" in Unix or Linux that I know of. There are a number of problems with even postulating such a list. First, there isn't any one Unix system. C-Kermit claims to support about 700 versions and implementations of Unix (and Unix-like operating systems). There is considerable ongoing academic debate about what, precisely, Unix is. I won't bother trying to provide my own definition --- it would just get me flame mail and perpetuate the debate. There are many people who will even deny that there's any doubt. They will say: Unix is any system that has been "branded" by The Open Group as conformant to the X/Open portability guidelines (XPG4 or XPG3). Others will pipe in and say that anything that meets Spec1170 is Unix, while others will claim that POSIX is the one true standard. At that point we'll go through the whole debate as to whether Unix is limited to just those systems which are dubbed to be "Unix" by this or that standards body, or whether it applies to Unix-like systems --- such as Linux. Indeed we could argue for days about what precisely is Linux. In the strict sense it is considered to be a set of kernel sources and the ancillary device drivers, and makefiles. In common usage Linux refers to any of a number of collections of software that run under a compilation of those (kernel) sources. Others, notably Richard Stallman, argue that the term Linux should be applied only to the kernel sources and that a different term should be applied to larger aggregations of software built around it. His argument is valid --- since most Linux distributions are about 5% to 10% Linux kernel sources and drivers, and about 25% GNU software. Since RMS is the principal of the Free Software Foundation (the organization that owns the copyright over the GNU sources) he has a reasonable interest in seeing that people know where some of the major components of their Linux-based GNU systems come from. Eventually the FSF will have a full operating system of its own: the HURD. The GNU project was started to build such a system, and the fact that they released a large number of vital components for public use is what made Linux possible. At the same time there are other bodies that have produced major software subsystems that are conventionally included in a Linux distribution. The Computer Systems Research Group (CSRG) at the University of California, Berkeley released a large number of packages and a large body of source code for public use (BSD). Many of the common utilities under Linux (most of the NetKit, I think) are from these sources. The X Window System comes from MIT's Athena project, and the free implementation of it that we use under Linux is principally from the XFree86 Project. XFree86 is the X Window System that's used by a number of Unix implementations including FreeBSD, NetBSD, and OpenBSD.
So, if we were to try to fairly represent these parties in our nomenclature we'd have to refer to our systems as: Linux/GNU/BSD/XFree86/"MIT X Window System" Systems ... which is why I respectfully decline to comply with rms' desire for me to use the phrase Linux/GNU when I mean "Linux" (in the common broader sense). The other reason I choose not to do this most of the time is that I find it more difficult to read in that form. This is undoubtedly a horrible character flaw on my part, but I find that I sometimes subvocalize (mentally "sound out") passages of technical text in my efforts to understand and proofread it. So, if rms likes, he can simply say that my refusal to refer to it that way is a symptom of my stupidity. I'll cop to that. So, I suppose we could say that the "first flaw" of Unix is that no one seems to know what Unix is. While it is tempting to try to follow this line of logic and devise four more for you --- I think it will be much quicker and more amusing for you to read The Unix-Haters Handbook by Simson Garfinkel, et al (IDG Books, (c) 1996). Conveniently this book is in four "Parts": Part 1: User Friendly? Part 2: Programmer's System? Part 3: Sysadmin's Nightmare Part 4: Et Cetera ... and I think that every serious student of Unix and Linux should read this book. For one thing it requires an advanced understanding of Unix to understand the complaints --- and a really advanced knowledge to see how many of these complaints don't apply to many "modern" Unix variants (Linux in particular). For the rest of it I found it amusing, frustrating and significant that the many contributors to Unix-Haters did not list modern, available alternatives that exhibited the features they preferred in an OS and environment (or at least that lack the features that they hate). There were references to the ancient "Lisp Machines" but there was no clear endorsement, nor were there any suggestions about how things "should be." So, as the title suggests, this is a curmudgeonly book without constructive merit. However, the Unix and Linux enthusiast should be thoroughly familiar with the material for the same reason that a self-respecting agnostic should be thoroughly familiar with the major religious works of whatever society surrounds him or her. _________________________________________________ (?) XFree86 Installation in DOSLinux From STEVEN SCHILLY on 20 Aug 1998 Hey sorry about 'council' sometimes my mind is wondering about in many green fields at once! Anyway I figured that out before I got your mail it was in the init and inittab (although I have no idea what was wrong!) I copied an init file from a working version of micro linux to mine and a new kernel of DOSLinux x.xx? (will have to look). I'd like to run X86Free; where is the best place or way to download this entire package. What will make installing it as easy as possible? I like to hack at the system level but don't have the time right now .. so I'd like the easy way out....Any sug.? ______________________________ (?) XFree86 Installation in DOSLinux From Rick, Aug 1998 I have doslinux on my PC b/c i wanted to keep Win95. I want to load Xwindows but i cannot figure out how to do it on DOSlinux... Could you please point me in the right direction like what files i need and how to install and run. thanks ... ______________________________ (!) Jim answers both...[His two messages weren't precise copies, but you, our web readers, only need to read the cut-and-pasted portions once.
-- Heather] I think the easiest way would be to install the RPM utilities using the unRPM-Install package that's available on the DOSLinux "Home Page." Then I'd fetch the XFree86 packages from a Red Hat mirror and just use the 'rpm -i' command. You could also consider one of the other "tiny" Linux distributions like "Xdenu" or "Dragon". Here's a copy of another message that covers almost the same question: [copy omitted] The easy way out of manually installing XFree86 is to pick a distribution that includes it. I gather that you're running DOSLinux --- which is similar to "MiniLinux" in that you can install a functional Linux subsystem into an MS-DOS subdirectory in about 20Mb of disk space (maybe 30 or 40 these days). Oddly enough I just got another question on the same topic. I'll paste copies of my suggestions: There were a couple of predecessors to DOSLinux, including Mini-Linux and Xdenu. Xdenu is still available at sunsite: http://sunsite.anu.edu.au/archives/linux/distributions/xdenu/umsdos ... and it included the X Window subsystem. That apparently hasn't been updated in about 3 years. Just a couple of directories over from that you'll find tinyX at: http://sunsite.anu.edu.au/archives/linux/distributions/tinyX ... which is a set of implementations that each fit on a single floppy. (You pick the file that matches your video card type. Most of these are from '93 or '95). MiniLinux itself is still there: it fits on about four diskettes and includes X. It also dates back to '95. However, its programs should probably still run on any more recent system (you might need a kernel with support for the old COFF/a.out format --- since I don't think MiniLinux was updated to ELF). Also on sunsite we find a page that links us to "Monkey-Linux" which does include X and was updated at least as recently as May of '97. The author notes: "English documentations is still not fine..." So you might feel more comfortable with this if Czech is your native language. The web page is at: http://www.spsselib.hiedu.cz/monkey He does note that Monkey is compiled for ELF format. __________________________ (?) XFree86 Installation in DOSLinux From Rick on 20 Aug 1998 thanks for the help... i think i have learned from doslinux so i can go ahead and install redhat... i will still use this information to try to get dos linux working on my laptop. thank you. _________________________________________________ (?) Resume Spam From Prblnd on 24 Aug 1998 AnswerGuy I'm just a LittleGuy, out-of-work..Looking for AS/400 platform and somebody said I should tell you! (!) Who? Your assertion that "somebody said" you should tell me sounds very suspicious. If it is true that "somebody" said you should tell me about your plight --- that person is either stupid or cruel. I'm not a hiring manager. I'm not a recruiter. I have nothing whatsoever to do with AS/400's (no slight intended; they sound like nice enough systems). (?) Question: Can you assist me to find a new start? Answer: (!) I probably could. However, the fact that you blindly sent this to me without any evident research into who I am and what I do suggests that time spent on assisting you would be wasted. However, here's the assistance I will offer: Research: Don't waste your time or anyone else's by blindly spamming addresses out of the blue. Do your homework. Send messages to those who are looking for them and expecting them. Make it clear in your message how you selected them and why you hold an expectation that they have requested your message.
Writing: Your message's semantics are horrible. The spelling and punctuation are reasonable. However, the phrase "looking for AS/400 platform" doesn't make sense. I presume you are looking for work that is "related to the AS/400" or that "requires expertise with AS/400's." I understand the difficulties faced by people for whom English is not native; however, you'll want to find someone who is literate and fluent in English to help you edit your messages to potential employers. Tone: Desperation is repugnant to most well-balanced people. Are you really looking for a "new start?" I would suggest that you focus on just getting a new job. You don't want to throw suggestions about "other" problems into the faces of potential employers. The phrase "a new start" is idiomatic to Americans and suggests such unsavory failures as incarceration/institutionalization, divorce, and/or drug treatment. These are the sorts of total failures that require someone to look for a "new start." Getting laid off or even fired is merely reason to look for a "new job." Also, it is generally useless to appeal to a potential employer's sympathies. If I were a hiring manager I wouldn't care whether you were a "LittleGuy" or a major political, philosophical or intellectual figure. Hiring managers care if you can do the job they need for the price they are willing to pay. Given multiple applicants that meet their prerequisites they are generally swayed by much more subjective criteria (such as whether you are related to them, whether you are previously acquainted, and whether they "like" you, etc). So, you want to maintain a "positively neutral" (and "upbeat") tone in your dealings with potential employers. With that in mind let's re-write your message to me: ------------------------ Begin Rewrite --------------------------- Dear Mr. Dennis, Sam Ockman told me that you have lots of contacts throughout the industry. When I mentioned to him that I'm looking for a new position (systems administration, preferably on AS/400) he suggested that I drop you a note. I've attached my resume. Please feel free to share it with any of your associates that might be interested. I'm in Sri Lanka, North Dakota --- but I'm happy to relocate. Also, any suggestions you have would be deeply appreciated. I've looked through some of the sites that are listed in Yahoo!'s "Employment" sections. There are so many of those that I'm not sure where to start. Thanks. ------------------------ End Rewrite --------------------------- Note: we use a real name here. We don't use an alias. You could have searched for "answerguy" on Yahoo!; it gives 242 references and apparently 222 of them point to me. (I'm not the one on CAGTV --- http://www.answerguy.com, and I'm not the one from Square One Tech (http://www.squareonetech.com). Nor am I listed on Global Online Electronic Services (http://www.goes.com)). My full name is published in every article. We start off by saying precisely who sent you to me, and why. If you were referred through some traffic on a newsgroup or mailing list --- say so! Next we state our purpose. Nothing fancy --- just ask as directly and simply as possible. If you expected that I would be hiring --- tell me why you thought so. In that case you'd also want to say why you think that you're a suitable candidate for the specific positions you think I might have open. For example: ... some participants on the comp.unix.security newsgroup suggested that you might have an opening for a new webmaster.
I've been working in HTML for three years, and I'm experienced with the installation and configuration of a number of popular CGI packages (including several from Matt's script archive). I'm particularly interested in working for your consulting firm because I want to learn more about Unix and Internet security and I've heard that you're widely respected in this field. ... (note: this is all hypothetical --- Starshine Technical Services does do some security consulting --- but is not "widely" regarded in the field, yet). Lastly, when we ask for more general help we also give some idea of what we're already doing. We do this for two reasons --- first, it allows our respondent to avoid redundant suggestions. More importantly, it shows that we are motivated and that we've done some homework. The point is that everything you say is relevant to the matter at hand. You cannot do that unless you do some research. Incidentally, people posing technical questions should take note of this advice. It applies as much for an "application for technical support" as it does for any other. At this point I get enough mail for the "Answer Guy" that I don't wonder where people heard of me (when they're talking about Linux). However, it would be nice if I knew a bit more about where some of these questions came from. If someone at a users group meeting says: "You should send mail to that guy from the Linux Gazette" --- I'd like to hear about which UG it was. (?) Question: WILL you assist me in finding the new career job? Answer... (!) No. (?) okay, two questions, only one right answer... thanks for your time.... resume attached............................ (!) ... (apparently in .DOC format) and deleted without a glance. ____________________________ (?) More on Resume Spam From Prblnd on 24 Aug 1998 you made me laugh thru your entire response .. thank you - Paul (!) Glad I could help! ____________________________________________________ (?) Linux Port of SoftWindows From The Answer Guy to Insignia Solutions on 25 Aug 1998 Dear Sir or Madam, You recently sent me a business reply card offering me SoftWindows '95 for any of three relatively popular RISC-based Unix platforms. This offer is useless to me since I use Linux on my PC's. As a long-time user of Linux, I, my employees, and my customers occasionally need access to files in some proprietary document format (usually generated by Windows Office) and we are willing to pay a reasonable sum (I could probably buy 20 copies tomorrow). I am a member of a local users group (Silicon Valley LUG) with about 400 regular members. I can take a poll at our next meeting and give you the results if you'd like. Most of us are not choosing Linux because it's "free" (in the financial sense). The time and energy most of us have spent is far more valuable than a couple hundred bucks here or there. Also most of us have purchased new computers and received "free" copies of Win '95 or Win '98 with them. We've then gone out and purchased Red Hat (http://www.redhat.com) Linux ($50) to replace those. The need (among Linux users) to run 32-bit Windows applications without rebooting is perhaps a bit difficult for most Windows users to understand. We could, after all, simply reboot. MS Windows users are used to rebooting a couple times a day. However, for most Unix and Linux users this is the problem. We depend upon a much higher degree of stability. Most of my systems stay up for months at a time.
These workstations are not "busy" --- they are mostly just "maintaining state" --- my editors, email programs, web browsers and newsreaders are all in different states. When I get blocked on one task (perhaps because I need to look something up) or interrupted (perhaps to go to lunch, as I did while writing this message) --- my cursors all stay where I put them. I don't have to remember each of these tasks that are "in process" and I don't have to resort to reams of "Post-It" notes (though I put the occasional XPost-It on my display). I can simply move to another window, another virtual desktop, another virtual console, even another user account concurrently running a whole different session of X. I can leave such processes running throughout my network with confidence that nothing short of a power outage or a major sysadmin error will disrupt my work. To us the notion of rebooting to get into MS Windows (usually just to read some .XLS or PowerPoint e-mail attachment) is analogous to sweeping all of the paper work off of your desk, emptying all of the drawers, and turning the desk upside down. For me just to go through and find all my running processes and write down which files were open in which applications would probably take at least fifteen minutes. Later, to restart all of them and get back to "my place" (reposition my windows, cursors, etc) would take at least a half hour. That's 10% of a workday lost to just rebooting a system! Since I bill $100 per hour --- it doesn't take much of that to convince me to get a package like WABI, VirtualPC (if Connectix responds to my inquiries), or (if you make it available) SoftWindows for Linux. There were an estimated 5 to 10 million Linux users by the beginning of this year. IDC estimates that 2.2 million revenue-generating workstation installations of Linux were installed during 1997. That beats their estimates for all non-PC-based Unix systems combined and outstrips the reported NT workstation installations almost two to one. The number one problem faced by most business Linux users is lack of access to 32-bit Windows applications (most of it for document sharing). StarOffice and Applixware are not yet mature enough and certainly don't have sufficiently robust document filters to be a solution for most of us. Corel's WordPerfect for Linux is "on its way" but it probably won't have quite the quality of document filters and translators that we require for reasonable interoperation with some companies. WABI will probably never run Win '95 or Win '98 apps. WINE is still not ready for broad use (only the most technically adroit Linux enthusiasts can use it --- and it has virtually no Win32S support). So, there is clearly a niche market for your product. That niche is almost certainly bigger than the combined niches for HP-UX, Solaris/SPARC, and AIX. PS: I keep hearing this persistent rumor that Insignia can't release a version of SoftWindows for any PC-based OS, allegedly due to some clandestine cross-licensing arrangement with Microsoft. That would certainly explain why you've been ignoring the larger PC/Unix markets. If this rumor ever makes its way to the DoJ it should make for some interesting reading. I also notice that MS has recently been "legitimizing" the non-Intel Unix platforms and has been making announcements to the effect that they are releasing native ports of MS Office and Internet Explorer for a select few of these platforms (all non-Intel, naturally). I wonder what effect that will have on your market. ____________________________ (?)
Re:Linux Port of SoftWindows From Insignia Solutions Unix Customer Service on 25 Aug 1998 Jim- Thanks for your interest in SoftWindows for Linux and in Insignia Solutions. Right now, we do not have a product for Linux or any Intel-based Unix. Unfortunately this is not a simple port of an existing SoftWindows for Unix on RISC product - a completely different design is required as the Intel cpu does not help as much as you might expect. Consequently, a large investment on our part would be required to produce this product. We are prepared to do this, but only given enough customer demand. To assess this demand we will be producing a web based survey, to determine what would constitute acceptable pricing, product functionality (in particular whether DOS/Windows would have to be included) and performance. Please check www.insignia.com periodically for further information. Thank You, Christopher Wood Insignia Customer Service 800-848-7677 option 5 ____________________Reply Separator____________________ [attached copy of original message omitted] ____________________________ [As of press time, they are not plugging a "market expansion survey" or anything similar directly on their home page. However, they are offering a "free UNIX Solutions kit" if you fill out the form at: http://www.insignia.com/banner/unixkit.html ...where they don't mention Linux, but will let you fill in an Other space for your platform. It doesn't mention price. If you have a serious interest in buying Softwindows should it come to pass, let them know, so they can't claim "there's no demand". But it should be legitimate demand... if you wouldn't be willing to talk to a sales rep about it, I'd say don't bother hitting the above link. -- Heather] ____________________________________________________ Chris Gushue's question about nullmodems was posted in July (Issue 30). ____________________________ (?) A Convert! From Chris Gushue on 29 Jul 1998 About a week after asking for help about connecting Linux and Windows 95 via null modem, I totally switched over to Linux (well, with a small Windows partition for the occasional game). I got fed up after numerous crashes while just installing it, plus my brand new Gravis GamePad Pro didn't work in 98 at the time. I can make my Linux system do pretty much anything I want, either in text mode or in X. Plus, I don't have to reboot at least once a day. To summarize, it's much better to run Linux than Windows (well, everyone knows that, of course!). Instead I think I'll work on setting up a LAN with my Linux system (on a cable modem) with two Windows 95 systems (my roommates-to-be systems). Shouldn't be too hard - except for maybe the Windows configuration (I expect no less than 3 reboots) :) Great column - keep up the great work! (!) Glad I could help. ____________________________________________________ (?) MS FrontPage for Linux/Apache: From Terry Singleton on 18 Aug 1998 Hi there, Being a newbie LINUX user I searched YAHOO and found your site. It is a relief to actually find a site which has some newbie material..thanks. I am hoping you can shed some light on this subject. I recently put up a LINUX server, I am hoping to use it for email and http purposes. However our users are not knowledgeable enough to be able to ftp html files into their directories and therefore we would like to use Front Page. I have downloaded and installed APACHE 1.3.1 and noticed that MS does have FP extensions that are supposed to run on LINUX and APACHE. (!) I think you are underestimating your users. 
You might want to find the WS_FTP and/or the "CuteFTP" package. These shareware Windows packages are pretty easy for Windows users --- and work pretty much like the old file manager. (As for e-mail, I suppose some/many of them will use Netscape Communicator's POP client. You can also offer them Eudora and Pegasus Mail. These will work with any POP server --- including whichever one was almost certainly already installed with your distribution). The easiest way to allow your Windows users access to their files on your web server is to install and use Samba. Samba implements the SMB protocol --- which is the native file sharing system that's implemented in Windows for Workgroups, Windows '95, Windows '98, Windows NT, and OS/2 (LAN Manager and LAN Server). With Samba you'd give each user access to their home directory (possibly creating symlinks from their home directories to any shared directories). Your users would simply drag and drop files using the same file manager and "Explorer" interfaces that they'd use with any other WfW or NT fileserver. You can also create group shares (on your Linux or other Unix system) which will automatically show up to the appropriate users in their browse lists. The last time one of my associates looked into using Microsoft's FrontPage server extensions the idea was abandoned without even attempting the installation. There were showstopper limitations and design features that obviated any need to look further at it. Her conclusion was also supported by a number of messages that I've read on the Bugtraq and NTSecurity mailing lists. So I recommend that you reconsider your options and avoid FrontPage if you have any choice in the matter. If you insist on using FP despite these limitations then you'll probably want to look at: The Unofficial FP Server Extensions Home Page http://compy.ww.tu-berlin.de/FP-Server_Extensions/default.htm http://www.bewley.net/httpd/frontpage.html Although I've never used it, I've read about another way to upload files to a web server using HTTP POST commands. It's described in the O'Reilly book on _CGI_Programming_ (the one with a mouse on the cover) on page 414 (Appendix D). Basically you create a form that looks like so:

<HTML>
<HEAD><TITLE>File Upload Form</TITLE></HEAD>
<BODY>
<H1>File Upload Form</H1>
<!-- The NAME attributes and the /cgi-bin/upload.pl ACTION here are just
     placeholders --- use whatever your own CGI script expects. -->
<FORM METHOD="POST" ACTION="/cgi-bin/upload.pl" ENCTYPE="multipart/form-data">
Your Name: <INPUT TYPE="text" NAME="username"> <BR>
File to Upload: <INPUT TYPE="file" NAME="userfile"> <BR>
<INPUT TYPE="submit" VALUE="Upload File">
</FORM>
</BODY>
</HTML>
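The form's ACTION has to point at a CGI script that does the multipart/form-data (MIME) decoding. Purely as a hypothetical sketch (the field names, the save path, and the script itself are my own illustrative guesses --- not code from the O'Reilly book), such a script could use Perl's CGI.pm module along these lines:

#!/usr/bin/perl -w
# Hypothetical sketch of the "upload.pl" placeholder used above.
# CGI.pm does the multipart/form-data (MIME) decoding for us.
# The field names must match the form; the save path is an assumption.
use CGI;

my $query = new CGI;
my $name  = $query->param('username');
my $file  = $query->param('userfile');   # for file fields this also acts as a filehandle

open(SAVE, "> /tmp/uploaded.file") or die "can't save upload: $!";
my $buffer;
while (read($file, $buffer, 8192)) {
    print SAVE $buffer;
}
close(SAVE);

print $query->header('text/html');
print "<HTML><BODY>Thanks, $name --- your file was received.</BODY></HTML>\n";

A real script would, of course, sanity-check the filename and size, and pick a safer place to store the file, before writing anything to disk.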
At the time, file upload via this INPUT TYPE was only supported in Netscape. It will show up as a form with a filename field, and a "Browse" button next to that. I don't know if any other browsers ever added support for it. (I just checked with Lynx 2.7.2 --- it recognized the INPUT TYPE="file" and rendered it as "Not Implemented"). Also you'll have to create/find a cgi script/program that implements the file/MIME decoding portion of this (I just used "upload.pl" as a placeholder for this example). That same book listed some Perl 5 modules that might be useful for this sort of thing --- I think one of them was "BasePlus.pm" --- you'd want to search CPAN (the Comprehensive Perl Archive Network: http://www.cpan.org, http://www.perl.com/CPAN-local among many) for related work. I don't know for sure but you might start at: CPAN: By Category: WWW, HTML, HTTP and CGI http://www.perl.com/CPAN-local//modules/by-category/15_World_Wide_Web_HTML_HTTP_CGI/ (?) Being new to LINUX I have no idea how to get started. Some of the questions I have: 1. How do install them,,do they run on 1.3.1? 2. After installation how are they configure? 3. How do we setup permissions? (!) I don't know the answers to any of these questions. However, I think you'll find some instructions at the two sites listed above. (?) 4. Virtual hosting would be nice but I think simply subdir will suffice for each user (using the ~username notation). (!) There are a couple of HOWTO's on this. The one you'll want to start with is: Linux WWW HOWTO http://sunsite.unc.edu/LDP/HOWTO/WWW-HOWTO.html (?) Any help or direction would help. thanks (!) This is a start. ____________________________ (?) Re: Linux Gazette From Terry Singleton on 20 Aug 1998 thanks...I would love some more information on getting SAMBA working. I have read alot about a NFS client. Will I still need the NFS client to connect when using SAMBA. (!) No. You won't need NFS to access your Samba systems. As for more info on Samba --- there's a whole book on the subject. There's also an SMB-HOWTO that's quite old but should still get you started: http://sunsite.unc.edu/LDP/HOWTO/SMB-HOWTO.html ... I gather that this hasn't been updated since '96! I know that Samba has been under constant, sometimes intense, development throughout that time. However, the basics still work the same. (?) Or if I implement SAMBA will the machine be accessible by \\linuxmachinename like other NT boxes...do I have to create a WINS entry for the LINUX box?? (!) Yes. You can use UNC naming and normal "Explorer" browsing to access your Linux box. It will look "just like" an NT box to those protocols. (Many sysadmins have reported that they've replaced NT servers with Linux with increases in performance, capacity, reliability --- and no complaints or comments from their users). ____________________________ (?) Terry comments about Linux' Future From Terry Singleton on 20 Aug 1998 You really are the answer guy, thanks for your time. General Comments: Being new to LINUX I am very impressed with it. I initially bought a copy of RedHat to get started with UNIX. The college just bought this bohemith DEC ALPHA server which is to run DEC UNIX. Because I was now to learn this OS I thought it prudent to play with LINUX as it would install on my machine quite easily. After inserting my setup boot diskette and 20 minutes of answering quite simple questions I am up and running with LINUX and a GUI called CDE. Although I would not install the GUI on a server for a desktop environment it is quite nice.
The RedHat install detected by 3COM905 NIC card, my ATI video card and used my DHCP server to set itself up on the network, I must say that even with NT or 95 most of the time I need to supply additional drivers for the install, not in LINUX's case. Now I must say that I am not sure that these drivers are optimized for the OS but they are functioning fine. After having great success with LINUX on the desktop and learning most of the basic shell commands I installed LINUX again, this time in a server environment(kind of a rogue operation). Configured sendmail, qpopper and dns; now this little LINUX box handles all our student email and DNS requirements. It replaced 2 NT servers and handles 2000 POP3 users and 1000's of emails per day. Future Directions: I am hoping LINUX can provide some much needed LDAP services for email address books and I may even consider using LINUX as our corporate web server OS, because LINUX also provides SAMBA file services we may even look to LINUX for our file and print services needs. The only thing holding me back in this arena is MS's ASP technology which is a great server side scripting language. Perhaps when SUN et al finalize JSP (JAVA Server Pages) and the JDK is released for LINUX I will re-examine this issue. To move this environment to the desktop for the masses I would say to COREL, keep up the good work(they ported Word Perfect to LINUX), to Triteal, keep working on the CDE environment. LINUX definetly is an OS with much greater potential than any other OS currently in development. ____________________________________________________ (?) Virtual System Emulator for Linux and Why NOT to Use Them From Jeff on the L.U.S.T List on 20 Aug 1998 Now that I have LINUX installed on a machine, the question becomes what can I do with it? I've heard there is an application that will allow me to run my standard windows programs (office, etc), anyone know anything about that? Jeff (!) There are several efforts to provide this sort of thing. Page down to the end to see some notes about those. Meanwhile here's a rambling rant: Installing Linux in the hopes of running your Windows/Office programs is certainly misguided if you intend to get any "normal" work done. What you do with any OS is run programs. When selecting which OS(es) to install and use, your chief consideration is: what programs do I want to run? Thus, if you wanted to run WordPerfect, or Mathematica, or Applixware (an applications suite which is available on several forms of Unix, and in a Windows version as well) --- you'd then have choices. These applications are available under a few operating systems. However, with Microsoft Office you only have two (real) choices: MS Windows, or MacOS. As noted below many people have attempted to expand your choices (by allowing you to run Windows programs under various forms of emulation and compatibility interfaces). I personally think it's silly to install an OS and then ask "now what do I do with it?" It seems analogous to ripping the powertrain out of a car, fitting it with a "formula 1" engine, blowers, and a custom tranny and asking: "Now what do I do with it? Can I use this for my daily commute and grocery shopping?" (The answer, in this hypothetical case is, "Maybe, but why? A racing car is for racing; it makes a poor choice for more general use"). Of course my analogy breaks down at this point since Linux is not so specialized. I don't want to perpetuate the notion that Linux is a "server" OS. That's just where it's currently popular.
There are general applications for Linux --- they just aren't the same brands that you're used to seeing for Windows software. There are more office applications suites for Linux than there are for Windows. This is nice from the point of view of the consumer who wants choices, but crippling from a "big business" perspective. Under Windows there is essentially one dominant office suite. (A couple of others exist, like Lotus SmartSuite, Corel's Office, and Applixware --- but they get essentially no press coverage and have just about nil "mindshare"). It is almost certainly no accident that the company that controls this proprietary OS also dominates the applications that are available for it. That is a major point in the DoJ investigation of Microsoft's business practices. Other companies have dominated other fledgling industries in our nation's history. At least three of them were subject to "consent decrees" (agreements with our federal government regarding their responsibilities as recognized monopolies) --- I'm referring to IBM, Xerox, and AT&T. However this is not a history lesson --- which is good since I don't have textbook summaries of those cases handy. So, under Linux you can run Applixware, WordPerfect, StarOffice, Cliq (a character based suite) and various freeware packages like LyX, SIAG (Scheme in a Grid --- a spreadsheet package), and Maxwell (a word processor) and others (like 'sc' or 'slsc' --- the "spreadsheet calculator" and the S-Lang version of 'sc'). A more popular way for many Linux users to work is with text editors (rather than word processors) --- and with markup and typesetting languages. These are whole different approaches to the situation. Instead of using some proprietary word processor interface and document/file format I use simple text like:

% Template for a LaTeX Letter
\documentclass{letter}
\begin{document}
\begin{letter}{%
  \\          % full name and title
  \\          % address
              % city, state, zip-code
}
\vfill
\opening{Dear   % Sir or Madam%
,}
\vfill
\closing{Sincerely,}
\vspace{1in}
\signature{Jim Dennis,}
\vfill
\end{letter}
%% end letter (Repeat as necessary)
\end{document}

I can use this template for all of my personal letters and a similar one (with spacing set aside for a letterhead --- or even with letterhead macros --- etc) for business. With that I can simply run a command like: latex myletter.tex ... to typeset it and either print the resulting "dvi" file or run the 'dvips' command to convert it to PostScript and print that (even on non-PostScript printers). Note that the letters are typeset --- with kerning, leading, etc. I don't have to concern myself with the details about line length, pagination, etc. The \vfill commands (macros) are hints about how I want the portions of the first and last pages filled (providing visual separation between the addressing and salutation, and between the text and the signature/closing). I can create my own "styles" and "document classes" and I can create my own abbreviations and macros for doing things my way. For example a friend of mine has a couple of macros for his resume (which can be easily rendered with a properly accented 'e' under LaTeX) which allow him to put in a line like:

\job {Big Former Employer} {Chief Bottle Washer} {Feb, 29, 1931 --- May 1, 1942}

... and have each element of that (company, title, date range) typeset in a particular fashion (such as "Large bold" for the company, "large italics" for the title and "small caps" for the dates, with "\hfill" to fill these lines).
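A definition along these lines (the specific font commands here are just a sketch of the idea, not his actual macro) would do the trick:

% A hypothetical definition of a three-argument \job macro; the font
% commands are only guesses at the "large bold / large italics / small
% caps" treatment described above.
% (Use \renewcommand later to change the look everywhere at once.)
\newcommand{\job}[3]{%
  \noindent{\Large\bfseries #1}\hfill
  {\large\itshape #2}\hfill
  {\scshape #3}\par\medskip}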
If he decides that he doesn't like the look of one resume --- he can redefine his "job" macro and all of the jobs will be consistently rendered the new way. He doesn't have to hand-edit all of the jobs on the list --- just the definition of \job. As another example: there are many things that Windows users put in spreadsheets (Excel) that are more like databases or tables (that is, tabular data without any computations involved). They use Excel (or other spreadsheets) for this because those tools are already available and they know how to use them. In some cases this is a reasonable choice. In others they make more work for themselves than they would by using a database, or using a "big text file" (say, tab and line delimited). Under Unix it is more common to put these sorts of things in a text file and use the many text processing tools (sort, diff, cut, paste, join, grep, awk, perl, ...) to work with them. As the Windows user continues to use a spreadsheet (especially for tracking lots of data and importing long tables) he or she hits the capacity limits imposed by their memory and applications. Spreadsheet programs typically have to load whole spreadsheets into RAM. I don't know of any of them that can "page through" a large spreadsheet effectively. The Unix user's capacity is typically limited by disk space. In other words the text processing approaches usually scale well. Things like awk, grep, join, etc. just filter through the file(s) and don't have to load any more than a small buffer's worth at any given time. Even 'sort' --- which necessarily must go through whole files --- scales pretty well (I've sorted files that were hundreds of megabytes --- it takes a while and plenty of temporary disk space --- but GNU sort will do it). As a side note, most programs under Linux/Unix are "toolkits" or "little languages." For example you can simply "sort" a list by "ASCII collating sequence" by just using a command like: sort file > file.sorted ... which is the simple case. But you can also deal with more complex cases like this: let's say I have a list of appointments of the form: MMM, DD, YYYY Notes (three-letter abbreviation for the month, one or two digit date, four digit year, followed by some text). I can sort that with a command like: sort +2n +0M +1n ... note that I sort first, numerically by year (+2 columns from the start of each line), secondarily by the first column by "Month" (a special sorting key per your locale), and tertiarily by date (also numerically). It's the same sort utility --- and it has lots of other options (for things like "folding/ignoring the case," specifying field separators, using an offset within the column/field, counting consecutive blanks as singular or multiple field separators, and things like that). Under Unix it's also easy to use programs with one another. This is obvious when you pipe the output of one filter into another (also available under DOS --- but much less widely used due to the relative obscurity/unavailability of good filters to use, and crippled by the implementation of pipes --- which is basically a set of "anonymous tempfiles" with "transparent redirection" --- as opposed to Unix pipes, in which the processes are running concurrently). It is less well known but equally handy to see how the dominant Unix/Linux editors ('vi' and 'emacs') allow one to interface with "standard" Unix commands and filters.
Under 'vi' I can mark a line in my text ("[Esc]ma" to set the "a" mark), search for an arbitrary regular expression ("/regex[Enter]") and then I can filter that block of text (from my cursor to the mark) through an arbitrary Unix command (such as 'sort') using just 3 or 4 keystrokes (plus the command's name). (In my example that would be "!'a" followed by "sort." All of the lines of text would be fed to the filter, and anything returned by the filter would replace them in my text). To read the output of some simple command under 'vi' I just type "[Esc]:r!" followed by the command. Any output from such a command is inserted into my text. There are similar features in 'emacs' (just use C-u, M-| [That's C for the "Ctrl" key, M for "Meta," usually meaning the Alt key, in case you're not familiar with Emacs documentation -- Heather] to pipe a block through a filter, and C-u, M-! to read input from a command --- I have those bound to simpler commands like "[F3]!" in my startup files). Those examples are "power user tricks" --- but they point out something more important. Many Unix/Linux commands automatically and transparently use other programs in their normal operation. Thus you can type the command: tar -tzf foo.tar.gz ... and the "tar" (tape archiver) will transparently decompress the .gz file using 'gzip -d' while it extracts the "table of contents" (-t) from it. Similarly I can type a command like: tar cf otherhost:/home/myhome/new.tar ./* ... to create a "new.tar" file on a different system (tar will transparently call the 'rsh' command to let me do that --- assuming that I have set up the permissions and security to allow it --- in other words, assuming that my security is very lax). More obvious examples show up in most mail programs and newsreaders under Unix. Most of them (elm, pine, tin, trn) don't implement text editing functions. They pass your replies, compositions, and other texts to your preferred editor. Under DOS/Windows every mail package, newsreader and many other applications implement some cheesy little "editing mode" (or screen/dialog) --- each with its own quirks and none with as much power or flexibility as the old 'ed' (Unix line editor). If you installed Linux to learn about it --- then get out there and learn about the commands you've installed. Try this series of commands: cd /usr/bin/; man * --- that will bring up most of the man pages for most of the commands on your system (one at a time --- hit "q" then "Ctrl-C" to break out). Many of them are little tools, intended to be used for a small part of your overall work. On a couple of my systems I have over 2200 commands available. (From bash you can quickly find out how many commands are on your PATH by just tapping on the [Tab] key a couple of quick times to get a warning message like: "Display all 2209 possibilities? (y or n)"). Some are as simple as 'cat' (concatenate one or more file streams into standard output) and 'echo' (print a bit of text on "stdout"). Others are as complex as Perl, C, and emacs. GNU Emacs and XEmacs are complete programming and applications environments --- with hundreds of packages and about 1800 user-accessible commands (a quick way to find this is to type "M-x" and then use the same double-tap on the [Tab] key, switch buffers and count the number of entries in the "Completions" buffer that pops up --- which is easier if you have your status line displaying the line count).
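In the same spirit as the bash [Tab] trick, here's one rough way to count the distinct command names available from a shell prompt (just a sketch --- the directory list is an assumption, so substitute the directories that are actually on your PATH):

# Count distinct command names in the usual binary directories.
for d in /bin /usr/bin /usr/local/bin /usr/X11R6/bin
do
    ls "$d"
done 2>/dev/null | sort -u | wc -l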
I think there are about 1000 functions (system calls, stdio and stdlib, etc) in a standard C programming package, and there are several hundred in Perl and maybe one or two hundred in awk. Then there are other programming languages like TCL, expect, Python, etc. Luckily there's quite a bit of overlap and duplication among these --- they are essentially "dialects" of a common set of "Unix conventions." To get some idea of what's available for Linux, browse around on Freshmeat (http://www.freshmeat.org) for about a week. Note that those daily changes are new releases and updates to these programs. Also take a look at the home site for the LDP (Linux Documentation Project): http://sunsite.unc.edu/LDP. Then follow their link to one of the Linux Apps home pages: http://www.linuxapps.com/ and then round out your tour with visits to Linas Vepstas' page: http://www.linas.org/linux/ and Christopher B. Browne's pages: http://www.hex.net/~cbbrowne/ Those should give you a pretty good idea of what applications are out there. Also get into Netnews and subscribe to comp.os.linux.announce. > You can get the DosEmu program, it emulates dos and can even run > windows 3.11(unofficially) so theoretically you could run office 4.2 > or less on it but that's a stretch. However it is still in > development(just like linux) and eventually may officially run > win3.11. There is no support for win95/98 apps that I know of, do to > the structure of the OS it would require alot of time to emulate > it in all of its instability and glory. Hope that helped. reds If you really want to do it, look for an app called BOSCH (I think). With it you can run WIndows 95 (Probably 98 too) from Linux. I know that they have the opposite too. A version to run Linux from a Windows 95 box (I wouldn't recommend it). cheers, Raul Dias The name is "Bochs" and it's at: The Bochs Software Company http://std.world.com/~bochs/ FTP site: ftp://ftp.std.com/pub/bochs This implements a virtual PC under Linux or other forms of Unix (it's shareware, distributed with sources). There is also, as Raul says, a port to Win32S that allows one to run Linux under Win '95 or NT! I've never run it --- but I've heard that it is pretty slow. dosemu purportedly works for some Windows 3.1 programs --- and I know it works fine with many DOS programs (use the "alphas" --- the betas are old and less functional). There's also WABI, a commercial package developed by Sun and licensed to Caldera (the sole distributor of the Linux port). This is also limited to Windows 3.1. Search around at http://www.caldera.com for details on that. (It, like Bochs and dosemu, requires that you install the copy of Windows that you intend to run.) Finally there's WINE (Wine Is Not an Emulator, or WINdows Emulator --- depending on who you ask). WINE is an ongoing attempt to implement a full set of the MS Windows APIs, libraries and DLLs sufficient to run a typical Windows application without requiring any Microsoft code on the system. You can read more about it at: http://www.winehq.com. Another approach that might be amusing is to buy a copy of Executor (a Mac emulator for Linux) and try to run the Mac version of MS Office under that. I personally think these approaches are silly and worthless except for the most casual use (and for the amusement and research value to those that enjoy that sort of thing). ____________________________________________________ The original thread about FoxPlus appeared in July (Issue 30). ____________________________ (?) FoxPlus for Linux ?
From L.U.S.T List on 20 Aug 1998

I'd just like to thank you (Jim Dennis) for your very comprehensive and helpful responses to the XBase question. I had no idea there were so many database options available for Linux. I joined this conversation out of idle personal interest, but now I think I see some possibilities for solutions for current needs we have at our company.

(!) I'm glad to hear it. This is one of the reasons that I copy many of these messages to "The Linux Gazette's Answer Guy" column. That way they'll be found by the search engines and pop up in many (relevant) queries. It's also one of the reasons that I include so many URL's in my messages. This is also for the "teach a man to fish" philosophy (don't provide just the answer --- but pointers to lots of related answers).

Linux has grown to the point where no one knows how big it is or how many ways in which it is being used. The fact that there is no "central Linux authority" (or vendor) makes it difficult to size up the Linux market.
____________________________________________________
(?) More on Distribution Preferences
From mlees on 20 Aug 1998

Answerguy, What do you think of this distribution? Mike

OpenLinux Base OpenLinux®: A complete Linux operating system with all the system tools you'll need. Plus valuable add-ons, like Netscape® Communicator and backup utilities. US and Canadian orders can take advantage of a $20.00 rebate from Caldera, bringing the price of OpenLinux Base to $31.95

(!) I haven't used any of the Caldera distributions recently. This is a much more recent version than those that I've used. So, I don't have an informed opinion on them. Since you just asked about Yggdrasil yesterday I'm wondering if this is a pattern. I hope you aren't going to send me one of these every day.

My opinion about Caldera Standard is that it is the best choice for a site that has existing Netware servers or clients. It was also the first distribution that was supported by WordPerfect for Linux. There are a number of other commercial software companies that work with Caldera for releasing Linux versions of their products.

If the Caldera Base includes a copy of StarOffice (as your press release says it does) then that is a very good reason to try it. (The installation of StarOffice that I have from an early 4.0 CD is very unstable --- it dies quickly and horribly under my S.u.S.E. 5.1 system. I've heard that there are new libraries and releases that fix that --- but I haven't been particularly motivated to go get them since I still mostly live in text consoles).

StarOffice is a very promising product --- and the competition between it, Corel Office, and Applixware should be interesting. The most important feature of any of them is to provide me with stable, reliable access to MS Office .DOC and .XLS files. The first one to successfully do that with MS Office '97 wins my vote. (Since that is one of the few reasons for me to get out of a text console and into X --- the others being Netscape Navigator (when I need something that just doesn't look right in Lynx), 'xfig' (to draw diagrams for the book that I'm working on), and 'xdvi' and 'gv' (to preview the LaTeX and dvips output for same).)

At the same time I recognize the potential of these office suites (and some others). As these get better we'll see Linux as a more serious contender on the desktops of home and corporate users. According to some surveys we're already winning against NT in a number of server categories (including web, mail, DNS, and SMB/Samba).
We've gained a lot of ground in the technical and scientific workstation market (although the push to get EDA and CAD/CAM suites ported has just barely started). But all the "moms" and "pops" out there that have their college kids buying systems for them need something a bit less intimidating than 'emacs' and 'vi' --- and TeX and friends. KDE and GNOME will provide the main interface and many of the toys and widgets. StarOffice, Applixware, Corel Office, SIAG, LyX, Wingz, Xess, and others are all vying to provide the main user applications. (I personally think we'll also need multi-media GUI "Welcome to Linux/XFree86/KDE" and "Welcome to Linux/XFree86/GNOME" interactive tutorials --- with sound, music, video, and a dancing, talking Tux. I want a system I can install on a box and send to my Mom!).

Getting back to your implicit question: Which Linux distribution should you try? ... the answer is: I have no idea!

Unlike the marketeering weenies that you encounter in every magazine and newspaper, on every TV and radio show, and on billboards and buses every time you drive anywhere ... unlike them, I don't want to push a bunch of features on you and I have nothing to sell you (except my time --- which is pretty expensive). Helping someone select a Linux distribution (or anything else) is a matter of requirements analysis. What do you need? What do you want? How much are you willing to spend? (Time and money). It is quite possible that I would recommend FreeBSD, NetBSD, OpenBSD, BSDI/OS, or even Win '95, NT, or MS-DOS --- if I understood your requirements sufficiently.

Before you send me a list or essay on your requirements, consider that the Answer Guy is time I volunteer to show my appreciation for all the work that people like Richard Stallman, Linus Torvalds, Alan Cox, Arnold Robbins, and so many others have put into the GNU project, Linux and other freeware. I try to answer questions that I think are of broad interest to many Linux users and potential Linux users. (And possibly of interest to *BSD'ers and eventually GNU HURD'ers).

The easy answer to selecting a distribution is: pick one! Since many of them are freely distributable you might want to start with one of those. Debian and Red Hat are definitely freely accessible. I think Slackware is still available online --- and I suspect that it's perfectly O.K. to borrow a friend's copy of the CD. Walnut Creek might have exclusive rights on CD distribution of Slackware --- I don't know. I think S.u.S.E. is free for "personal" use --- although it is a bit unclear; my S.u.S.E. 5.2 manual says:

   Copyright
   This work is copyrighted [sic] by S.u.S.E. GmbH and is placed under
   conditions of the GNU General Public License. You may copy it in
   whole or in part as long as the copies retain this copyright
   statement.

... (overleaf of the title page). It's not clear whether "this work" is intended to refer to the book or to the distribution that included it. The box and CD case (4 CD's) don't list any other copyright or licensing notices that I can find. The only index entry under the term "license" points to the Appendix of their manual that contains the full text of the FSF GPL. That would suggest that you can borrow my set of S.u.S.E. CD's and install it, and would even suggest that someone could start creating derivative works (other CD sets) to sell in competition with S.u.S.E. However, I've always been under the impression that S.u.S.E. is a commercial distribution.
I purchased both of my copies of it --- 5.1 and 5.2 --- and I've purchased many copies of various Red Hat versions (the boxed set and the lower-priced archive sets). So, you might want to ask a S.u.S.E. rep before you go into production against them. However, I doubt that they'd even want you to waste their time asking if it's O.K. to install from a friend's set on an evaluation basis. You're clearly willing to buy some distribution once you find one you like.

Personally I usually select Red Hat for my customers (after I've considered their needs) simply because Red Hat has a pretty good balance of the various factors they care about. Debian has more packages (slightly) --- but the last copy of dpkg that I used was very convoluted (I'm hoping to get a 2.0 CD as soon as it goes out of beta). Slackware was nice when I needed it --- but most of my customers aren't interested in fussing with tarballs --- they want something with a decent package manager (one that can be operated easily from command lines as well as through a GUI). Under RH it's pretty simple to write a script to poll an internal FTP site for package updates and automatically apply any of them that appear. (I think there's a package called 'rpmwatch' floating around some 'contrib' directories somewhere that does precisely that). I haven't looked at RH 5.1 yet.

S.u.S.E. and Caldera both use the RPM format. S.u.S.E. includes more packages than the last couple of RH CD's I used (4.2 and 5.0). It seems to have a pretty good installation interface, though I have mixed feelings about their interpretation of the SysV init scripts. They have a large shell script named /etc/rc.config (mine is about 770 lines long --- of which about 500 are comments). This file contains a long list of shell variables and values. You can edit this file by hand or you can use YaST (Yet another Setup Tool), which is their curses-based system administration interface. The idea is that the other scripts all "source" this one file and use the variables that apply to their operation.

On the one hand this is very nice. Conceivably I could create a particular installation profile (which they support via their installation interface), install the system, configure it via YaST and put it into production. Let's assume I use the 'chattr +i +d' (immutable and no-dump) flags on all the files that came with the distribution and unset them as a pair whenever I change any of them; this would allow me to use the 'dump' program and never back up files that were from the initial installation off of the CD. This is for a "data+config" backup strategy. If I've stored the rescue floppy they created, and the rc.config file --- I should be able to restore the whole system to its configuration with just my installation CD's, my rescue diskette, and the rc.config file. (Naturally, I'll have to restore all my data as well).

Another nice thing is that I might be able to create a little script to generate new rc.config files from a master form and a couple of other data files. If I have lots of new machines trickling in I might have a few files that contain lists of IP addresses, hostnames, NIS domain names, shared printers, and other local (LAN) data. I might conceivably be able to generate a new custom rc.config file for each new box and automate even more of the deployment. Under other distributions I have to mess with over a dozen separate files. Unfortunately it's not that easy even under S.u.S.E.
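To make that "master form" idea concrete, here's the sort of thing I have in mind --- an untested sketch, and every name in it (the @TOKENS@, the variables, the file names) is a placeholder I made up, not anything S.u.S.E. actually ships:

   #!/bin/sh
   # Fill in one host's values; in real life these would come from
   # those lists of IP addresses, hostnames, NIS domain names, etc.
   HOSTNAME=newbox1
   IPADDR=192.168.1.27
   NISDOMAIN=ourlan

   # Substitute the tokens in a hand-made master file to produce a
   # per-host rc.config:
   sed -e "s/@HOSTNAME@/$HOSTNAME/g" \
       -e "s/@IPADDR@/$IPADDR/g" \
       -e "s/@NISDOMAIN@/$NISDOMAIN/g" \
       rc.config.master > rc.config.$HOSTNAME

A loop over a list of hostname/address pairs would crank out one of these for every new box.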
If you use NFS you really want to use NIS or synchronize the 'passwd' and 'group' files across your systems (since maintaining ugidd maps is not scalable, and NFS relies on the uid/gid values to determine access and permissions). None of the distributions I've seen prompt me for a passwd/group file set prior to installation. So, if I use Red Hat on one system and S.u.S.E. on another (I do) --- there will be some base files that differ between them (most of the uid's created by most of the distributions do match --- there were only a couple that I had to run through a "masschown" script). (Distribution Dudes!: This is my enhancement plea for the month! Please let me hand you a passwd/group file set --- from floppy or over ftp/nfs/http --- and use that to map the ownership as you install).

These days, for large sites, I recommend creating one "template" installation on a typical box, cutting that whole installation to tape or CD-R after configuration but before any use (or data). Now you can do all new system installations as "restores" from your backups. You can also take that opportunity to make sure that your recovery plans, rescue diskettes and backup media are all in working order. One reason I recommend that is that it takes me about four hours to fix various permissions and configurations (hosts.allow, hosts.deny, etc.) after I've completed a new installation.

One final note about choosing a distribution: don't just ask me. I'm only one person. I've only used about a half dozen Linux distributions (some of which no longer exist!). Don't just go to the newsgroups and mailing lists and ask "Which is best?" Ask questions that relate to your situation: Will you be integrating this into a Novell network? Do you have friends or family that will be working on your Linux box? Do any of them have experience with a Linux distribution? Do any of them use some other form of Unix (free or otherwise)? Do you have any particular application preferences? Is system security a concern? What are the risk profiles that are acceptable to you? What is your native language (German speakers will probably be much happier with the German S.u.S.E. or the DLD (?) distributions, Japanese users seem to prefer FreeBSD, the French have their own distribution, etc.)?
____________________________________________________
(?) IP Masquerading/Proxy?
From Peter Mastren on 20 Aug 1998

James, I appreciate your in-depth coverage of the IP Masquerading topic last month. My own private network now is able to talk through my Linux box using the techniques you described.

(!) Glad to help.

(?) I, however, can't seem to find an answer to my next problem anywhere in the literature. My Linux proxy is connected via ISDN to my employer's intranet, which itself is behind a firewall and served by a proxy server. From Linux, I can browse, telnet, ftp etc... using SOCKSified clients, i.e. rtelnet, rftp. From any other machine in my private network, I am only able to get as far as the company's intranet, but not all the way to the Internet.

(!) If your other machines were using SOCKSified clients they would probably work as well. So the first suggestion would be to find SOCKSified clients for your other systems. It is also possible to configure SOCKS (v5 at least) for multi-hop traversal (so that one zone or subnet in an organization, such as yours, can use a SOCKS server to relay traffic to another SOCKS server).

(?) How do I get modules, ip_masq_ftp.o, ip_masq_raudio.o etc... to use SOCKSified protocols?
Basically, another level of indirection is required to actually reach the Internet. Can this be done?

(!) I suppose someone could "SOCKSify" the IP Masquerading modules or use 'ipfwadm' to redirect all the appropriate traffic to custom, SOCKSified programs through the transparent proxying features. One of the features of the Linux IPFW (kernel packet filters) is a provision to redirect incoming TCP connections to a local port on the localhost, where a user space program can be attached to them. This user space program can either handle the request directly or relay/proxy the connection through whatever interfaces and protocols you'd want to build into it. I think the squid cache and the DeleGate proxy can each be configured to support this. To find out a little bit more about this redirection feature look for the "-r" switch on the 'ipfwadm' man pages.

Just off hand I don't see that the newer IP-chains code (apparently intended to replace ipfwadm in future kernels) offers any particular help for your situation. It does add significant new features to Linux packet filtering and it is well worth the work that's going into it. However, I don't see anything on its web site: http://www.adelaide.net.au/~rustcorp/ipfwchains/ ... that applies directly to your situation.

Some other work in this field is at:

   The HOWTO for IPChains
   http://www.adelaide.net.au/~rustcorp/ipfwchains/HOWTO.html

As I said, it looks like IPChains is going to be the default kernel packet filtering code for the 2.2 kernels.

   The Home of Linux IP NAT
   http://www.csn.tu-chemnitz.de/HyperNews/get/linux-ip-nat.html

(NAT --- network address translation --- is more generalized than IP masquerading. While IP masquerading implements a specific many-to-one NAT, IP NAT allows complex many-to-many translations. It might be able to co-exist with IP masquerading and/or IP Chains).

   Darren Reed's IP Filter
   http://cheops.anu.edu.au/~avalon/

This is the free filtering package used by FreeBSD and its brethren, and it is the most popular packet filtering package for Solaris and a few other forms of Unix (which don't include packet filtering in their standard kernels). Reportedly this has been successfully run under Linux as well.

As we move beyond packet filtering we look into proxying systems. We can look in at the home site of NEC SOCKS at: http://www.socks.nec.com (Just hit the "Download" link if you want the package itself). On a whim I used their "Search" link and found 844 results for "Linux" and 578 results for "Solaris". The numbers are interesting though meaningless, and I don't have time to do an analysis to say whether the disparity is good or bad for the Linux community.

We can also look at Thede Lod's "Simple SOCKS Daemon" page at: http://www.leverage.com/users/tlod/ssockd/ssockd.html This seems to be a simplified replacement/alternative to the stock SOCKS v4.x server. It seems that this has only been tested under FreeBSD --- so it might require some coding to port it to Linux.

(?) Thanks for your time and keep up the good work. Your efforts are appreciated. Peter F. Mastren
____________________________________________________
(?) The "Difficulty" is in Disabling the Services
From CARqb on 20 Aug 1998

How can I set my computer to act as a network server? I'm running RedHat 5.0 Thanks in advance.

(!) Usually you don't have to do anything special to any Unix or Linux box to enable a variety of network services.
In fact it is far more common for sysadmins to put their energy into disabling the large variety of services that are enabled by default (as every service is a potential security hole). Now that applies to services like HTTP (web servers), FTP, POP and IMAP ("post office protocol" and mail access protocols), telnet, rlogin/rsh, and various others.

If you specifically mean "network file server" then the answer is a bit more involved. There are a number of filesharing protocols that are supported by Linux. NFS is most commonly used between Unix systems. Samba is common when you want to use Unix/Linux servers in a Windows for Workgroups, Win '95, NT, OS/2 or LANMan/LANServer network. In environments with plenty of Apple Macintosh clients the natural choice would be 'netatalk' ('net Appletalk). On a LAN with DOS client systems, particularly with existing Novell Netware servers, the choice would probably be to use 'mars-nwe' (the Netware emulator).

Hopefully in the near future we'll see increased support for TCFS (a transparently cryptographic filesystem --- which is far more secure than NFS, even NFS over SRA, secure RPC authentication) and for CODA, a new, enhanced version of AFS (called the "Andrew filesystem" when it was developed at Carnegie-Mellon University, later called DFS when it was acquired(?) by Transarc --- which I gather is an IBM affiliate). Thoroughly retro Linux heads could even try the RFS (remote filesystem) package (runix100.tar.gz???). RFS was a SysV network filesystem that lost mindshare to Sun's NFS and is thus virtually unheard of today.

It is basically possible to support many of these systems concurrently on a single Linux host. A truly ambitious (and sick) sysadmin might try enabling them all. It should be obvious that the Linux philosophy is to support whatever protocol the client wants. This is vastly preferable (from the IS manager's point of view) to certain systems that try to dictate what software must be installed on all clients as part of their server licensing.

I hope this all helps. To give a better answer I'd have to know a lot more about which services you want to provide. Most of them have FAQ's and HOWTO's at the LDP mirrors like: http://www.linuxresources.com/LDP (which should be the first-stop shopping URL for every Linux user's questions --- followed by Yahoo! and its Alta Vista database at http://www.yahoo.com).
____________________________________________________
(?) How to read DVI files?
From Gregory F.I. Sewbalak on 20 Aug 1998

Dear Answer Guy, A few weeks ago I purchased S.u.S.E. 5.2. With this 6 CD box there was no user manual available, therefore I tried to read the "books.dvi" on the CD. However, I don't seem to succeed in opening this file, because I don't know which program I need to do so! So, with which program can I read the dvi-files and the psz-files?

Reminder: I used to be a Microsoft fan, but since I've tried RedHat 5.0 I've completely switched to Linux! How about that?!! Yours Sincerely, Gregory F.I. Sewbalak

(!) .dvi files are created by the TeX and LaTeX typesetting (desktop publishing) packages. These are in a "device independent" format --- and are normally processed further by printer drivers like 'dvips' or 'dvilj' (for PostScript and LaserJet printers respectively). If you have the X Window System up and running you can use 'xdvi' to "preview" these (view them on screen). You could also use 'dvips' to generate a PostScript file and use 'ghostview' or 'gv' (both are PostScript viewers for X) to view them.
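In your case the whole exercise is just a command or two (the output file name here is arbitrary):

   xdvi books.dvi                 # preview it on screen under X
   dvips -o books.ps books.dvi    # or convert it to PostScript ...
   gv books.ps                    # ... and view that (ghostview works too)
   lpr books.ps                   # or send the PostScript to your print queue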
When you have questions of this sort it's often helpful to use the 'apropos' or 'man -k' command to get a list of commands and man pages that may be related to some keyword. Thus, on my S.u.S.E. 5.1 system typing the command:

   man -k dvi

gives me:

   dvi2tty (1)   - preview a TeX DVI-file on an ordinary ascii terminal
   dvibook (1)   - rearrange pages in DVI file into signatures
   dviconcat (1) - concatenate DVI files
   dvilj (1)     - convert dvi files to PCL, for HP LaserJet printers
   dvips (1)     - convert a TeX DVI file to PostScript
   dvired (1)    - print dvi-files
   dviselect (1) - extract pages from DVI files
   dvitodvi (1)  - rearrange pages in a DVI file
   dvitype (1)   - translate a dvi file for humans
   grodvi (1)    - convert groff output to TeX dvi format
   xdvi (1)      - DVI Previewer for the X Window System

... and reading those would give me some cross references.

Allegedly there are also SVGAlib dvi and PostScript viewers --- though I've never used one. One of these days I'll hunt one down and play with it. Although my wife doesn't mind running X on the old 386 --- it seems too slow to me (I finally changed to this 150MHz Pentium system that I built because I've been using xdvi and Gnus so much).

[In fact, I'm using it right now, very effectively. But I'm cheating... I'm really using the networking power of X. Betel, our VarStation II, is running Netscape and two xterms for me, and Antares is just serving display, keyboard and pointer functions to my desktop. -- Heather]

The TeX typesetting language, and the LaTeX set of macros for it, are very popular among academic and technical publishers. The system was originally created by Donald Knuth --- the most respected professor in the field of programming. He designed it and the WEB "Literate Programming" system while he was writing the first editions of his "Art of Computer Programming" series (between the 2nd and 3rd volumes, if I recall correctly).

TeX is not just a typesetting language like troff --- it is an extensible programming language for creating typesetting commands. Leslie Lamport used this facet of the system to create a set of macros, LaTeX, to allow people to focus on a document's structure and let his macros and the TeX system do almost all of the page layout. Thus a bit of plain text like the following:

   % Template for a LaTeX Letter
   \documentclass{letter}
   \name{Jim Dennis}
   \address{903 Harriet Ave.\\ Campbell, CA 95008-5119}
   \makelabels
   \begin{document}
   %% for each letter do:
   \begin{letter}{%
       \\   % full name and title
       \\   % address
   }        % city, state, zip-code
   \vfill
   \opening{Dear %   % greeting name
   ,}
   \vfill
   \closing{Sincerely,}
   \vspace{2em}      % (room for a handwritten signature; pick a length you like)
   \signature{Jim Dennis,}
   \vfill
   \end{letter}
   %% end letter (Repeat as necessary)
   \end{document}

... is all you need to create nicely typeset letters. Basically you just fill in the blanks, run a command like:

   latex myletter.tex

... and (if all goes well) send the .dvi file to the printer (or run it through the 'dvips' command and then to the printer).

If you've been hearing about XML (the next generation "extensible" enhancements to HTML) then this should give you an idea of what they're doing. LaTeX and TeX are extensible --- and there are packages to do all kinds of interesting and specialized typesetting and layout with them (things like "chess" and "backgammon" diagramming and all sorts of scientific and technical diagrams, tables and bibliographies). (Frankly XML sounds like a bit of a mess --- something like the old LU6.2 and APPC morass that my mainframer friends used to describe).
The problem for many people is that this mode of thinking and working is totally alien. They're used to WYSIWYG interfaces and manual/visual layout (make it "look right" rather than "make it clear what you mean"). There are some efforts to provide a friendlier interface to LaTeX --- the most notable is "LyX" (pronounced "licks") and the KDE variant called KLyX. You can read more about LyX at its own web site: http://www.lyx.org

Anyway, I hope that helps.
____________________________
(?) How to read DVI files?
From Gregory F.I. Sewbalak on 22 Aug 1998

Thanks for the explanation!
____________________________________________________
(?) Bad Super-block on Filesystem
From Mike Klicpera on 20 Aug 1998

I am trying to correct a corrupted super-block on my Linux (Redhat ver. 4.2) system. When using the command "e2fsck -av /dev/hda2" the resulting message is "a bad magic number in super-block". When using the command "e2fsck -b 8193 /dev/hda2" the resulting message is "attempt to read block from filesystem resulted in short read while trying to open /dev/hda2". In neither case did the program e2fsck correct the super-block. Could you provide any advice or point me in the right direction?

(!) I'd start by looking at the partition table. Use fdisk -l [the letter ell, not the number one -- Heather] to list all the partitions that your Linux system can see. Make sure that /dev/hda2 really is supposed to be a Linux native partition (that you haven't swapped devices and the partition has moved to /dev/hdb2 --- and that it isn't a swap partition or whatever). It's also possible that you've switched from some autotranslation mode to linear (LBA) or otherwise changed how the system addresses this drive. Normally this shouldn't affect Linux --- but I don't know what sort of situation you have.

The "short read" error causes me to suspect that the partition table is wrong or that you're pointing fsck to the wrong device/slice. That's the error I get if I run debugfs on a directory or file rather than the proper /dev/ node. It's also possible to get this if you've got a partition that's listed as Linux native that has no filesystem yet made on it, or when you try the e2fsck -b on an MS-DOS filesystem.

You can try a number of other superblocks (backup copies should be scattered every 8192 blocks). In a particularly bad case you can try mke2fs -S (make superblocks and group descriptors only). This is described in the man page --- and is for "last ditch" efforts only.

If you have a tape drive or a suitably large extra disk drive you can make an "image" backup of this device before you try any other (more radical) attempts at data recovery. You'd just use a command like:

   dd if=/dev/hda2 of=/dev/nst0

... or better:

   dd if=/dev/hda | buffer -o /dev/st0

... to back up the entire drive through the "buffer" program to stream all of the data out to your SCSI tape drive. You can write the image to another block device, such as hdc3, using a command like:

   dd if=/dev/hda of=/dev/hdc3

(assuming you have a large enough blank partition on the extra or loaner drive). You can even send the data to another system with a command like:

   dd if=/dev/hda | rsh $othersystem dd of=/dev/hdc3

(or whatever). The advantage of any of these techniques is that it allows you to experiment with various recovery techniques with less chance of "making it worse" (any time you think you've "made it worse" you use the reverse commands to "start over").

There are a number of hex editors for Linux, some with nice ncurses interfaces.
These can allow you to explore a filesystem trying to find out where things are. I haven't played with any of these enough to be any good with them --- and I've never read through the sources to find out where the interesting data structures should be, or what they should look like. Eventually I'd like to see the Linux programming community produce a set of fs recovery tools to rival the Norton Utilities for DOS (for which I used to be a professional support rep). The first such tool would be one that could scan a raw device, find superblocks and report the information from them.

(?) Thanks in advance for any help.

(!) I hope it helps.
____________________________________________________
(?) Multiplexing the Computer -- ISDN Modem Connection
From Todd on 29 Jul 1998

Is it possible to have more than one process accessing a single serial port simultaneously? I have a USR Courier I ISDN, and would like to use the analog B-channel for serving faxes using Mgetty+Sendfax while the data channel is engaged. The problem is that pppd locks the port and Mgetty times out while waiting on it. Is there a way around this? Cheers, Todd

(!) The short answer is: you can NOT do what you want from standard Unix/Linux. It would have to be via some special (and probably proprietary) protocols and drivers that would have to be supported by the Courier.

It's certainly possible for multiple Unix/Linux processes to concurrently access a file or device. It's just a bad idea for serial devices. That, indeed, is the reason why we mess with lock files. Without file locking our processes will blithely step all over one another, disrupting communications.

Let's think about this a bit. You have your ISDN device (if I recall correctly the Courier I is a combined NT1 and TA with analog modem/FAX support) connected to your PC via a single serial line. When that line is carrying data, it's busy. In order for it to carry two distinct streams of data there would have to be some form of multiplexing going on. This multiplexing would have to involve some protocol. The device would have to have a method for encoding and interlacing data from these two sources --- and the system would have to have some way (some DRIVER) for de-multiplexing it (splitting the original two streams back out of the single serial stream). I've never heard of a device that does this. Any that did would have to have drivers on the PC side --- a Unix/Linux driver of this sort would undoubtedly make the one serial port appear to be two (or more) tty devices. (That would allow it to work with any standard Unix/Linux utilities).

PPP and SLIP have the effect of multiplexing multiple connections over a serial line. Theoretically a smart enough ISDN device could have its own IP address assigned to it and insert its own TCP/IP packets over your PPP/SLIP link when that was active. I've never heard of such a device.

Bonding the two B channels using MP (the multi-link protocol) simply allows your two channels to act as one high-speed interface. However this requires that both B channels connect from your end to the same point at the other end of the connection (usually the other B channel on the same physical device --- NT1 or NT2 at your ISP's end).

In addition the top speed of a standard PC serial line is 115200 bps. The total bandwidth of your two B channels is 128Kbps. Any multiplexing would involve some overhead on this bandwidth. So the PC serial line becomes a bottleneck even when you're just bonding the two B channels.
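If you want to see that ceiling for yourself, 'setserial' will report what the kernel thinks of your port (the device name below is just an example --- substitute whichever tty your TA is actually on):

   setserial -a /dev/ttyS1

On a garden-variety 16550A port the "Baud_base" it reports is 115200 --- already below the 128Kbps of two bonded B channels, before you add any multiplexing overhead.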
Another approach would be to simply have two serial ports on the ISDN TA/NT1. That would allow you to access the analog services via one tty and the digital services via the other.

The best resource I know of for info on ISDN would be Dan Kegel's pages: http://alumni.caltech.edu/~dank/isdn That should provide more than you wanted to know.
____________________________________________________
(?) Permission to Set up a Linux Server
From ChipX on 20 Aug 1998

Hi, OK, just a quick question (sort of)... My friend came over one day and we were just surfing like usual (using Win95). He asked if he could check his mail; I said "Sure." So he opens up telnet and logs onto a friend's RedHat Linux 4.2 Server. He checks mail, updates his finger, and leaves. I really need to know how to set up a server of my own. Do I need my isp's permission or some junk like that, cuz they wont be willing to give up any of their ethernet for me and my linux box :)

(!) Alright, I finally figured out what you were asking. It took a little work, though.

First note: when you set up a Linux system it defaults to providing many services. It is already a "server." What you seem to be asking is: "How do I make my server accessible via the Internet?"

As you surmised you would have to make arrangements with some ISP to have some dedicated (or at least "dial on demand") connection to the net, or to "co-locate" your hardware with them. There are a number of ISP's that provide co-location services. This is where you provide a system that they plug into their network (and power). Generally these are moderately expensive services (about $150 to $500 per month, usually with a limited average bandwidth utilization per month). Some of these plug you into their ethernet, others provide a null modem (serial) connection over which you'd configure a "local" (direct) PPP link. This allows them to effectively limit the amount of bandwidth you're using. (The latest 2.1 Linux kernels have an experimental "shaper" interface that allows one to limit bandwidth utilization on ethernet --- but I don't know of any ISP that's using that).

I know some businesses that co-locate an extra server for redundancy. If their dedicated network connection gets hit by the proverbial (and sometimes very real) 'backhoe' then their web site and mail server is still accessible to their customers. This is relatively low cost to companies that are used to paying for T-1, T-3, or fiber charges.

This brings us to the second option. You can get a dedicated connection to your home or office. These range from 28.8 dial-up over POTS (plain old telephone service) to OC-48 (optical connections --- even past 622Mbps). As you might expect most of these are prohibitively expensive for home use (not to mention potential zoning and regulatory issues). For practical purposes you have the following options for home and SOHO (small office, home office) dedicated connections:

modem over POTS: least expensive, might be as low as $130 (US) per month. Slowest. As discussed in my articles about modems you usually won't get 56Kbps out of a "56K" modem.

ISDN (Centrex or not): This is usually at least $200/mo. Centrex is a little confusing. Typically it allows you and your ISP, if you are located in the same telephone CO (central office), to have an ISDN line that is essentially an extension of your ISP's office. This typically just eliminates the "per-minute" charges of keeping the ISDN line up. It also limits your ISDN line so that it can only be used with that ISP.
(This also implies a very limited selection of ISP's for each user).

DSL: Not available in all areas. Somewhat confusing right now since it is a fairly recent offering. Basically DSL takes advantage of an old, obscure feature in the pricing structure and responsibilities of US phone companies. They used to provide "dry copper" lines (that is, telephone wires with no dial-tone or signal) to alarm companies and similar services. Using these lines and connecting DSL routers at each end (rather than alarm monitoring equipment) one can get various speeds (depending on the distances between client, CO, and ISP). DSL typically costs about $300/mo where it's available. If I was getting a DSL line I'd get it from Idiom (http://www.idiom.com) or some other Covad partner (http://www.covad.com). I know the owner and founder of Idiom, and one of the principals of Covad. Those are both SF Bay Area companies.

56K leased line: (I'm not a telco expert but I think this is the same as a "fractional T1" --- that is, a fraction, 1/24th, of a T-1 --- which in turn is a bundle of 24 channels for a total of 1.544Mbps). This is about as much as any sane person would pay to put in his or her home. They cost about $300 or more per month.

cablemodem: These are very fast, and only available in a very limited number of places. Also they frequently limit your ability to provide services (through packet filtering or by periodically disconnecting you and assigning new IP addresses). While they sound great for web browsing, they're usually a poor match for hosting your own services.

Frame Relay: I've seen these in various speeds, from 56K or 64Kbps to 1.5Mbps, and in various prices ranging from $200 per month to over $1000.

Wireless: A couple of providers in the Silicon Valley (and San Francisco Bay Area) offer wireless dedicated connections. One of them is Innetix (http://www.innetix.com).

Conceivably an ISP could provide "dialout" or "service on demand" services --- that is, they could dynamically dial your server when TCP/IP traffic is destined for your site. (It would work almost the same way that your copy of diald allows your system to dynamically call your ISP --- only the underlying routes would be different). I've never heard of a company that actually offered this service and I doubt that there's any advantage for them to do so. This would probably be quite expensive for them --- and there's probably almost no demand for it (I doubt that one customer in a thousand would understand or care about such a service --- and I can't see any pricing niche that would make it worthwhile). I only mention it as a theoretical possibility.

(?) Can I do this with X? Thanks. ChipX

(!) X is a communications protocol for windowing (GUI) and keyboard/mouse events. The X Window System provides a client/server windowing environment --- which allows programs on your local machine, and on selected remote systems, to act as clients on your "display server" (a display is one or more screens, a keyboard, and a mouse and/or other pointing device). This is why you call the program that you run on your Linux system an "X server" --- because it provides display services to programs like 'xterm', 'netscape', etc. The fact that most of these programs are usually running on the same host as the server is of no consequence to X. The X server communicates with all of its clients via sockets. Those are Unix domain sockets ("s" special nodes on your file system --- usually under /tmp) for most localhost clients, and Internet domain sockets (TCP/IP networking) for most others.
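If you've never tried it, the remote half of this takes only a couple of commands (both hostnames below are placeholders, and 'xhost' is about the crudest access control there is):

   # On the machine whose display you're sitting at:
   xhost +far.example.com

   # On the far machine (via telnet or rlogin), using a Bourne shell:
   DISPLAY=mydesktop.example.com:0; export DISPLAY
   xterm &

The xterm process runs on the far machine but draws its window on, and takes its keystrokes from, your display.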
So, I suppose you can do "this" with X (that is, you could have an ISP co-locate a server on the Internet, or you could have a dedicated connection fed into your home, such that you could allow access to an X server from any client on the Internet). This would be horrible from a security standpoint --- but that's not something you've expressed any concern about.

Shifting into "requirements analysis" mode we ask: What information, applications and resources do you want/need to make available to whom? ... which leads to a more fundamental requirements question: Who are the involved parties? (You, and each person or class of persons to whom you would like to provide access to the aforementioned resources).

You can use these two lists (resources, parties/customers) to build a table of "business relationships" (even if this isn't really a business, the principle applies) --- you relate groups/users to the resources with verbs like "read," "write," "execute," "append/add," etc. When you have a clear understanding of these things you can evaluate and prioritize them. That is to say: you can place values on each of these relationships. You may find that many of the items you listed are not really requirements --- but are really preferences or constraints. That's fine, keep them on the list.

You could then look at your possible approaches (from the list above, and by doing additional research into ISP offerings in your area). All possible designs/plans which fit your requirements without violating any of your constraints form a "solution space." This may be an empty set (there may be no solutions to your set of requirements within your stated constraints). If there are multiple options, a mapping of these overlaid on your preferences may find an optimal solution (that's why you prioritize/evaluate the preferences --- so you can do sums and scoring).

At that point you'd be in a position to do a cost/benefit analysis. Undoubtedly costs/pricing formed some of your constraints. Presumably your preference (all other things being equal) would be to pay less. However, it is possible that your costs will exceed perceived or potential benefits in such a way as to convince you to abandon the solution set (and the whole project).

Actually all you said about your requirements was that you "need to know how to ...." --- hopefully you now "know"; presumably you are, or were, considering actually setting something up and I'll have to guess beyond that. All I can guess about your requirements is that you want to be able to remotely get your mail, telnet to your machine, and update your .plan (finger info). You currently think you want to be able to do this "over the Internet." I'm not sure that you've really considered alternatives regarding this last one.

If you connect a modem to your Linux box at home you can dial in and use it from anywhere that you can get at a modem and dial your home number. Unless you are a real globetrotter your home is probably a local call most of the time. In addition, if your area has "Ricochet" or "Metricom" or [Ricochet is the product sold or leased by Metricom. -- Heather] any similar service, it may be that you can get a wireless "modem" (which provides a Hayes compatible AT command set and serial interface to your computer) with optional dial out service. (This allows you to use a "Ricochet" on your laptop, from the local coffee house or wherever you can get a signal, to dial into your machine at home).

Actually, oddly enough, this service has a strange idea of locality.
I subscribe to it in the SF Bay Area. This lets me dial to any modem number in the 408, 415, 650, 510, and nearby area codes. It also allows me to dial to 800 numbers. I can dial to these, toll free and without connect time charges, from anywhere that Metricom's service extends. Thus I've dialed into my home computer from the Burbank Airport near L.A. and from a hotel lobby in Seattle while I was at a USENIX conference.

Another thing that's not evident from your question is just what benefits you hope to get from all of this. Is it just "coolness" --- so you can do the same thing your friend did? If so, see if you can get an account on this other friend's machine. Is it convenience? Do you have any security concerns? How much is it worth to have this much "coolness" or convenience?
____________________________________________________
(?) Detaching and Re-attaching to Interactive Background Processes
From Lawrence Tung on 16 Aug 1998

Hi, James: When I run a background interactive process, e.g. ncftp, I logout. Is it possible to reattach this process to a terminal again after I relogin? Lawrence

(!) Look for a program called 'screen'. It's included with most distributions and it's available at the UNC repository (Sunsite) at: http://sunsite.unc.edu/pub/Linux/utils/console (which technically isn't the best place for it since it isn't a console-specific utility --- you can use it from any terminal, dial-up line, X terminal, or over telnet and rlogin sessions).

'screen' gives you the ability to multiplex a number of interactive shell sessions through a single terminal session. You reserve one keystroke ([Ctrl]+[A] by default) which is the meta key that provides access to all 'screen' functions. Thus the key sequence [Ctrl]+[A], [D] will "detach" your currently running screen session from the current terminal connection. To re-attach later (from that terminal session or any other) you use the 'screen -r' command.

When you first start 'screen' you might think at first that "nothing" happened. It normally just starts a single shell session. If you start a command (such as 'vi' or 'emacs') and use the key sequence [Ctrl]+[A], [C] you'll "create" a new session. You can now toggle between the current subsession and the most recently active one by typing [Ctrl]+[A] twice. You can "cycle" from one screen to the "next" (through all of them in a round robin fashion) by typing [Ctrl]+[A], [Space] ... and you can get to any of your subsessions (interactive windows) using the meta key ([Ctrl]+[A]) followed by a single digit. You can have up to ten 'screen' subsessions for any screen session.

You can even have multiple screen sessions detached. Your 'screen -r' command will list the PID's/sockets of the sessions that you own and will let you specify the PID (socket extension) of the one you want to resume.

To send a [Ctrl]+[A] through to your applications (to move your point/cursor to the beginning of an 'emacs' line, for example) you'd type [Ctrl]+[A], [A]. (Note that the first is a "control" character and the next is just the normal, unshifted, letter). Of course you can change the "meta" key --- but I like it just the way it is. There are the usual sorts of command line options, and there are also "colon" commands to set many parameters, modes, and options.

When you "detach" a screen session all of the "windows" or "subsessions" are left running. They are not suspended (as they'd be if you used [Ctrl]+[Z] on them).
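Here's the whole detach/re-attach cycle in a nutshell (the ftp site is just a placeholder):

   screen                    # start a session; you get a shell inside it
   ncftp ftp.example.com     # start your long-running interactive job
   # ... press [Ctrl]+[A], [D] to detach, log out, go home, log back in ...
   screen -ls                # list your detached sessions
   screen -r                 # re-attach (give the PID if there's more than one)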
There are many other 'screen' features:

   * Interactive backscroll (using 'vi' keys for paging, searching, and
     scrolling).
   * Keyboard driven cut and paste of anything in the backscroll buffer.
   * "Screen shot" (quick paste of the current screen to a file).
   * "Log to file" (open a log file for a given subsession --- similar
     to the 'script' typescript command).
   * Command/shell driven creation of new subsessions (if you issue a
     'screen' command from within a 'screen' session you can start a
     command asynchronously in a new subsession).

... and many others. About the only three features I think are missing from 'screen' are:

   * support for 'split'ting your terminal (to see parts of two
     subsessions concurrently --- either top and bottom, or
     side-by-side). (You can run 'splitvt' under 'screen' to get this
     effect).
   * embedded 'expect'/TCL support for context sensitive keyboard
     remapping (although you can run 'expect' under 'screen', spawning
     other programs and using the same 'interact' logic you would in
     any other 'expect' script).
   * multi-user "co-pilot" (allowing two or more users to "share" a
     'screen' session).

It appears that the authors were playing with some "co-pilot" features and gave up on them. Some of the colon commands have to do with controlling in-memory "access control lists" (ACL's), which are apparently preliminary support to allow multiple users, on multiple terminals, to have concurrent access to one screen session (with some or all of them being "read-only" and others having full "read-write" to the session).

It is possible to do very simple two-way collaboration using 'kibitz' (an 'expect' script that's included with some Linux distributions). It's a bit clunky but functional. However it does give the "kibitzer" full terminal access to your session, including the ability to kill or suspend a running program and access to your shell. Thus 'kibitz' (as it's written) should be used only to work with trusted parties (someone you'd let sit down at your keyboard with access to one of your login sessions).

It might be possible to modify 'kibitz' for read-only access. This would be conceptually similar to running a command like:

   script /tmp/mytypescript.broadcast

... and having your 'clients' run the command:

   tail -f /tmp/mytypescript.broadcast

... except that it would flush its buffers far more frequently. (If you actually try running the 'script/tail' pair as shown --- or you use 'screen's "log to file" feature and slap a tail -f on that file --- you'll see that a few kilobytes of buffers are not written to the tail -f process until enough other activity has occurred on that screen/subsession or shell.) By comparison 'kibitz' writes synchronously --- it flushes each character out to its file (which I think is actually a Unix domain socket rather than a regular file).

So, the things that I'd like to see added to 'screen' are merely a consolidation of tools that are already available, and that all seem to be completely compatible with it. In any event, I highly recommend 'screen' --- it's the closest thing to DESQview that I've ever found for Unix.
____________________________
(?) Detaching and Re-attaching to Interactive Background Processes
From Lawrence Tung on 17 Aug 1998

Hi, Jim: Thanks for your reply. I've used "screen" before. I need to start "screen" before I "detach" the terminal, right? Is it possible to reattach a process to a terminal after you relogin without starting any utilities in advance? Lawrence

(!) Not under current versions of Linux.
Conceptually you could have a kernel feature that allowed you to do something like this --- for example if you were using a 'streams' based terminal driver (or some custom terminal driver). Basically the gist of it is that you have to know what you want in advance. There is no "normal" (standard or conventional) Unix/Linux mechanism for this --- so you either have to use a utility for it --- or you have to select an implementation that has "nonstandard" features. Frankly I don't know of any Unix that does support this sort of thing (except via 'screen' of course).

As a couple of side notes: ncftp has direct support for background and "re-dial" file fetching. Some of these features are enhanced in the newer versions at ncftp.com. If you need this sort of "attach and detach" feature for X, the best I've been able to find thus far is to use the VNC X server and attach to it via the VNC client of your choice. Look at http://www.orl.co.uk/vnc for details.
____________________________________________________
Mark Heath asked about a Disk At Once CDR driver in July (Issue 30) and described different kinds of CDR sessions in August (Issue 31). The following was found in the comp.os.linux.announce newsgroup, and should be very useful to those who've been following this thread...
____________________________
(!) Cdrdao 1.0
From The Answer Guy on 18 Aug 1998

Mark, Hopefully this package will fit your needs. Please consider writing up a review of it and submitting that to Linux Journal and/or Linux Gazette.
____________________________
(!) Cdrdao 1.0 - Disc-at-once writing of audio CD-Rs
From Andreas Mueller to the comp.os.linux.announce newsgroup on 5 Aug 1998

-----BEGIN PGP SIGNED MESSAGE-----

This is the first release of cdrdao - a tool for writing audio CD-Rs in disc-at-once mode. Currently only the Philips CDD2600 writer is supported but it may work with other Philips writers, too. Contributions for other CD writers are welcome.

Features:
   * variable pre-gap length (down to zero)
   * non zero audio data in pre-gaps
   * control over sub-channel information like ISRC code, pre-emphasis
   * tracks may be composed of several audio files
   * support for audio CD copy

Andreas

LSM entry:

   Begin3
   Title:          cdrdao
   Version:        1.0
   Entered-date:   03AUG1998
   Description:    Writes audio CD-Rs in disc-at-once (DAO) mode allowing
                   control over pre-gaps (length down to 0, nonzero audio
                   data) and sub-channel information like ISRC codes. All
                   data that is written to the disc must be specified with
                   a text file. Audio data may be in WAVE or raw format.
   Keywords:       CD-recording, audio, disc-at-once
   Author:         mueller@daneb.ping.de (Andreas Mueller)
   Maintained-by:  mueller@daneb.ping.de (Andreas Mueller)
   Primary-site:   sunsite.unc.edu /pub/Linux/utils/disk-management/
                   67 kB cdrdao-1.0.src.tar.gz
                   103 kB cdrdao-1.0.bin.x86.linux.tar.gz
   Alternate-site:
   Platforms:      Linux
   Supported hardware: Philips CDD 2600
   Copying-policy: GPL
   End

[PGP and other moderation details omitted for clarity]
This group is archived at http://www.iki.fi/mjr/linux/cola.html
____________________________________________________
(?) High Speed Serial (RS422) under Linux
From Sujeetharan Sivasubramaniyam on 18 Aug 1998

Hi, I want to use an RS422 card under Linux. I am not sure whether this is supported by Linux already. Also I would like to see some documentation on this. Any pointers will be greatly appreciated! Thanks

(!) Since you don't mention which specific card you are hoping to use, I'll assume that you are willing to buy a new/supported card if necessary.
Naturally, with any question about supported hardware, the first place to check is the Hardware-HOWTO (http://sunsite.unc.edu/LDP/HOWTO/Hardware-HOWTO.html). At first we might be led astray (searching on serial gets us to section 26.6: "Modems and Serial Cards" --- but that's in reference to PCMCIA cards). However, we don't find any references to 422 under "Controllers, I/O" and "Controllers, Multiport" nor anywhere else in the HOWTO. Running the Excite search on the LDP site using the string "422" doesn't net any useful results either.

So, we have to dig a bit deeper. A Yahoo! search using the string "+rs422 +linux" nets about 53 hits. The first of these is from NTLUG (North Texas Linux User's Group), which lists a company called Vikom, in Irving, TX, with the note: "manufactures RS232 and RS422 multiport cards for Linux" (http://www.ntlug.org/lint/index.html). ... No URL or contact information is provided. Another result of this Yahoo! search is for GTek Inc. (a long-time manufacturer of serial hardware for PC's). They list a Cyclone 6 and note that there are "RS422 and RS485 interfaces available" and separately they say: "Drivers are available for all popular operating systems: WindowsNT, Windows95, Linux, OS/2 and DOS." http://www.gtek.com/news.html. (Granted, this second lead is buried under about a dozen unlikely results that I didn't follow).

You might also search the Equinox, Digi, Specialix, and other web pages for manufacturers of specialized and multi-port serial hardware to see if they have something that suits your needs.

Presumably there is some RS422 support built into the kernel --- at least in the PPC (PowerPC) port. As far as I know most Macs and Mac clones shipped with built-in RS-422 (a.k.a. EIA-422) ports for their modems (and printers?). I also found a note that seems to confirm my own experience --- that it is possible to interface RS-422 connectors to most RS-232 devices using the proper cabling (the classic "Mac to PC modem" cable). There is even one message about using a special cable to connect an Apple printer to a PC's serial port. This suggests that some RS-422 devices can be interfaced to RS-232, as well. (I mention this since your requirements aren't clear from your message --- potentially you don't need an RS-422 interface at all).

Obviously if you already have a specific device and you are interested in details about that RS-422 card you'll have to provide details about it. Since I don't have any first-hand experience with any RS-422 hardware on PC's, and I've shown exactly how I did my searches, it would make sense to do your own searches for support on your card. If you can't find a manufacturer's label or any docs on the card (i.e. you inherited it in "used" condition) then you're probably out of luck.
____________________________________________________
[We seem to be coming in at the middle of this thread, but I don't see where the previous might be found. We'll have to make do with what's been quoted here. Sorry, readers! -- Heather]
____________________________
(?) ANOTHER MODEM PROB
From CodeWaRi0r on 18 Aug 1998

> It sounds like a "winmodem." (These are devices which don't support
> the standard AT command set and which require proprietary drivers in
> order to function. Currently those drivers are only available for
> Windows --- I gather that some of them don't even work under Windows
> NT).
> The acid test for this is to try to use the modem under DOS (boot
> from DOS and try to run a program like Telix).
> If that also can't "see" the modem --- then it's not a real modem;
> it's a "winmodem." Although I've read rumors about an effort to
> reverse engineer the "Rockwell chipset" (RPI) --- which is the one
> commonly used in these "winmodems" --- I believe that your only
> reliable recourse for now is to exchange this piece of junk for a
> real modem.

nope.. im sure i dont have a winmodem. actually, I have to modems on my computer .. one winmodem on COM1 and a regular modem by BOCA on COM3.. ok.. just help me out with one more thing and ill be out of your hair :) ok.. ive established that i dont have a winmodem.. i start up X. run the modem configurer (this program makes a link between /dev/modem and cua2.. or so it says... i just select the COM port and press ok and thats it. Am i missing a step here?)

(!) It could also be an IRQ conflict --- on a conventional PC COM1 and COM3 share IRQ's with COM4 and COM2 respectively.

I don't know how you're sure that you don't have a winmodem --- unless it's external, or you've been able to connect to it under DOS or Linux with standard AT commands. However, I'll assume for the moment that this is the case.

I would leave X completely out of this for now. I wouldn't use any "modem configurator" (that's just one more piece of junk between me and the troubleshooting). One way to do this testing is to run minicom or C-Kermit to connect to the modem. Then you should be able to type AT commands directly to the modem and get responses back.

Before you do this, look at the nodes under the /dev/ directory with the 'ls -l' command. Make sure that they look something like:

   lrwxrwxrwx   1 root     uucp            5 Jul 13 16:52 /dev/modem -> ttyS2
   crw-rw----   1 root     uucp       4,  64 Aug 18 15:17 /dev/ttyS0
   crw-rw----   1 root     uucp       4,  65 Aug 18 15:17 /dev/ttyS1
   crw-rw----   1 root     uucp       4,  66 Nov 30  1997 /dev/ttyS2
   crw-rw----   1 root     uucp       4,  67 Nov 30  1997 /dev/ttyS3

... where the "modem" entry is a symlink to the appropriate serial port (ttyS2 for a normal PC's COM3).

You should also make sure that the permissions on most of your communications software are such that it is run by 'root' (such as pppd) or is at least SGID 'uucp' (you could chgrp all these to 'modem', create a 'modem' group, etc. --- I use 'uucp' since that is an older Unix convention). So, an 'ls -l' of your minicom, kermit, uucico, pppd, and chat should look something like this:

   -rwxr-sr-x   1 root     uucp       119280 Nov 30  1997 /usr/bin/minicom
   -r-sr-sr-x   1 uucp     uucp       395148 Nov 30  1997 /usr/lib/uucp/uucico
   -r-xr-sr-x   1 root     uucp       632609 Feb 10  1997 /usr/local/bin/kermit
   -rwsr-x---   1 root     dialout     83856 Nov 30  1997 /usr/sbin/pppd
   -rwxr-xr-x   1 root     bin         12224 Nov 30  1997 /usr/sbin/chat

Note that minicom, uucico and kermit are SGID uucp --- they don't need higher privileges. uucico is the actual communications engine for the uucp suite --- it's normally only called by the uucp command, but it needs access to some directories (usually /var/spool/uucp* or /usr/spool/uucp*) to work properly. So we make it SUID to the uucp user as well as SGID to the uucp group.

You can ignore all of those details for pppd since it must be run as root (in order to set IP addresses and add routing entries). Here I've set it to be in the 'dialout' group --- so that I can restrict who can execute it (thus limiting who could try to use exploits on pppd to gain 'root' privileges). This is a simple and far too uncommon way to narrow the security holes on a system.

Note that the 'chat' program gets no special permissions.
The 'chat' program is invoked by pppd, which is already running as 'root' --- and thus it needs no "extra" privileges. (?) so i do that, and quit X. I run chat with the following script TIMEOUT 5 "" ATZ OK ATDT6161038 ABORT "NO CARRIER" ABORT BUSY ABORT "NO DIALTONE" ABORT WAITING TIMEOUT 45 CONNECT "" TIMEOUT 5 "ogin:" blah "assword:" blah i run chat $chat -f /etc/ppp/chatscript (where i store the above script) i see linux go: AT (!) Aha! That's your problem. You aren't supposed to run 'chat' directly. You are supposed to run pppd, and let it invoke the 'chat'. This is due to some technical internals about how file descriptors are inherited. The 'chat' program writes to its 'stdout' file descriptor and expects input (modem responses) from its 'stdin'. For that to work you have to provide it with suitable open standard files. One way to do that would be to type:

chat -V "" "ATZ" "OK" < /dev/modem > /dev/modem

... which engages in the simplest meaningful dialog that I know of with a Hayes compatible modem. It simply "waits for an empty string" (nothing), sends an ATZ, and waits for an "OK". The -V switch here tells 'chat' to be verbose to 'stderr' --- which will, in this case, be our console/terminal. On my system that command returns:

AT&FS2=255

... and an exit value (errorlevel) of 0 (no error). This isn't exactly the output I would expect from my interpretation of the man page. But it's close enough (it suggests that my modem returns this string when it receives an ATZ command --- which suggests that this modem translates ATZ into "AT&F" (return to factory settings) and sets the S2 register to 255). If I issue the command:

chat -V "" "ATZ" "OK" < /dev/ttyS3 > /dev/ttyS3

... (a serial line with no modem attached) I get no output and a return error of 3. In any event you need to let pppd start your 'chat' command. The way to do that is to provide pppd with a suitable "options" file. In my case I have multiple PPP accounts (some with customers, some with ISP's). So I create an /etc/ppp/options file that just consists of one line:

lock

... and I create different supplemental options files that contain the details specific to a given provider. For example one looks pretty much like:

asyncmap 0
crtscts
defaultroute
mtu 296
mru 296
modem
/dev/modem
57600
connect "/usr/sbin/chat -f /etc/ppp/options.myisp"

I'll skip the first several lines (you can read the PPP-HOWTO and/or the pppd man pages for details about those) and focus on the last couple of lines. The modem directive tells pppd to use serial line settings (ioctls?, stty) that are appropriate for a modem. The other option would be "local" (use a null modem cable or some sort of network connection). This has purely to do with how the line is "conditioned" and how the handshaking lines are handled. The /dev/modem parameter tells it which device to open (and thus what file descriptors to pass to the command that will be invoked via the "connect" directive). The 57600 is simply the speed to which pppd should set the serial line (I presume this is also used as part of an 'stty'-like function call). The last line is the "connect" directive. It tells pppd what command to invoke to deal with the modem. So, you then invoke pppd with a command like the following:

pppd file /etc/ppp/options.myisp

... this processes the "global" options in /etc/ppp/options and then the options that you've specified with the file directive. Many examples I've seen specify many of the options on the command line.
For example it's pretty common to see a command like: pppd modem /dev/modem 57600 file /usr/local/etc/ppprc ... which should be reasonably comprehensible once you understand that pppd basically interprets all its command line in the same way as it processes directives in it's options files. Carefully reading the 'man' page should clarify what order and precedence affect the processing of all of these directives and options. For example, you have to be sure to avoid any conflicting options ~/.ppprc (just get rid of any such file unless you know what you're doing with it). Another simple testing trick is to use 'minicom' to dial the phone and establish your connection (log in). Then use the "Quit without Resetting the Line" option (using the [Ctrl]+[A], [Q] key sequence). This should dump you out of minicom and back to a shell prompt without disconnecting your modem. (It is then possible to invoke pppd on that line --- using an alternative version of the ISP options file without any "connect" directive). That trick doesn't work with kermit --- it won't exit without resetting the communications line. From what Frank de la Cruz tells me you can't use C-Kermit as a replacement for 'chat' because of this. Basically he says it violates some programming standards to do this. (I still don't understand that --- but it's not currently a priority to me. If someone understands it and wants to explain --- write an article and send me a copy). but the modem doesnt respond... i know its weird.. but i just dont think linux knows its there.. is the "linking" i do in X all that's necessary? or am i missing something? The problem is that you haven't understood the PPP-HOWTO. I can sympathize with that --- I spent quite a few hours banging my head on it (at least an earlier version of it). However, try reading it again. There are several GUI and dialog driven frontends to configure your PPP connection for you. I've never used any of them (they didn't exist back when I was doing my head banging). In any event a search of the Linux Software Map using Boutell's search page (http://www.boutell.com/lsm) on the term PPP will give a list of several of these and a couple dozen related samples and utilities. There are KDE, Motif, and Tk interfaces (among others). You could try a few of those to see if any of them works for you. You should also be able to get your ISP to help. If they refuse to help with Linux/Unix ppp technical support --- consider getting a new ISP. (!) again sorry to bother u. (!) The question doesn't bother me. No apologies are necessary for that. If I seemed grumpy, it's because I don't like to see this sort of "hakkerboyz" text. I don't think it's cool and I do find it difficult to read. It is only a bit less offensive than GETTING MESSAGES IN ALL CAPS. People who won't take the time to use reasonable punctuation and spelling in their questions cause me to wonder why I should take the time to answer them. I'm sure that many don't even pause to wonder; they just hit the delete key and move on. I have no idea how old you are, or why you choose to write this way. Perhaps you have some very good reasons --- and perhaps you think I'm some sort of pedantic curmudgeon. You're welcome to hold and share that opinion (and I'll even agree with part of it). However, one of the few liberties I take in this column is the opportunity to occasional jump up on my soapbox and express my opinion. 
The evidence is that some of my readers find at least some of my tirades amusing, even (according to some reports) "inspiring." For those that don't like it --- I can only say "Well, I did at least attempt to answer the question." (I don't remember of any occasion where I just flamed someone without answering their question. I just won't say "RTFM" --- I'll at least say which "FM" to "R"). Anyway, I hope that the hint about how to invoke 'chat' helps. If that doesn't work (and direct invocations with the redirection as shown, and 'minicom' tests) all don't work, ____________________________ (?) More on Grammar From CodeWaRi0r on 18 Aug 1998 BTW: I apologize for my sloppy spelling and grammer... it's just the internet talk that does that to you :) Despite my erronious writing (I'm sure I spelled that wrong.. LoL), I'm an accompilshed programmer in C\C++ (including Window's MFC and UNIX), Visual Basic, Perl, and Assembly languages (!) Thanks. I knew you could do it. I do recognize that particular style of writing as a symptom of too much time in "chat lounges" (or in IRC depending on your preferences). However, it is still difficult to read --- and it still does say something about the relative importance you place on a given communication. (Certainly you wouldn't expect a resume' to be taken seriously if it was written in this style). Incidentally the word is normally rendered as "erroneously." If you are an emacs user you can "quick check" a word using the M-$ key binding. Since I use 'viper' mode, which makes it irritating to get at the [Esc] for use as [Meta] I add the following binding to my .emacs file: (global-set-key '[f3 ?$] 'ispell-word) (global-set-key '[f3 ?%] 'ispell-buffer) ... which means that the two key sequence: [F3], [$] will check the word at point and [F3], [%] will check the whole buffer. (I suspect that vim also has some features for running ispell --- though I've never used any of the IMprovements of that editor). The fact that using emacs (xemacs, actually) gives me 'ispell' in all of my editing (including in my favorite mail reader, mh-e, and my preferred newsreader, Gnus) is one of the reasons why I use it. I personally despise the default emacs key bindings (which I think are designed to torture the pinky until you're ready to chop it off). So, I use viper-mode (a 'vi' emulation package) for the majority of my editing --- and I supplement it with a fairly long list of custom binding, most of which start with [F3] (the first available function key that had nothing assigned to it --- [F1] was used as a "Help" prefix and [F2] had some weird two column mode function bound to it). So, I switch buffers with [F3] [b] and bring up a "buffer menu" with [F3] [B] (capitalized). I bring up the 'emacs' calendar package with [F3] [C] (capitalized) and a "shell-mode" buffer with [F3] [c] (lower case --- for "command"). I check my diary (appointment list) with [F3] [D] (capitalized), and bring up "dired" (file-manager buffers) with [F3] [d] (uncapitalized). I "unsplit" my display with [F3] [1] and split it vertically with [F3] [2] or horizontally with [F3] [4]. (The experienced 'emacs' buff may note that most of these are functions that are normally accessed via C-x keybindings; a prefix that I find truly abominable). But ... I'm rambling. Some day maybe I'll write an article on how I use 'xemacs' --- it may be amusing to some. ____________________________________________________ (?) 
/usr/bin/open not found From Jesus Alejandre on 16 Aug 1998 I tried to use your solution in "User Shell on Virtual Console 1" (Linux Gazette issue 30), but the command /usr/bin/open doesn't exist in my system. I installed it from S.u.s.e. 5.2 , kernel 2.0.33 distribution. Could you please tell me where can I find it?. Thank you very much. Jesús Alejandre. (!) It should be on the CD's somewhere. I've got it on my S.u.S.E. 5.1 and my 5.2 systems, and I've seen it on several of my old Red Hat 4.x and 5.x systems over the years. You can also look under: http://sunsite.unc.edu/pub/Linux/utils/console ... which will get you a "tarball" (rather than an RPM). This probably contains source code which you'd compile for yourself. If you'd prefer to avoid that you could look in the "contrib" directories at Red Hat, its mirrors, and at S.u.S.E. That might net you an RPM if you have a strong preference for using the package manager. ____________________________________________________ (?) Tuning X to work with your Monitor From Alan Morton on the L.U.S.T Mailing List on 14 Aug 1998 I am wondering if anyone on the list has any experience with a vid card/monitor combination similar to mine, or have any experience with this particular problem that can either give me some constructive advice or can point me to some relevent documentation. I am using a Jaton 67-P video card with 4 megs of RAM. I am trying to use it with a HP 4033A monitor. The problem is that whem I use Xconfigurator (I'm using a RH5.0 distro) irregardless of what it says about an acceptable configuration, when I start up X it trips the Power Saving function on the monitor and it goes blank. When I use the cntrl-alt-backspace it naturally kills X, plus it retrips the monitor and my screen begins to reappear. I am making the assumption that it is engaging some error procedure in the monitor, which is fine. But I really want to get this working at a reasonable resolution. If I use a setting of 800x600 and 8 bit color it works just fine, but that seems a bit wasted on a 21" monitor. I know the monitor will handle a lot because I can use it at 1024x748 in 16 bit color with my Windows NT boot-up. And I have a friend with an identical monitor running 1280x1024 in 24 bit color under Win95. I don't think the problem is the card because I have been running it with a 14" Magitronic monitor previously at 800x600 in 24 bit under both NT and Linux with no problems. Any assistance with this will be greatly appreciated. Alan Morton (!) The problem you are having is related to the refresh rate that X is trying to use for the mode you are trying to display. The rate may be either too high or too low. I am not sure exactly about the specifics of what needs to be done to change the refresh rate that X uses, but I'm sure there is a good source of information for configuring the display properties of X. Hopefully this will narrow your scope in searching for an answer, because I don't really know what else would cause this problem. Any other suggestion welcome. Configuring X on a system is like a classic "three-body problem." You have to get the correct software and libraries all installed (which any distribution makes pretty easy). You have to select the correct "server" for your video card.
This is complicated by the number of video card manufacturers --- many of whom have many models of video card, often identified by a stream of very similar-sounding digits and letters while using completely different combinations of chipsets, clock chips, and RAMDAC's (digital to analog converters). You have to provide a "tuned" set of video timings. This is primarily dependent on your monitor. Getting all of these things to work together is still a pain. However, the xf86config, SuperProbe, and Xconfigurator utilities have helped a lot. I've heard that Eric S. Raymond's Video-Timings-HOWTO has helped a lot of people (though I've never used it myself). I've played with 'xvidtune' (an X program that allows you to tune your video timings after you've gotten "close enough") --- but it's still a bit mysterious to me. (How do I get it to write the timings out to a format I can use in my XF86Config?) I usually just use a "hit and miss" approach by looking through the list of samples in /usr/X11R6/lib/X11/doc/Monitors (or thereabouts). The one on my S.u.S.E. box lists about 65 different samples for various monitors. I just play with them a bit while I hand hack on the /etc/X11/XF86Config file (which is probably the least efficient way of doing it --- but usually Xconfigurator and a good multisync monitor get along O.K. so I usually don't have to fuss too much). ____________________________________________________ (?) The End of libc5: A Mini-Interview with H.J. Lu From The Answer Guy on 29 Jul 1998 H.J., I hate to bother you with more discussion on this topic. However, I'd like to have a definitive posting for the Linux Gazette (I do the Answer Guy column there) to allay people's concerns about this migration. So, I'd like to ask a couple of questions: Is there a definitive archive or document on the web that you feel accurately answers most questions about the glibc vs. libc 5 controversy? (!) No. I have posted a few articles from time to time. As far as I know, glibc 2 is very good now. The only problem it has is that some Linux-specific programs use the kernel header files directly. That won't work with glibc 2. We have to change those programs and we have to add those missing pieces to glibc 2 if necessary. So far, everything looks good. RedHat and Debian have built their entire Linux distributions on glibc 2. (?) It is my impression that libc.5 and glibc can co-exist on a system concurrently and transparently. Is that so? (!) Yes. (?) Are there exceptional cases? (!) All those binaries compiled by egcs and gcc 2.7.2.3 should be ok. But if you have a C++ binary which uses libg++ and was compiled by gcc 2.7.x.x other than 2.7.2.3, it may load the wrong libg++.so. But I don't know if there is a case where you cannot recompile it. (?) I gather that a couple of the major advantages to glibc have to do with: support for NIS, thread-safe library calls, transparent support for the shadow password suite in the getpwent() and related functions, and much easier compatibility with other GNU sources (without requiring as much porting effort on the one side nor as much library maintenance by you, personally, on the other). Is that all true? Are there other compelling advantages to glibc over libc.5? (!) How about conforming to all those standards, like ANSI C, POSIX, XOPEN and UNIX98? (?) I've heard complaints that glibc takes up significantly more disk space and run-time core (RAM). I've also heard (!) glibc is compiled with -g by default so it does take more disk space. The hard drive is cheap. I'd like to keep the debug info in the library.
In any case, you can run strip on them. But I won't recommend it unless you are building a very small Linux installation. As for memory, glibc 2 has more stuff than libc 5. It has to be bigger than libc 5. I don't call it libc 6 if it is smaller than libc 5. But Unix has demand paging. Those unused portions won't be loaded into memory anyway. (?) that running 'strip' on it can significantly reduce its disk footprint. Where can I find a more comprehensive comparison of the speed, disk space and memory requirements of the two sets of libraries? (!) We have to pay the price on speed and size for an MT-safe C library. But there are so many new optimizations in glibc 2. I would say glibc 2 is faster than libc 5. As for disk space, it depends on whether you want to run strip on it. By default, glibc 2 provides much more debug info than libc 5. With libc 5, when something goes wrong with the C library, I have to use a special C library just to debug it. With glibc 2, I just run gdb on the binary compiled with -g. As for memory requirements, I believe it is similar to libc 5. (?) I presume that someone else could (if they wanted) take up the maintenance and continue to improve libc.5 (in the spirit of the GPL) if they really wanted to do so. Is there (!) Sure. Anyone is free to do so. (?) any argument you would present to, or question you would ask of, someone who was proposing to do this? (!) libc 5 is quite stable. Most of the fixable bugs are fixed. All the new development has gone to glibc 2. glibc 2 also has fixed those bugs which are hard to fix in libc 5. I don't see why we should spend more time on libc 5. (?) Last one: Now that you're free of libc.5, are there any projects that you are particularly interested in pursuing (open source or otherwise)? (!) I have been working on egcs/gcc, binutils and gdb for Linux. I will keep doing so. In the meantime, I am very interested in projects using C++, threads and distributed technology, something like CORBA. I have been thinking about a new paradigm for Unix. Instead of a C or C++ library, we use something along the lines of CORBA or DCOM. Another thing: every program is compiled with -fPIC as a shared library, i.e.:

# gcc -shared -o ls.so -fPIC ls.c
......
# gcc -o ls ls.so

Now, we have both ls and ls.so. Another program can do

int (*ls_main) (int, char **);
void *handle;

handle = dlopen ("ls.so", ...);
ls_main = dlsym (handle, "main");
ls_main (argc, argv);

We can also use it instead of an expensive system ("ls"); It can go with the CORBA/COM idea. That is, we define an object oriented interface for a service, which can be a local shared library or a different process, which can be local or remote. (?) With your permission I'd like to submit your answers for possible publication in next month's Linux Gazette (which is available online and distributed under the terms of the LDP GPL). Please feel free to add any comments you like to your response (or to tell me to buzz off, for that matter). (!) It is fine with me. (?) Regardless of your response I'd like to personally express my gratitude for all the work you've done for Linux over these last few years. If you're ever in the SF Bay Area, ____________________________________________________ (?) Where to put 'insmod' and 'modprobe' Commands for Start-up From anonymous on 14 Aug 1998 If it entertains you, a couple of questions: Where the bleep should one specify modules to be installed when a system boots? I can't find it stated directly in any of the books, maybe /lib/modules/default ? (!) There are three ways to do this.
The simplest is to load and unload the modules as you need them (thus you find your first 'ifconfig' command and insert an 'insmod' or a 'modprobe' command for your ethernet card before it; you change your ppp startup script to load the ppp module, etc). Another way is to put all of your 'modprobe' or 'insmod' commands in some file like /etc/rc.d/init.d/modules and call that from one of your early rc scripts. You can trace through these rc scripts by starting with the inittab, which generally has a set of references like:

# /etc/inittab
l0:0:wait:/etc/rc.d/rc 0
l1:1:wait:/etc/rc.d/rc 1
l2:2:wait:/etc/rc.d/rc 2
l3:3:wait:/etc/rc.d/rc 3
l6:6:wait:/etc/rc.d/rc 6

... all of these call the /etc/rc.d/rc script -- with a parameter to specify the runlevel, of course. So you look in that script and insert a call to your /etc/rc.d/init.d/modules in the appropriate block, or you put a set of symlinks under each of the rcX.d/ directories that correspond to each of the runlevels where you want these modules loaded. You'll want to prefix any of those symlink names with SXX --- using a low number like S01modules --- to make sure that the "modules" script is called very early in the boot process, before anything that depends on them is called. The difference between 'insmod' and 'modprobe' is that 'insmod' is a somewhat simpler program. You usually have to specify a full path with it -- and you must load modules in the correct order. 'modprobe' relies on a module dependency tree to find and load the specified module *and any that it requires*. To prepare the dependency tree you must run 'depmod -a' at least once after building and installing any new kernel or modules. Some distributions will run a 'depmod -a' command as part of the normal startup sequence. Yet another way, ultimately the one that is most convenient, is to run kerneld (2.0.x) or kmod (2.1.x and eventually 2.2). These kernel module loaders will dynamically load and unload modules and their dependents. This is similar to the way that Solaris does it (although it doesn't seem to be optional under Solaris). (?) The "multiple configuations" thing in linuxconf (control-panel/system) seems to be reasonably broken; are you writing about any of this? (!) I played with linuxconf only briefly. It seemed like it was often trying to do "too much" and I'd've preferred a mode where I could just use it to spit out configuration files and instructions on where I should put them. (?) Boy does the world need your book; the docs that are there seem pretty hopeless... (!) They can be frustrating. I try to help because I figure I've beat my head against that wall enough for any ten people. ____________________________________________________ (?) The BIOS Clock, Y2K, Linux and Everything From Ward, David on 12 Aug 1998 How does linux keep track of "real time". Does it get its information from the BIOS system clock, or can it keep track of time by setting the correct time zone, and setting the time, even though the BIOS is incorrectly reporting the "real time"?. Thanks, David Ward (!) Linux's initial clock settings (at boot up) are from the BIOS. However, the kernel internally keeps its own time thereafter. It turns out that there is an immense amount of work that is done on system clock synchronization over the Internet and among Unix systems. I'm assuming that you're concerned about some specific systems that have a buggy BIOS --- that you know will report invalid dates after the year 2000.
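Before scripting anything, it's easy to eyeball the two clocks side by side. This is only a sketch --- on some older distributions the hardware clock program is called clock(8) rather than hwclock(8), and the equivalent of --show is -r:

date                  # the kernel's idea of the current time
hwclock --show        # what the CMOS/BIOS clock says
# clock -r            # the older equivalent on some systems

If the hardware clock's answer turns nonsensical after the century rollover while the kernel's stays sane, you've confirmed that the BIOS is the culprit.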
To detect this condition you could use a script like:

CURRENT_YEAR=$( date +%Y )
FILE_YEAR=$( find /etc/README -printf "%TY" )
[ $CURRENT_YEAR -ge $FILE_YEAR ] || {
        # We've suffered a backslip: the current 4 digit
        # year arithmetically precedes the date on our
        # marker file
        logger "Backslip in Time Detected ... Fixing"
        # Recover here....
        # After recovery and during shutdown, when
        # the clock is in a known good state, we can
        # touch the marker file to ensure that its
        # date is periodically updated.
        }

... note that I'm using the $() (Korn/bash) construct rather than the equivalent "backtick" operators. This is to avoid ambiguity; the effect is the same. One way to ensure that you have the correct date set on your system is to use the 'ntpdate' command around boot time. This sets your clock based on that of another system. Oddly enough, though this command is included on many Linux systems, there often seems to be no man page installed for it. However I've read the man pages at (http://www.eecis.udel.edu/~ntp) --- and they don't make things any easier. With all due respect to Mr. Mills (one of the key figures in the NTP system) these pages (man and web) look like they were written for a federal funding grant. A simple HOWTO would be nice. (Maybe I'm just stupid, but these pages seem to talk about everything other than how a typical home or SOHO sysadmin would configure their systems to have the correct time). In any event here's the command I use to initially set my date:

/usr/sbin/ntpdate -s ntp.ucsd.edu ns.scruz.net ntp1.cs.wisc.edu

... this calls the ntpdate command and lists three time servers (stratum-2 in this case). In the complicated world of NTP the "stratum" of a clock is a measure of how "far" it is from the NIST atomic clocks which are used as the international standards. In essence it is a measure of the time server's "authority" (as in 'how authoritative is that answer'). It isn't actually a measure of how "accurate" that clock is, just how many hops are between it and the top of the hierarchy. Thus my system (betelgeuse) becomes a "stratum-3" NTP server after I refer to these "stratum-2" servers. It is the system that I use to set the time for the rest of the house. After the time is initially set I periodically re-run this command to reset it. It reports to me the adjustment that it makes (typically under one second). This is NOT recommended practice (mixing ntpdate and xntpd on a system). However, in my case, I don't want to configure my xntpd to refer to those same servers since it would mean that my ISDN router would fire up an unnecessary connection to the Internet every twenty minutes round the clock. Since I have no easy way to prevent this (the ISDN router I'm using is a separate box) I choose to use my method. If you have a full-time connection to the Internet then the best solution is to use xntpd (the extended Network Time Protocol daemon) to keep your system clocks in sync with a set of time servers. I'd set up one or two systems on your 'perimeter' network (the one that's exposed to the Internet --- assuming you have a firewall). Then I'd have the rest of your systems use that (or those) as their time reference. xntpd also includes support for a couple of dozen GPS and radio clock devices. These range from a couple hundred to a few thousand dollars (and typically connect to your host via a serial line).
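If you do have that full-time connection, the xntpd side of things can be quite small. Here's a minimal sketch of an /etc/ntp.conf for the 'perimeter' host --- the server names are just the same public stratum-2 servers mentioned above, and the exact file locations vary from package to package:

# /etc/ntp.conf --- minimal sketch
server ntp.ucsd.edu
server ns.scruz.net
server ntp1.cs.wisc.edu
driftfile /etc/ntp.drift    # where xntpd records the local clock's drift rate

... then start xntpd from one of your rc scripts. Each of your internal machines would carry a single 'server' line pointing at the perimeter host instead of at the public servers.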
In all cases ntpdate and xntpd use sophisticated protocols to measure latency and network communications delays and to account for deviations between the reference servers. You're pretty well guaranteed sub-second accuracy when you use them. In some versions and configurations, the NTP suite supports cryptographic integrity preservation methods, to prevent spurious and hostiles changes to your network time references. The web pages I referred to above does have a wealth of details about the protocols and the suite. If you can manage to decode it into a set of simple instructions for us "tape apes" I'd love to see it written up as a HOW-TO. Perhaps the subscribers to the comp.protocols.time.ntp newsgroup might be more helpful. (My e-mail exchange with Mr. Mills on this issue was not terribly helpful). There is one existing mini-HOWTO that could be expanded to suit the bill: Clock Mini-HOWTO: http://www.ssc.com/linux/LDP/HOWTO/mini/Clock.html (written by Roy Bean). ... it only contains a few words about xntpd. Also, someone once told me about a GPS reciever that was very inexpensive. It had no display, only a DB-9 serial connector. If anyone out there knows of a reliable source for these, I'd like to know about it, and I'll be happy to publish the URL. I wouldn't mind paying $100 for a good time source --- but two or three hundred is just too much for my applications. ____________________________________________________ (?) Conditional Execution Based on Host Availability From the L.U.S.T Mailing List on 07 Aug 1998 #!/path/to/perl $ping = `ping -c 1 10.10.10.10`; exec ("program") if $ping =~ /100\% packet loss/; (!) What's wrong with a simple: ping -c 1 $target && $do_something $target || $complain ... where you fill $do_something and $complain with commands that you actually want to run on success or failure of the 'ping'. That's what shell "conditional execution operators" (&& and ||) are for after all. (?) or something similar with a shell script... or, a quick socket program (probably a little easier on the system) john (!) I don't know why any other socket operations would be "easier on the system" than a single 'ping' (ICMP echo request). > Hi, > I'm looking for a program that can ping a host, and based on whether or not > the host is unreachable execute a program. Anyone know of something like > this, (or how to write one...)? > Thanks for any help. > -Corey ____________________________ (?) Failover and High Availability for Web Servers From L.U.S.T List on 12 Aug 1998 Re a command like: ping -c 1 $host && do_if_up || do_if_unreachable_or_down > The orginal poster asked a very simple question with > a very simple answer. He (or she) did not go into any > details about his (or her) requirements. Jim, Thanks for your spirited post. It made my morning... :) It's a sure bet that if you ask a bunch of technical guys a simple question, you're going to get about 10 different, complicated, and lengthy answers. But, we do live in the real world, and the simplest answer is often the best. Although I liked John Lampe's perl script because of its flexibility, I think I'll be using the shell conditionals that you proposed. In case you're curious, here's what I actually want to do... I have a web server over here, and it's pretty important that it remains up. PC's are cheap, so why buy one when you can have two for twice the price? :) I use IP aliasing for my important machine names, "www" "mail" etc. I'd like the backup machine to ping the primary machine. 
Should the primary machine stop responding, I'd like the backup machine to run another script and pick up the important aliases. As soon as the primary machine goes back up, the secondary machine will drop the aliases and go back to its "waiting" state... The only part of this little mess I didn't know how to do was execute a script based on the result of a ping... So that was all that I asked. :) Maybe next time I'll just lay out the whole thing so nobody starts guessing. Anyhow, thanks to everyone who offered advice. I'm now able to complete this project. -Corey (!) This set of requirements is pretty common --- common enough to have a name: "failover" I'd suggest that you assign each of these two systems two IP addresses (one on each can be from RFC1918 --- something like 192.168.1.*). We'll call that the "control address" and the one that the web server is on the "service address." Now, when you detect a failure on the service address you take it over (address assumption). You can then get messages from the control address to let your failover host know that the other system is back up and running --- which is when you relinquish control of that address. Naturally you can expect some discontinuities in sessions that were running at failover point. Luckily normal HTTP is pretty robust and stateless --- so it should be O.K. for that. If you are running complex systems of CGI scripts which maintain state via local temp files, you might have some problems with this simple failover approach. Look for the "High Availability HOWTO" for some other ideas on this. In addition I recommend that you look at the panic= kernel parameter and that you consider running your web servers out of your inittab (so that 'init' will automatically respawn them as necessary). You can also consider configuring the built-in watchdog support (re-compile your kernel) and even installing a hardware watchdog timer card. A WDT (watchdog timer) is basically a "deadman's switch" for your computer. Once initialized it must be updated periodically (by the kernel or some daemon) or it will trigger the reset line on your system bus. ____________________________________________________ (?) SysAdmin: User Administration: Disabling Accounts From Glenn Jonsson on 05 Aug 1998 Answerguy, I'm doing a course on unix administration, unfortunatly i don't recall being taught to disable a user, I was hoping you would be able to tell me how to do this? Thanks Glenn (!) I suppose you could disable a user by taking a sledgehammer or other LART to his or her kneecaps (least gruesome among many means that come to mind). However, I presume the intended question at hand was: "How does a Unix/Linux sysadmin disable a user's account?" Now, I should preface this answer with a bit of a flame (more of a spark, really): This is not the "do-your-homework" line. When you get an assigment as part of your coursework you've either been presented with the information needed to answer the question --- or you're expected to know where to find that information (how to do the research). So, before I answer this question let me answer the meta-question: "How does a sysadmin find out how to perform routine user and group management operations on his systems?" One way is to look for commands that relate the operation, to find out what might be 'apropos' --- so we issue the 'apropos' command (or 'man -k' --- keyword search command). 
Since this relates to a user's account let's try:

man -k account

On my system that gives me two commands:

userdel (8)  - Delete a user account and related files
usermod (8)  - Modify a user account

These and many others will show up when we issue:

man -k users

... and 'man -k login' will give us just about the right number of other alternatives. However, 'usermod' sounds promising. Looking at the man page for usermod and searching on the word "disable" leads me to the following paragraph:

-f inactive_days
The number of days after a password expires until the account is permanently disabled. A value of 0 disables the account as soon as the password has expired, and a value of -1 disables the feature. The default value is -1.

... so that's one way to do it. If we look near the end of a man page we'll often find a "SEE ALSO" section which will point us to related man pages. So we look at passwd(1) (the section/chapter 1 command 'passwd') and search on disable and find:

Account maintenance
User accounts may be locked and unlocked with the -l and -u flags. The -l option disables an account by changing the password to a value which matches no possible encrypted value. The -u option re-enables an account by changing the password back to its previous value.

... so that's method number two. This note about "changing the password to a value which matches no...." sounds intriguing. Technically it is using the wrong terminology, since passwords in the /etc/passwd file are technically not "encrypted" --- they are "hashed" using a cryptographically strong algorithm (DES by default, MD5 or others on some systems). The distinction is lost on most writers and it is a bit longer to explain --- but the way that DES hashing of a password under Unix works is that the password and a "salt" (a random 12 bit value) are used as a "key" to "encrypt" a string of ASCII NUL's using the DES (data encryption standard --- a 20 year old U.S. mandated encryption system derived from IBM's "Lucifer" research). The resulting value is expressed as a subset of the printable characters, with the "salt" prepended to it. (The "salt" exists simply to make "dictionary attacks" more expensive --- computationally and in terms of storage. It means that the "crack-er" has to have about 4000 different hashes for every word in his or her 'crack' dictionary). One of the properties of DES that made it attractive for commercial deployment is that it is highly resistant to "known plaintext attacks." That means that it was computationally infeasible to recover the key even if one had arbitrary samples of the plain text (our string of NUL characters in this case) and the crypt text (the hashed form of the password from the /etc/passwd file). There are two reasons why this, technically, is not being used for "encryption." The first is a matter of semantics; if I know the plain text (the string of NUL's) then I'm not "encrypting" anything, because I'm not "hiding information." I am "using cryptographic protocols and algorithms" for other purposes (such as authentication, digital signatures, etc). The other reason is more technical and pedantic. Conceivably there might be multiple keys (passwords) that encrypt a string of NUL's into the same hash. This is a defining property of hashes (checksums, CRC's, etc). You can verify that a given "message" has a given likelihood of being valid (you can measure its probability of integrity) --- but you can not definitively say that a given specific message was the same one that generated a given hash.
(Given other constraints on the use of DES for Unix passwords it's possible to try all 1 to 8 character combinations to detect collisions --- but that is a different matter). In any event, given an alleged password, the 'login' program (or the xdm, xlockmore, POP/IMAP daemon, or PAM module, etc) will attempt to encrypt a string of NUL's with it (and the "salt") and compare the resulting hash to the one stored in the /etc/passwd or the /etc/shadow file. (Generally this is done via the 'crypt(3)' library call). There are minor differences in the details (particularly on 'shadow' vs. non-shadowed systems) but that's the gist of it. If the hashes match then the user is presumed to have entered the correct password. If we follow another cross-reference from the passwd(1) man page we might find a list of characters that cannot be generated by the crypt(3) function. Actually we'll find a list of characters that can result from the operation, and a bit of thought about that (taking the complement) will show us some characters that will never work. Now, the Linux man pages don't just come out and say this, but logically we can see that we could use the following procedure to "manually" disable an account:

* Edit the /etc/passwd ('vipw' command) (or the /etc/shadow, as appropriate)
* Find the user. Insert an '*' into the password field (i.e. prepend the password hash with an asterisk)

... and it makes sense that no password will ever result in a matching hash. Thus the user will be locked out of direct logins. To re-enable the account *with the same password it used to have* just remove the asterisk. That's why we don't overwrite the password --- then we'd have to go through extra work to securely re-enable the password and get the user's new password set. (Most organizations are quite sloppy about this procedure --- sending initial passwords over e-mail, setting them to well-known and easily guessed values, etc. I recommend much better protocols). Now, I have a confession to make: I didn't figure this all out on my own. I didn't read all of these man pages. I learned about the "prepend with an asterisk" trick from other sysadmins. It was so long ago, I really don't remember where I saw it first. It might have been netnews. It might have been over someone's shoulder. Who knows. The point here is that you should find some of those sysadmins to "hang out" with. To be an effective sysadmin you need to become part of a sysadmin community. You and your classmates should probably form such a community and work together --- there's too much in this field for anyone to "know it all" (as a perusal of my back issues will surely prove with regards to me). The most organized and widespread community of sysadmins would be SAGE (the "System Administrators' Guild" --- the 'e' is silent). Look at http://www.usenix.org/sage for details. Of course none of this is to suggest that you should neglect your textbooks. There are two standard textbooks on systems administration today:

Unix System Administration Handbook, 2nd Edition, by Evi Nemeth, Trent R. Hein, Scott Seebass and Garth Snyder (Prentice Hall, 1995, ISBN 0-13-151051-7) --- known as the "cranberry book."

Essential System Administration, 2nd Edition, by Aeleen Frisch (O'Reilly & Associates, 1995, ISBN 1-56592-127-5) --- the "armadillo book."

... I'm working on one that I hope will go well beyond these --- although it will appear as a "Linux" title.
Of the two Frisch's work tends to give a more "step-by-step" HOWTO approach to these things --- so I'd look in there (indeed I tried to review it so I could remind myself of what she says --- but my copy must be out on loan somewhere). Glancing in the cranberry book's index I find no entry under "accounts" and "users" refers me to "logins" where I find "disabling" on pp. 95 & 95: Occasionally, a user's login must be temporarily disabled. Before networking invaded the UNIX world we would just put a star in front of the encrypted password, making it impossible for the user to log in. However the user could still log in across the network. These days we replace the user's shell with a program that prints a message explaining why the login as been disabled and provides instructions for rectifying the situation. There is no further explanation of this at that point --- and they don't cover a number of other issues related to the situation. They are, of course, referring to the fact that this user might have a .rhosts file that permits them access to their account without a password. Their approach is part of a solution --- but it is incomplete. In the Linux System Administration Handbook by Mark F. Komarinski and Cary Collett they go into a bit more detail (p. 24) but show a bit less experience: If you want to disable a user account (that is, prevent the user from logging in again), replace the password in the /etc/passwd file with an * or some other character. Since the * isn't a valid encrypted password, there is no password that will allow you to log into that account (2). ... no notion of re-enabling the account with its old password here. We've replaced it. Komarinski and Collett also mention that the account can receive mail --- but doesn't mention that there may be other forms of access that are possible by this "disabled" user. They miss the same things that Nemeth et al gloss over --- and a bit more. A few of the problems with just knocking out the password --- and changing the shell: You might have various other services, like ssh, that don't require a password. Their .forward file might route their mail through a script or customer filtering program (like procmail). That script could do anything that they could do under their UID --- including opening up some sort of connection to some system to which they still have access and allowing them to have interactive access to a shell. (I don't know of a tool that does this --- but I know it's possible and I could probably cook one up in a few hours using 'netcat' and/or 'expect' --- there's probably a set of "warez" that does this for you. They might have started a daemon. This might wake up periodically and change their password and shell back to some setting (we didn't prevent their UID from running the chsh and passwd commands, perhaps via an 'expect' or a Perl/comm.pl script). They might leave in 'cron' or 'at' jobs to periodically renable their access or as a logic bomb. If their directory was on a writable NFS volume and they can get at any of the hosts that are trusted by that NFS server ---- they can put in new .forward and other "magic" files to do these sorts of things. There might be other mechanisms that I don't even know about. In fact there almost certainly are. Many of these won't apply to many hosts. However they should all be considered. One potential method would be to remove their line from the /etc/passwd file completely. Perhaps you'd save it in a special file so you could restore it later. 
I don't like that approach since it leaves their files as 'orphans' (an ls -l command will show numeric ID's for the ownership, and they will be found by a 'find -nouser' command). So, the minimum I recommend to disable an account is:

* Star out the password.
* Change the shell to a binary such as /usr/sbin/nologin --- DO NOT USE A SHELL SCRIPT FOR THIS! I'd also recommend not linking it against any standard libraries --- it should use a few direct write()'s and one exit() system call and that's it! (Various magic environment variables are used by most any shell and by the standard I/O libraries --- these can sometimes be overflowed or subverted).
* Change the user's home directory to /home/.graveyard
* Remove any 'cron' or 'at' jobs for that user (or review them thoroughly, if there's some reason to retain them).
* Kill all processes owned by that user. (Manually go through a ps listing, or use a command like: ps -u | while read user pid rest; do kill $pid; done )

Using PAM you can do other things, in addition to this. For example you can use the listfile.so (module) to check a magic file in /etc/ (one that you create for yourself) with the "sense=deny" parameter. Another thing I personally recommend (at least optionally) is to scour your filesystems for files owned by this user --- move them all into a "graveyard" .tar file. You can use a command like:

{ cd / && find . -user $GOING \
    | tee /root/tmp/scourge.$$ \
    | tar cTzf - /home/.graveyard/$GOING.tar.gz ; } \
  && cat /root/tmp/scourge.$$ | xargs rm -f \
  && cat /root/tmp/scourge.$$ | xargs rmdir 2> /dev/null

... this is intentionally simplified (I usually do this by hand since I have reason to go through a disabled user's files to re-assign them to other users. That's appropriate for former or suspended employees, while it wouldn't be in most educational or ISP environments). The point of this process is to create a "graveyard" file that contains every file that this user owned. I remove them (dangerous if they used degenerate filenames --- this part I'm glossing over since I usually manually look over the list to catch them, and I suggest running a 'skulker' to warn you about 'weird' filenames anyway). In most cases I don't recommend re-using user names (for several months at least) or UID's (until you've "wrapped around" on the UID's). This is a complex issue, but it really amounts to avoiding the confusion when you restore from backups, or encounter other files (perhaps members from .tar files) etc. This is another case where the standard practice for ISP's and educational institutions is necessarily quite different from business and government sites (typically the turnover at ISP's and colleges, etc, is far too quick to worry about UID re-use after about one fiscal quarter). Note: there may be other things you'll have to do in sites that use NIS/NIS+ (make sure you update the account entry in the master yp maps), and in Kerberos realms (remove their credentials on the KDC). I don't know all the details of these. Hopefully I've made some important points here: Read the man pages. (I have never left it at RTFM --- I prefer to tell you which M to F'ing R). Look beyond the first answer. If you'd stopped at the usermod -f 0, you'd be stumped at the first box you came to that didn't have the shadow suite installed, or that had a different implementation. Likewise with the passwd -l (which I think is not supported in the PAM suite that came with Red Hat 5.1).
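To make the portable method concrete, here's a sketch of "starring out" a password non-interactively. The account name 'jdoe' is just a placeholder, and this assumes a non-shadowed /etc/passwd (on a shadowed system the same edit belongs in /etc/shadow); doing it by hand with 'vipw' is safer, since that locks the file against concurrent edits:

GONE=jdoe                          # hypothetical account to disable
cp /etc/passwd /etc/passwd.orig    # keep a fallback copy
awk -F: -v u="$GONE" 'BEGIN { OFS = ":" }
    $1 == u { $2 = "*" $2 }        # prepend an asterisk to the hash
    { print }' /etc/passwd.orig > /etc/passwd

Removing the leading asterisk later restores the old password untouched --- which is the whole point of prepending rather than overwriting.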
Knowing about "star-ring out" the password is pretty portable --- it works with DES, MD5, and "Big-DES", at least. HOWEVER, you have to check on each new version of UNIX that you encounter! Try it on a test account and make sure you method works. Tomorrow someone may implement a Unix passwd scheme that uses SHA-1 (the NIST secure hashing algorithm) in some way --- possibly with a bug --- that ignores the "*'s." A minute's test on each new system is probably worth a professional sysadmin's time. Look beyond the initial question. It sounds like you were told to just stop the user from "logging in" --- which might lead to an incomplete solution (prevent password authenticated logins). If the requirement is disable the account then perhaps you need to do more than merely prevent password authentication. This last point is crucial. Just knowing how Unix and Linux work is not enough. Knowing that many sites have the r* (rsh, rlogin, et al) utilities enabled, and knowing that a .forward file can be used to run arbitrary shell scripts with arbitrary side effects; these go beyond just knowing how Unix works. More importantly you have to think about the implications of these things, and know people who've experienced some of them. Reading the threads on comp.unix.admin for a few months will help quite a bit with that. Netnews for all the bad rap that it takes and all of the spam it endures, is still the largest set of open, ongoing technical discussions in the world. The regulars in comp.unix.admin are particularly helpful UNLESS YOU EXPECT THEM TO DO YOUR HOMEWORK. Finally: it's more important that you know *how to find answers* than how to perform a specific operation. It is even more important that you learn how to ask the right questions. This goes beyond the nitpick (disabling the user vs. his or her account) --- and asking "disable access to which services" (which requires some understanding of all of the services and forms of access that are available on the system at hand. Anyway, good luck on your studies. ____________________________________________________ (?) Articles on LILO Saves Life? From Erik Liles on 07 Aug 1998 Thank you! I just install Linux on my PC and LILO was corrupted. You are a life saver! Thanks again! Erik Liles (!) You're welcome. I presume that this is in response to one of the many back issues where I've described various scenarios involving LILO. [Some of the more recent Answer Guy notes about LILO: * 30: Installed on a Secondary SCSI HD: Lilo Stops at LI * 29: Removing Lilo from a multi-boot machine * 20: LILO and More on LILO * 19: Weird LILO Problems He's not the only one who's written something useful about LILO, though. If you want more, check out the Search Linux Gazette feature at the bottom of the main index. -- Heather] ____________________________________________________ (?) Novell NDS Support for Linux From ac in the comp.unix.questions newsgroup on 23 Jul 1998 Mr Dennis, Does the new version of red hat linux support Novell's NDS. -AC- (!) Support what aspects or services of NDS? Do you mean the user account management and authentication? I've heard of a PAM module that allows one to authenticate (user) against an NDS domain (using a Novell or compatible server). I don't know where that is, but I'm sure there are archives of the PAM (pluggable authentication modules) mailing list. I think Red Hat was hosting the PAM list and providing the archive and search space for it for awhile, so I'd start there. 
Are you referring to the ability to ncpmount Netware 4.x filesystems? I have no idea. Generally I'd suggest Caldera OpenLinux (Standard?) for any interoperability with Netware. They have the exclusive commercial implementation of the Netware NDS and bindery client code (which is apparently licensed such that they can't make it freely available). ____________________________ (?) NDS (Netware Directory Services) for Linux: Clients and Servers From Dave Kauffman on 07 Aug 1998 Do you know if the NDS client is now free? Caldera just released their Netware for Linux product and the 3 user license is free. Included in this package is their Netware client. -Dave (!) That's an interesting question. As far as I know their Netware client software (nwclient) (which supported bindery and NDS servers) was proprietary and under exclusive license with Caldera's distribution. From, what I gather, you could use the package with another Linux installation -- but I suspect that you were legally required to own a copy of Caldera OpenLinux (Standard?) for every copy of 'nwclient' you deployed. (I'm a bit unclear on this since the only occasions when I've deployed Caldera were in situations where it was a gateway between a set of Netware domains and a set of Unix systems --- basically a set of cron jobs existed to rdist files from Netware servers out to Unix ftp and web hosts (in one case) and all of the Linux/Unix users had shell accounts to access Netware servers (in another case). It would have been handy if I could have deployed nwclient through the workgroup in the second case --- but a number of them were able to use ncpfs (the free, OpenSource Netware client) for their needs. So, with this release of Netware for Linux we get a ability to download a copy of nwclient for free. The question becomes --- what rights does this entail. I really don't know, but it is an interesting question --- one that I didn't think to ask when I was talking to Ransom Love (the General Manager for OpenLinux) last night. (He was in town to speak at the Silicon Valley Association of Software Entrepreneurs (http://www.svase.org) along with a number of other OpenSource and Linux notables. So, I'll defer it to Caldera's support address. At the same time I'd suggest taking a careful look at the copyright notices and licensing that are included in the package, as you downloaded it. That should state any intended limitations on the use and redistribution of the package or its components. It may be that the included copy of 'nwclient' is only legally valid for use with that server --- that, technically, you are only allowed to deploy upto three copies of 'nwclient' and that all three of those are to be used only with that particular server. Their licenses and copyright notices are really the final authority in the matter. ____________________________________________________ This refers to an issue raised in August (Issue 31) about a second drive with linux slowing down a Windows boot sequence. ____________________________ (?) More 'Win '95 Hesitates After Box Has Run Linux?' From Zdenek Kabelac on 07 Aug 1998 Hi As I read your answer book for this question in LG Issue 31 "Win '95 Hesitates After Box Has Run Linux?" I have to say I had (and still have) similar problem - I have one HARD HD (6.5MB hda) and 1 mobile (1.5MB hdb) HD and 16xWearnesCDROM (hdd) (!) 
It is odd that your Wearnes/ATAPI CDROM is accessed as /dev/hdd --- if it was the "master" or "standalone" on the second IDE channel I'd expect to see it as /dev/hdc Perhaps it would work better if you reconfiged the CDROM (changed to to "master" or "standalone" mode). (?) When my computer goes up and I run WIN95 from the begining I can access CDROM normaly, but when I run Linux before windows and don't push RESET button W95 can't locate CDROM and it also takes some time for W95 to figure this out (about 10-15sec) but linux can access CDROM always. I have tryied all combination of reboot=cold,bios ... as kernel boot parameters, but it was no good. (I'm quite experienced linux user, but this is probably some HW magic) But I can easily live with this problem as I run W95 rarelly and pressing the reset button is not that big problem. I suppose the problem will be with the combination: board & W95 & CDROM - something in HW setup - for a few days I had connected 32xSamsung CDROM to the same computer and there were NO such problems - the only problem was toooo big noice from my computer and I hate this - so I rather bought the 16x - anyway I don't have UltraDMA board so the reading speed on my computer is the same (~2.4MB/s) (!) It is an interesting anomaly. As you say there is an easy workaround. I'd recommend that. (?) Small advice which might be interested to other users (those who likes silent): If they don't want to hear such a big noice from their computers, they should switch power suply from 12V to 5V to the computer funs. (its cheap and about an 1/2 hour of work) - don't know how this works in 44C :) but in my country with temperatures around ~30C everythings run quite well and when HD spin down my computer is completely silent. Zdenek Kabelac (!) While I don't like the excess noise I haven't have much incentive to play with the fans and power supplies in my machines. I've seen some people who swear that they've just disconnected their power supply fan and left it that way for years. However, I'm too much of a wimp to do that --- I just don't want to have to replace those parts if this does cause them to fail. ____________________________________________________ (?) Another Non-Linux Question! From weasel_90 on 02 Aug 1998 Hi, I have a 1.2GB Hard Drive. It has a few Bad Clusters, which are mostly at the end of the drive. Everytime I use Scandisk it freezes on 99%. All my programs run fine, but sometimes they freeze and I have to reboot the computer. I assume it is accessing data from the bad clusters and it freezes sometimes. I thought scandisk was suppose to mark bad clusters and tell the computer not to use them. Is there any way that is really safe where I can either seperate or put all those bad clusters away so that the computer doesn't use them and store data on them? I would appreciate your help. This one has stumped me. Thank You! (!) So, what happens when you install Linux and run the 'badblocks' program (or make the filesystem using the mke2fs -c option)? This may seem like an unsatisfying answer --- but you should probably be aware that "answerguy@ssc.com" is for the "Linux Gazette Answer Guy." I volunteer my time to answer questions that related to Linux. I do that to show my appreciation for all the effort that Linus Torvalds, Alan Cox, Stephen Tweedie, Ted T'so, and hundreds of other programmers have put into creating the operating system that I use. 
I do NOT volunteer my time to answer questions that are purely about Microsoft's operating system (or any other proprietary software, for that matter). The companies that produce these products can pay for their own support staff. If they choose not to do so, or are otherwise unwilling or unable to provide you with support that meets your needs --- you should probably reconsider your purchasing decisions. Sigh! Since I suffer from a compulsion to answer questions thoroughly here's a few suggestions: Your obvious alternative would be to replace that drive with a newer, less defective one. Normal, modern IDE and SCSI drives have extra sectors on every track which are "mapped" over any bad sectors on that track. Thus it is relatively rare for bad sectors to be visible to the operating system's drivers. You could put the new drive in as the "master" and you could install Linux over the old drive --- just to learn more about it. Another alternative would be to use something like the Norton Utilities. Perhaps one of those is more robust than the accessories that came with your OS. If none of that works, backup your system, re-install the OS from scratch and see if the re-FORMAT detects and properly handles these bad sectors --- or re-partition, make the last partition a couple of percent smaller and then re-install. If you don't have a backup system which is sufficiently reliable and of sufficient capacity to do a full system backup and restore --- then you're hopeless. While we're on the subject of "hopeless" --- it may seem awfully curmudgeonly of me, but surely some of these options must have occurred to you. I really hope that you weren't actually "stumped" by this! ____________________________________________________ (?) Integrated Programming Environments for Linux More Nostalgia for the old Turbo C Package From BiN on 29 Jul 1998 Do you know of a programming environment c/c++ as Turbo C for linux? Tnx! (!) There are a number of these. The first couple that come to mind are: Wipeout xwpe/wpe One good place to look for these is at Goob's Linux Links: http://www.linuxlinks.com/Software/Programming/Development/ ____________________________________________________ (?) Web server clustering project From Jim Kinney in the comp.unix.questions newsgroup on 22 Jul 1998 I am starting the research into the design and implementation of a 3 node cluster to provide high availability web, database, and support services to a computer based physics lab. As envisioned, the primary interface machine will be the web server. The database that provides the dynamic web pages will be on a separate machine. Some other processes that accept input from the web process and output to the database will be on the third machine. (!) Have you looked at the "High Availability HOWTO"? http://sunsite.unc.edu/pub/Linux/ALPHA/linux-ha/High-Availability-H OWTO.html There's also the common "round robin DNS" model --- which is already used by many service providers. It has its limitations --- but it's the first thing to try if the clients can be configured to gracefully retry transactions on failure. There's also the MOSIX project which was developed under BSD/OS and is allegedly being ported to Linux. This provides for process migration (again, more of a performance clustering and load balancing feature set). http://www.cs.huji.ac.il/mosix/ However, there is another concept called "checkpointing." You can think of this as having regular, transparent, non-terminal "core dumps" (snapshots) taken of each process (or process group). 
These are written to disk and can be reloaded and restarted at the point where they left off. I'm not aware of any projects to provide checkpointing for Linux (or checkpointing subsystems). (Obviously any application can do its own checkpointing in a non-transparent fashion --- roughly equivalent to the periodic automatic saves performed by 'emacs' and other editors.) I have a pointer to some miscellaneous notes on checkpointing: http://warp.dcs.st-and.ac.uk/warp/systems/checkpoint/ The implication here is that you could create a hybrid checkpointing and process migration model that would provide high availability. In a client/server context this would probably only be suitable for situations where the communications protocols were very robust --- and it might still require some IP and/or MAC address assumption or some specialized routing tricks. One such routing trick might be the IP NAT project. IP masquerading is one form of NAT (allowing many clients to masquerade as a single proxy system). http://www.csn.tu-chemnitz.de/~mha/ Another form of NAT is many-to-many. Let's say you connected two previously disconnected sites that both chose 10.1.*.* addresses for their use --- you could put a NAT router between them that would bidirectionally translate the 10.1.*.* addresses to corresponding 172.16.*.* addresses. Thus the two sites would be able to interoperate over a broader range of protocols than would be the case for IP masquerading --- since the TCP/UDP ports would not be re-written --- each 10.1.*.* address corresponds on a one-to-one basis with a 172.16.*.* counterpart. The other form of NAT is one-to-many (or "load balancing"). This makes one simple router look like a server. In actuality that "server" is just dispatching the packets it receives to any one of the backend servers it chooses (statistically or based on metrics that they communicate amongst themselves, privately). Cisco has a product called "Local Director" that does exactly this. One of the experimental versions of the Linux IP NAT code also appeared to do this with some success. I don't know if any further work has progressed along these lines. Yet another approach that might make sense is to provide for replication of the data (files) across servers and to use protocols that transparently select among available servers (mirrors). This sounds just like CODA. http://www.coda.cs.cmu.edu/top.html A less sophisticated approach to replication is to use the rsync package to maintain some failover servers (mirrors) --- and require that writes all go to one active server. (?) So, I am open to suggestions, comments, info, links to sites, book titles, etc. I have proposed a one-year development time for the whole cluster, with a single-machine application prototype of the user-visible/used portion by around January 1999. I love my job! Jim Kinney M.S. Educational Technology Specialist Department of Physics Emory University (!) Web, mail, DNS and a number of other Internet services are naturally robust. With DNS you normally list up to three servers per host (in /etc/resolv.conf) and all of these will be checked before a name lookup will fail. With SMTP the client will try each of the hosts listed in the results of an MX query. Round robin DNS will force most clients to try multiple different IP addresses on failure most of the time. However, the applications that really need HA (failover) and clustering for performance are things like db servers.
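As an aside, the round robin DNS mentioned above is nothing more exotic than several address records sharing one name in the zone file. A minimal sketch (the name and addresses here are made up):

   ; excerpt from a BIND zone file --- one name, three machines
   www    IN    A    192.168.1.10
   www    IN    A    192.168.1.11
   www    IN    A    192.168.1.12

The name server rotates the order of the answers from query to query, so clients spread themselves across the machines. It's crude load distribution rather than true failover --- a dead server's address keeps being handed out, which is why the clients have to be able to retry gracefully, as described above.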
Having two systems monitor and process something like a set of db transactions in parallel (one active the other "mimic'ing" the first but not returning results) would be very interesting. The "mimic" would attempt to maintain the same applications state as the server --- and would assume the server's IP and MAC (ethernet media access control) addresses on failure --- to then transparently continue the transaction processing that was going on. You might prototype such a system using web and ftp (the FTP application is a more dramatic demonstration --- since a web server involves many short transactions and mostly operates in a "disconnected" fashion). One approach might be to have a custom ethernet driver that can be instructed to throw all of its output into the bit bucket. Thus the mimic is normally silent, but following a failure on the server it does the address assumption and rips off the muzzle. I suspect you'd have to have another interface between the two servers, one which is dedicated to maintaining the same state between the server and the mimic. (For example if the server get a collision or an error that wasn't sensed by the mimic -- or vice versa -- the two might get horribly out of sync when the upper layer protocols require a resend. With special drivers the two systems might resolve these discrepancies at the kernel/driver layer --- so that the applications will always get the same data on their sockets). I really have no idea how much tweaking this would take and whether or not it's even feasible. However, it seems that your intent is to provide failover that is transparent to the applications layer. So, the work obviously has to happen below that. It is unclear whether you are primarily interested in deploying a set of servers for use by your Physics team or whether you are interested in doing research and development in the computer science. In any event your project will probably involve a hybrid of several of these approaches: * Round robin DNS * Failover with IP and MAC address assumption. (and/or load balancing NAT). * Replication and or "mirroring" (more failover). * Multi-initiator SCSI (where a single SCSI bus has multiple computers active on it, such that these computers have shared access to the attached peripheral devices). It would be very interesting to see someone develop process migration and checkpointing features for Linux though there doesn't seem to be any active work going on now. I'd also love to see an "Beowulf enabled" SQL dbserver (where a couple of failover capable "dispatchers" could farm out transactions to multiple clustered Linux boxes in some sensible manner). I'm not even sure if that's feasible --- but it sure would knock down the scaleability walls that I hear about from those dbadmins. ____________________________________________________ (?) WU-FTP guestgroup problems From Marco Iannacone on the comp.unix.questions newsgroup on 9 Jun 1997 It looks like I never answered this question. (I'm going through my old archives). Hi James, how you doing? I'm writing to you as The Answer Guy 'cause I have some problem with setting up the guest trick with wu-ftpd. What I mean is to have a chrooted enviroment for some special user with their home directory and user-id and password. I'm using Slackware '96 Linux with the wu-archive-ftp that comes already compiled with it. This is what I did: * I compiled gnu ls statically and put it in ~ftp/user-foo/bin directory. 
* I did the /etc hack: + added the guest group in/etc/group + modify the/etc/passwd file for the user I want to be chrooted giving him /home/ftp/user-foo./ directory (!) I think this is supposed to be /home/ftp/./user-foo ... if you want the guestgroup directive in wu-ftpd's ftpaccess file to chroot to /home/ftp and initially place this user in the/home/ftp/user-foo directory. (?) I don't recall whether the "ftponly" (or whatever you call your "guestgroup" group) has to be that user's primary group (the one listed in /etc/passwd) or whether it can be one of the supplemental groups (as listed in /etc/group) + added /etc/ftponly to /etc/shells + I modify the /etc/ftpaccess file adding ... path-filter guest /etc/pathmsg ^[-A-Za-z0-9_\.]*$ ^\. ^- .... guestgroup guest * I created the user home directory which has the following attribute: [root]:/home/ftp>ls -la total 104 dr-xr-xr-x 9 root root 512 Jun 2 14:01 . drwxrwxr-x 6 user-foo guest 512 Jun 3 13:54 user-foo dr-xr-xr-x 2 root root 512 Jun 3 09:45 bin Now the ftp server is running fine (both with normal and anonymous users) and even the chrooted enviroment for guest is working fine: the user can login, upload and download files and it is locked in that directory... i.e. can go in all the subdirectory but can't go up. So it is perfect! The only problem is that ls and dir are not working and he can only list files using nlist. For example: Name (localhost:root): user-foo 331 Password required for user-foo. Password: 230 User amex logged in. Access restrictions apply. ftp> nlist 200 PORT command successful. 150 Opening ASCII mode data connection for file list. bin .profile etc .rhosts .forward .sh_history test-directory test-file.txt 226 Transfer complete. ftp> dir 200 PORT command successful. 150 Opening ASCII mode data connection for '/bin/ls'. 226 Transfer complete. ftp> ls 200 PORT command successful. 150 Opening ASCII mode data connection for '/bin/ls'. 226 Transfer complete. ftp>quit What am I missing? how can I allow him to do ls and dir? Note: i'm sure that the new ls is working: [root@Goliath /home/ftp/user-foo//bin]#./ls compress cpio gzip ls sh tar [root@Goliath /home/ftp/user-foo/bin]# and that is statically linked: [root@Goliath /home/ftp/user-foo/bin]#ldd ./ls Statically linked (ELF) [root@Goliath /home/ftp/user-foo/bin]# Thanks a lot, Marco (!) Everything else sounds right to me. Naturally I hope you've long since solved this problem. I just hate to leave a question unanswered. Incidentally, you might look at ncftpd (a newer FTP daemon from Mike Gleason, author of the popular ncftp client). ncftpd allegedly offers better options for locking users into their home directories and it contains built-in support for 'ls' and similar commands. ncftpd is shareware, rather than freeware, and Mike wants $40 (US) for small servers (50 concurrent sessions or less) and about $200 for larger servers. However you can evaluate the whole package for free. Start by taking a look at: http://www.probe.net/~mgleason/ncftpd/ ... or at: http://www.ncftp.com/ ... and reading about the features list. Naturally this hasn't been around as long as wu-ftpd, and the sources don't seem to be openly available. So ncftpd doesn't benefit from the informal process of code review that we take for granted for most Linux networking packages. (This informal process of auditing does not seem to have been terribly effective, however, since we still find new security problems in code that's been free for decades. 
For this reason there have been a couple of more organized and formal efforts --- the OpenBSD project and the Linux Security Audit http://www.att.net/~Bandit2006/ to name the two with which I'm familiar). _________________________________________________________________ Copyright © 1998, James T. Dennis Published in Linux Gazette Issue 32 September 1998 _________________________________________________________________ [ Table Of Contents ] [ Front Page ] [ Previous Section ] [ Next Section ] _________________________________________________________________ A Convenient and Practical Approach to Backing Up Your Data By Vincent Stemen _________________________________________________________________ July 19, 1998 Every tool I have found for Linux and other UNIX environments seems to be designed primarily to back up files to tape or some other device that can be used for streaming backups. Often this method of backing up is infeasible, especially on small budgets. This led to the development of bu, a tool for backing up by mirroring the files on another file system. bu is not necessarily meant as a replacement for the other tools (although I have set up our entire disaster recovery system based on it for our development servers), but more commonly as a supplement to a tape backup system. The approach I discuss below is a way to manage your backups much more efficiently and stay better backed up without spending so much money. * Some problems I have found with streaming backups 1. The prices and storage capacities often make it infeasible. The sizes of hard drives and the amount of data stored on an average server or even workstation are growing faster than the capacity of the lower-end tape drives that are affordable to the individual or small business. 5 and 8 gig hard drives are cheap and commonplace now, and the latest drives go up to at least 11 gig. However, the most common tape drives are only a few gig. Higher capacity/performance tape drives are available, but the costs are out of the range of all but the larger companies. For example: staying properly backed up with 30GB of data (which can be just 3 or 4 hard drives) to a midrange tape drive can cost $15,000 to $25,000 or more inside of just 2 to 4 years. There is a typical cost scenario on http://www.exabyte.com/home/press.html. This is just the cost for the drive and tapes. It does not include the cost of time and labor to manage the backup system. I discuss that more below. With that in mind, the comments I make on reliability, etc., in the rest of this article are based on my experience with lower-end drives. I haven't had thousands of extra dollars to throw around to try the higher-end drives. 2. The cost of squandered sysadmin time and the lost productivity of users or developers waiting for lost files to be restored can get much more expensive than buying extra hard drives. To back up or restore several gig of data to/from a tape can take up to several hours. The same goes for trying to restore a single file that is near the end of the tape. I can't tell you how frustrating it is to wait a couple of hours to restore a lost file only to discover you made some minor typo in the filename or the path to the file, so it didn't find it and you have to start all over.
Also, if you are backing up many gig of data and you want to be fully backed up every day, you either have to keep a close eye on it and change tapes several times throughout the day, every day, or do that periodically and do incremental backups onto a single tape the rest of the days. With tapes, the incremental approach has other problems, which leads me to number 3. 3. Incremental backups to tape can be expensive, undependable and time-consuming to restore. First, this kind of backup system can consume a lot of time labeling and tracking tapes to keep track of the dates and which ones are incremental and which ones are full backups, etc. Also, if you do incremental backups throughout a week, for example, and then have to restore a crashed machine, you can easily consume up to an entire day restoring from all the tapes in sequence in order to restore all the data back the way it was. Then you have Murphy to deal with. I'm sure everybody is familiar with Murphy's laws. When you need it most, it will fail. My experience with tapes has revealed a very high failure rate. Probably 20 or 30% of the tapes I have tried to restore on various types of tape drives have failed because of one problem or another. This includes our current 2GB DAT drive. Bad tape, dirty heads when it was recorded, who knows. To restore from a sequence of tapes of an incremental backup, you are dependent on all the tapes in the sequence being good. Your chances of a failure are very high. You can decrease your chance of failure, of course, by verifying the tape after each backup, but then you double your backup time, which is already too long in many cases. * A solution (The history of the bu utility) With all the problems I described above, I found that, like most other people I know, it was so inconvenient to back up that I never stayed adequately backed up, and I have paid the price a time or two. So I set up file system space on one of our servers and periodically backed up my file systems over NFS just using cp. This way I would always be backed up to another machine if mine went down, and I could quickly back up just one or a few files without having to mess with the time and cost of tapes. This still wasn't enough. There were still times I was in a hurry and didn't want to spend the time making sure my backup file system was NFS-mounted, verifying the pathname to it, etc., before doing the copy. Manually dealing with symbolic links was also cumbersome. If I specified a file to copy that was a symbolic link, I didn't want it to follow the link and copy it to the same location on the backup file system as the link. I wanted it to copy the real file it points to, with its path, so that the backup file system was just like the original. I also wanted other sophisticated features of an incremental backup system without having to use tapes. So, I wrote bu. bu intelligently handles symbolic links, can do incremental backups on a per-directory basis with the ability to configure what files or directories should be included and excluded, has a verbose mode, and keeps log files. Pretty much everything you would expect from a fairly sophisticated tape backup tool (except a GUI interface :-), but it is a fairly small and straightforward shell script. * Backup strategy Using bu to back up to another machine may or may not be as good a replacement for a tape backup system for others as it has been for us, but it is an excellent supplement.
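bu itself isn't reproduced here, but the idea it automates can be sketched in a few lines of shell. This is only an illustration, not bu: it assumes the backup server's disk is already NFS-mounted on /mnt/backup and that GNU cp is available, and it has none of bu's smarter symbolic link handling, include/exclude configuration or logging options:

   #!/bin/sh
   # Minimal mirror-style backup sketch (not the real bu).
   # Assumes the backup area is NFS-mounted on /mnt/backup.
   SRC=$HOME/src
   DEST=/mnt/backup/$LOGNAME

   mkdir -p "$DEST" || exit 1

   # GNU cp: -a preserves modes, times and links; -u copies only files
   # newer than what is already in the mirror, giving a crude
   # incremental behaviour.
   cp -a -u "$SRC" "$DEST"

   echo "`date`: mirrored $SRC to $DEST" >> "$DEST/backup.log"

Restoring a file is then just a cp in the other direction.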
When you have done a lot of work and have to wait hours or even days until the next scheduled tape backup, you are at the mercy of Murphy until that time; then you cross your fingers and hope the tape is good. To me, it is a great convenience and a big relief to just say "bu src" to do an incremental backup of my whole src directory and know I immediately have an extra copy of my work if something goes wrong. It is much easier and faster to restore a whole file system over NFS than it is from a tape. This includes root (at least with Linux). And it is vastly faster and easier to restore just one file or directory using the cp command. As far as cost goes: you can get extra 6GB hard drives now for less than $200. In fact, I can buy a whole new computer with extra hard drives to use as a backup server for $1000 or less now. That is much less than the cost of buying just a mid- to high-end tape drive, not counting the cost of all the tapes and extra time spent managing them. In fact, one of the beauties of Linux is that even your old 386 or 486 boat anchors make nice file servers for such things as backups. For those individuals and small businesses who use Zip and Jaz drives for backing up so they can have multiple copies or take them off site, bu is also perfect, since incremental backups can be done to any file system. I often use it to back up to floppies to take my most critical data and recent work off site. Here is an interesting strategy we have come up with using bu; it is the least expensive way to stay backed up that we could come up with for our environment. It is the backup strategy we are setting up for our development machines, which house several GB of data. Use bu to back up daily, and right after doing work, to file systems that are no more than 650 MB. Then, once or twice a month, cut WORM CDs from those file systems to take off site. WORM CDs are only about a dollar each in quantities of 100, and WORM CD writers have gotten cheap. This way your backups are on media that doesn't decay the way tapes and floppies tend to do. Re-writable CDs are also an option if you don't mind spending a bit more money. If you have just too much data for that to be practical, hard drives are cheap enough now that it is feasible to have extra hard drives and rotate them off site. It is nice to have one of those drive bays that allow you to unplug the drive from the front of the machine if you take this approach. Where bu will really shine with large amounts of data is when we finally can get re-writable DVD drives with cheap media. I think, in the future, with re-writable DVD or other similar media on the horizon, doing backups to non-random-access devices such as tape will become obsolete, and other backup tools will likely follow the bu approach anyway. * Getting bu bu is freely re-distributable under the GNU copyright. http://www.AdvancedResearch.org/bu/ ftp://www.AdvancedResearch.org/pub/vstemen/bu/bu.tar.gz _________________________________________________________________ Copyright © 1998, Vincent Stemen Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ Welcome to the Graphics Muse Set your browser as wide as you'd like now. I've fixed the Muse to expand to fill the available space!
© 1998 by mjh ______________________________________________________________________ Button Bar muse: 1. v; to become absorbed in thought 2. n; [ fr. Any of the nine sister goddesses of learning and the arts in Greek Mythology ]: a source of inspiration W elcome to the Graphics Muse! Why a "muse"? Well, except for the sisters aspect, the above definitions are pretty much the way I'd describe my own interest in computer graphics: it keeps me deep in thought and it is a daily source of inspiration. [Graphics Mews][WebWonderings][Musings] [Resources] T his column is dedicated to the use, creation, distribution, and discussion of computer graphics tools for Linux systems. Well, there were quite a few announcements in the past month and I'm finding that not all are being cross posted to both comp.os.linux.announce and to freshmeat.net. It takes a little more diligence on my part to catch all the announcments but since I visit both places fairly often it really isn't that big of a problem. On the other hand, is it really necessary to repeat those announcements here? I thought about this for a while and finally decided it is worth the effort since both c.o.l.a and freshmeat are sites for general announcements and the graphics specific items can easily be overlooked. By gathering them up and reprinting them here I can let my readers worry less about missing the important stuff through the sea of other announcements at other sites. I've finally started to catch up on my Musings too. This months issue includes discussions on: * Managing your CGI Perl scripts using "require" in Web Wonderings * A closer look at the libgr package of image file format libraries * A little fun with the Gimp plugin "QBist" I also considered taking a look at Blender, but I'm not certain my system is stable enough for that right now. Its been acting a little strange of late - I'm beginning to think some recent power outages may have corrupted some libraries. I have plans to upgrade to Red Hat 5.2 whenever it comes out (I expect the difficulties with dealing with libc/glibc will all be worked out, much like the 4.2 release had worked out most of the a.out vs. ELF issues), plus take a look at Xi Graphics Maximum CDE at some point too. But I hadn't planned on doing either until the October time frame. I may have to change my plans. Anyway, a review of Blender is a definite future Musing. The last time I tried it the program seemed to be stable, but the interface is rather complex. A general examination showed that this modeller is quite feature rich. Its just that the interface is not intuitive to a 3D newbie, perhaps not even to an experienced 3D graphic artist. A better set of documentation is reported to be on the way, due out some time in September. I'll wait and see what this might offer before stepping up for a review of Blender. [INLINE] You can also keep an eye out for a new and improved Graphics Muse Web site coming soon. I expect to be able to launch the new site sometime in the middle to end of September. It will combine the Linux Graphics mini-Howto with the Unix Graphics Utilities into a single searchable database, provide recommended reading material and allow you to post reviews of software, hardware and texts, plus it will provide more timely news related to computer graphics for Linux systems. And of course all the back issues of the Graphics Muse column from the Linux Gazette will be there too, in a semi-searchable format with topics for each month provided next to the links to each months issue. 
I'll probably post an announcement about it to c.o.l.a when its ready. Graphics Mews Disclaimer: Before I get too far into this I should note that any of the news items I post in this section are just that - news. Either I happened to run across them via some mailing list I was on, via some Usenet newsgroup, or via email from someone. I'm not necessarily endorsing these products (some of which may be commercial), I'm just letting you know I'd heard about them in the past month. indent The ParPov and Pov2Rib Homepage ParPov is a free (GNU), object-oriented library written in C++ for parsing Scene Files from the Persistence of Vision (POV-Ray) Ray-Tracer. It will read a scene written using version 1-3 syntax and creates a structure of C++-Objects, representing all details of the original description. You can query those objects and use the information to convert the scene to other formats or many other uses. Pov2Rib is also a freely available progam, which allows you to convert scene files from POV-Ray to a RenderMan Interface Bytestream (RIB). The tool is the first application of libParPov. http://www9.informatik. uni-erlangen.de/ ~cnvogelg/pov2rib/index.html ______________________________________________________________________ GQview 0.4.0 GQview is an X11 image viewer for the Linux operating system. Its key features include single click file viewing, external editor support, thumbnail preview, thumbnail caching and adjustable zoom. GQview is currently available in source, binary, and rpm versions and requires the latest GTK and Imlib libraries. http://www.geocities.com/ SiliconValley/Haven/5235/ indent TKMatman TKMatman is a tool that lets you interactively set and adjust parameters to RenderMan shaders and preview images with the given parameters. It can handle surface, displacement, interior, exterior, atmosphere, light and imager shaders and their combinations. The idea for the program comes from Sam Samai, who wrote the very useful IRIX version. With the availability of the Blue Moon Rendering Tools for different platforms the author of TkMatman thought that a lot more people will use the RenderMan interface and need ways to select their shaders. That's why he published his private LINUX version of MatMan. The program was initially only meant for his own use, but it is in a pretty stable state now. All feedback is appreciated and new versions will be made available at the following site: http://www.dfki.uni-sb.de/ ~butz/tkmatman/ ______________________________________________________________________ ImPress ImPress allows you to create good quality documents using vector graphics. You can use ImPress within a web browser with the Tcl/Tk plugin. It's a reasonable desktop publishing and presentation tool in a small package designed for Linux and for integration with Ghostscript. The GPL'd .03alpha release fixes many bugs and adds better web and presentation functionality. http://www.tcltk.com/tclets/impress/index.html [INLINE] [INLINE] LibVRML97/Lookat 0.7 LibVRML97 is a toolkit for incorporating VRML into applications, and Lookat is a simple VRML browser based on that library. This code is currently being developed and is one of the more complete open source VRML browsers available. All VRML97 nodes except Text/FontStyle and the drag sensors (CylinderSensor, Plane, Sphere) are supported. The Script node supports much of the Javascript API with more on the way. 
Version 0.7 adds Javascript scripting, MovieTextures, TouchSensors, Anchors, Inlines, and command line arguments -url and -geometry for running under XSwallow as a Netscape plugin. http://www.vermontel.net/ ~cmorley/vrml.html Slidedraw Slidedraw is a drawing program in Tcl/tk for presentation slides with postscript output and full-featured. You can see snapshots, get slide collections or the very latest package available from it's new web page. URL: http://web.usc.es/~phdavidl/slidedraw/ Beta testers are welcome. Contributors for slide collections and documentations are also invited. ______________________________________________________________________ MindsEye 0.5.27 MindsEye is a project to develop a free (in terms of the GPL) available 3D modelling program for Linux. It features modular design, Multi-scene/user concept, Kernel-system view instead of Modeler-system view, Object oriented modelling design and network support in a MindsEye-kernel way. http://mindseye.luna.net/ [INLINE] Visual DHTML Visual DHTML is a free Web-based authoring tool that lets you create interactive web content using various DHTML technologies. Visual DHTML brings JavaScript1.2 and DHTML towards its full and future potential at the application level by bringing more traditionally low-level programming techniques and features to Web-based scripting languages. Features include such things as an object oriented, component-based ("Bean" style) architecture with "Drag and Drop" functionality. Also included are several pre-built DHTML widgets, such as the dynamic Drawer and Ticker that you can customize along with component properties that you can modify. Also, if you like the functionality of this tool, you can copy and paste the source code by viewing the Page Source from within Navigator. http://developer.netscape.com/docs/examples/dynhtml/visual/index.html ______________________________________________________________________ Javascript Debugger 1.1 Netscape JavaScript Debugger is a powerful tool for debugging JavaScript on multiple platforms. Written in Java, the debugger runs in Netscape Communicator. Netscape JavaScript Debugger 1.1 supports client-side JavaScript debugging capabilities, including such features as a watch mechanism, conditional breakpoints, enhanced error reporter, signed script support, and the ability to step through code. Using the debugger while developing your JavaScript application, you can debug scripts as they run, determine what's going on inside your script at any moment, and find problems quickly. A Linux version is not mentioned explicitly, but the Unix version works perfectly. http://developer.netscape.com/software/jsdebug.html ______________________________________________________________________ S.u.S.E. announces XFCom_P9x00 and new version of XFCom_3DLabs XFCom_P9x00-1.0 It took a while, but finally a free server for Weitek P9100 based cards is available. XFCom_P9100 is not yet accelerated and has not received as much testing as we would have liked it to, but it should work fine on most P9100 boards. XFCom_3DLabs-4.12 With this version of XFCom_3DLabs several problems with earlier versions should be solved. 
New features and fixes include: * Permedia 2v support * Permedia 2 AGP hangs fixed * 24bpp mode improved * many drawing bugs removed * DPMS support added You can find both servers (and the rest of the XFCom-family) at our web site http://www.suse.de/XSuSE/XSuSE_E.html As always, these servers are freely available, the sources to these servers are already part of the XFree86 development source. Binaries for other OSs will be made available, time permitting. XSuse Matrox Millenium G200 support Suse appears to have also added support for the Matrox Millennium G200 AGP to their Matrox X server. No official announcement has been seen, but word of this development first appeared to 'Muse's eyes via Slashdot.org. The driver is available from ftp://ftp.suse.com/pub/suse_update/XSuSE/xmatrox/. ______________________________________________________________________ The Visual Computer Journal Special Issue on Real-time Virtual Worlds Submissions due: October 31, 1998 Real-time Virtual Worlds are now possible on most workstations and PCs. The challenge is to design user-friendly systems for creating new applications and tools. This special issue of the Visual Computer is dedicated to new algorithms, methods, and systems in Real-time Virtual Worlds. Original, unpublished research, practice, and experience papers are sought that address issues in all aspects of Real-time Virtual Worlds. Topics include, but are not limited to: * Modeling for Real-time Virtual Worlds * Real-time animation * Real-time rendering algorithms * Real-time motion control and motion capture * Real-time talking heads * Intelligent interfaces for real-time computer animation * Avatars and Real-time Autonomous Virtual Humans * 3D interaction with Virtual Worlds * Networked Virtual Environments * Artificial Life in Virtual Worlds * Virtual Worlds on the Web * Real-time audio and speech for Virtual Worlds * Real-time simulation * Games and entertainment applications Schedule: Paper Submission: October 31, 1998 Acceptance/Rejection Notification: January 15, 1999 Final Manuscript Submissions: February 15, 1999 Publication: Summer 1999 The editors for this issue of the Visual Computer are: Nadia Magnenat-Thalmann Associate Editor-in-Chief MIRALab, University of Geneva Email: thalmann@cui.unige.ch Daniel Thalmann Computer Graphics Lab EPFL Email: thalmann@lig.di.epfl.ch Submission guidelines: Authors may submit their paper either as an HTML URL or by ftp. For ftp, the electronic version of your manuscript should be submitted in PDF (preferred) or Postscript (compressed with gzip) using anonymous ftp to ligsg2.epfl.ch. The paper should be submitted as one file. The file name should be first author's name. Please follow the procedure: ftp ligsg2.epfl.ch username: anonymous password: cd tvc put In any case, you should send an email to tvcanim@lig.di.epfl.ch with the title of the paper, the authors with affiliation, the contact author, and either the URL or the filename used for ftp. For author guidelines, please consult: http://www.computer.org/multimedia/edguide.htm ______________________________________________________________________ KIllustrator 0.4 KIllustrator is a freely available vector-based drawing application for the K Desktop Environment similiar to Corel Draw(tm) or Adobe Illustrator(tm). 
Features include: * different object types: polylines, circles, ellipses, squares, rectangles, (symmetric) polygons, freehand lines, bezier curves and multiline text * tools for moving, scaling, rotating as well as grouping, ungrouping, aligning, distributing and reordering objects * various line styles and arrows * a multilevel undo/redo facility * a property editor * multi-window support with cut/copy/paste between the windows * zooming and snapping to grid * multilevel undo/redo * (network-transparent) drop support with the KDE filemanager * printing to PostScript (file or printer) * preliminary WMF support * export to raster image formats (GIF, PNG, XPM) and Encapsulated Postscript * import of Xfig files The installation requires a working KDE 1.0, QT 1.40 as well as gcc-2.8.1 or egc-1.03. KIllustrator is tested on Linux, FreeBSD and Solaris. For further information (screenshots, download) please consult my homepage at: http://wwwiti.cs.uni-magdeburg.de/~sattler/killustrator.html Please, for question, comments, bug reports or contributions e-mail me at kus@iti.cs.uni-magdeburg.de. Kai-Uwe Sattler ______________________________________________________________________ RenderPark RenderPark is a photo-realistic rendering tool being developed at the Computer Graphics Research Group of the Katholieke Universiteit Leuven, in Belgium. The goal is to offer a solid implementation of many existing photo-realistic rendering algorithms in order to compare them on a fair basis, evaluate benefits and shortcomings, find solutions for the latter and to develop new algorithms that are more robust and efficient than the algorithms that are available today. RenderPark will offer you several state-of-the-art rendering algorithms that are not yet present in other rendering packages, not even in expensive ones. Allthough RenderPark is in the first place a test-bed for rendering algorithms, it is evolving towards a full-featured physics-based global illumination rendering system. http://www.cs.kuleuven.ac.be/cwis/research/graphics/RENDERPARK/ [INLINE] Did You Know? ...there are two True Type® font servers based on the FreeType package: xfsft and xfstt. The latter is reported to have some problems with fonts over 90 pixels high and appears to go into "memory starved mode" after extensive use of the Text tool in the Gimp. Aside from these issues, however, both are reported to be fairly stable servers. ...The computer magazine PC Chip will be publishing an interview with Ton Roosendaal, owner of Not a Number which is the company bringing us the 3D modeller Blender. This interview has been placed online so readers can get an early glimpse at it. Q and A Q: Is there a way to include carriage returns with the text tool, or to align phrases created with individual uses of the text tool? A: I didn't know the answer to this one, but found the following answer on the Gimp-User mailing list (unfortunately I didn't get the responders name - my apologies to that person): Try the "Script-fu --> Utils --> ASCII 2 Image Layer" command. This allows you to import a text file as one or more layers of text. Note that this Script is available either from the Image Window menu's Script-Fu option or from the Xtns menu's Script-Fu option. Q: Mark Lenigan (mlenigan@umdsun2.umd.umich.edu) wrote to the Gimp User mailing list: I'm trying to create a transparent GIF with a drop shadow for the title graphic on my Web page. 
I'm pretty much following the cookbook from www.gimp.org/tutorials, except that I'm not including the background color layer and using "Merge Visible Layers" to keep the final image transparent. Everything goes fine until I need to convert the image to an indexed image just before I save it in the .gif format file. At that point the shadow in my image immediately disappears and the text seems to lose its anti-aliasing. Can anyone shed some light on to this? A: Simon Budig responded: Yes. Gimp can only handle 1-bit transparency in indexed color mode. So when you convert an image to indexed the different levels of transparency will get lost. There is the great "Filters/Colors/Semiflatten" plugin. It merges all partially transparent regions against the current Backgroundcolor. Select a BG-Color (i.e. matching to the BG-Color of your Web-page) and watch the effect of the plugin. Then you can convert your Image to Indexed and save it as GIF. (GIF can also handle just 1-bit transparency). [INLINE] Reader Mail zen@getsystems.com wrote: I'd like to hear more technical details of the internals of Gimp, and comparing Gimp to photoshop - eg. Photoshop 5 is now out with multiple undo - undo history list, even. 'Muse: Unfortunately, I can't do this sort of comparison. I don't run anything but Unix boxes (specifically Linux) at home and don't have access to any Photoshop packages. I might be able to do the comparison based on Photoshop texts, but thats the best I could do. Also modelling tools. Gimp is 2D. Where is 3D? Pov-Ray can render, but is there anything to compare with say Lightwave, or 3D-StudioMax? 'Muse: There are no real competitors to Lightwave or 3D-StudioMax for Linux. There are quite a few modellers available, each with different levels of sophistication. But none that compares to the sophistication of either of the two tools you mention. You can find a list of modellers in my June 1997 Graphics Muse column. Not all of the links in that issue are still valid. Some of the modellers seem to have disappeared and some have changed URLs. You can try a search using the package name through freshmeat.net if the links in the June 1997 issue don't work for you. One modeller that was not listed in that issue but that looks quite interesting is Blender, which is a commercial package that has only recently been released for free (no source code) to Linux users. I hope to do a review of it soon. However, the last version I tried was not documented sufficiently to allow me to understand how to do even the most basic tasks. The interface is complex and feature rich, but not intuitive to 3D newbies. Distributed rendering. 'Muse: I'll see what I can do about this. One tool to consider is PVMPOV, a patch to POV-Ray to allow for distributed rendering across multiple systems on a network. PVM is the Parallel Virtual Machine, a package for distributed processing used on many Unix systems. You should probably note that this is a patch to POV-Ray, so you'll need to understand how to apply patches to source code in order to use it. Just some things I'd be delighted to read about. Cheers, Zen. 'Muse: Again, thanks for the ideas. I'll see what I can do. [INLINE] [INLINE] Managing your Perl scripts: using 'require' Last month we talked about accessing an mSQL database from CGI scripts using Perl with two modules: CGI.pm and Msql. In the example described there we built a couple of HTML tables and embedded some text stored in a table in an mSQL database. 
It turns out that generating HTML using CGI.pm is quite simple, and using Perl with the Msql module makes combining your HTML output with information from a database really rather painless. But that example was extremely simple. Real-world examples often have dynamic pages that are built from multiple databases. And each page often links to other dynamically built pages that provide some, or even all, of the same information from those databases. In other words, parts of each page contain the same HTML formatting and data. How can you avoid having to duplicate that HTML in each page? With older static page development methods there really wasn't any way to include common regions in multiple pages unless you used frames. Frames allowed you to create a region on the browser display that would be a single page of HTML that could be displayed along with various other pages. In this way you need only maintain a single copy of that one common page. From a Web developer's point of view this was an ideal situation - it meant the probability of error in trying to update identical HTML in multiple pages was eliminated. It also meant less work. But to readers of those pages it could mean frustration, since not all browsers at the time supported frames. Even now, frame handling is not consistent between the two main browsers, Netscape Navigator and Microsoft's Internet Explorer. Although frames can be used to produce some terrific Web pages, they are not the ideal solution for supporting different browsers, especially older browsers. Fortunately, this problem can be overcome with our new friend Perl. The method for including common formats and data in multiple pages is simple. However, the management of these common regions takes a little thought. Let's first look at how to include Perl code from different files in your main Perl script. In Perl, a subroutine or other piece of common code would be written in a module, a separate file of Perl code. Modules can be included at any point within a Perl script. By default, Perl looks at a special variable called @INC to determine where to find these modules. Also by default, the current working directory, ".", is listed in the @INC variable as the last directory to search for modules. Note: @INC is a list variable, that is, it is an array of strings with each string being the name of a directory to search for modules. To include a module in your main Perl CGI script you would use the require function. The format is simple:

   require 'modulename.pl';

This function tells the Perl interpreter to include the named module, but only if it has not been included previously. In this way you can include the same module multiple times without worrying that doing so will cause serious problems. When the module is included, the code within it is run at the point of inclusion. You can, if you so desire, write the module to have code that runs right then and there, using variables with a global scope (i.e., they are visible to the original program as well as the included module). However, it would probably make more sense to write the module as a subroutine instead. You can still use globally scoped variables, but by making the module a subroutine call you can guarantee the code is not run until you specifically request it. You can also run it more than once if you want. So how do you make a subroutine?
Just wrap the code inside the following construct:

   sub subname {
   }
   1

The 1 at the end is important - a module must return a true value as its last statement or else the require function will fail. Now invoke the subroutine with the following command:

   &subname();

The ampersand is important - you should always prefix calls to your subroutines with the ampersand. Although things may work properly if you don't, proper Perl syntax suggests the results can be unexpected if you don't use the ampersand. If you want to pass parameters into the subroutine you can do so as a list. For example:

   &subname("one item");
   &subname("one item", "two items");
   &subname(@listitems);

To access those arguments inside the subroutine you can do something like the following:

   sub subname {
       # @_ contains all parameters to the subroutine.
       # We first assign these to the @params variable because the variable
       # name "@params" is a bit more intuitive than "@_".
       @params = @_;
       foreach $arg (@params) {
           # Now run through each parameter one at a time
           # and process it.
           if ( "$arg" eq "" ) {
               ...
           }
       }
   }

Musings

libgr - A collection of image libraries

Many users of graphics tools discussed in this column will find that those tools are dependent on any number of file-format-specific libraries. For example, the Gimp needs libraries for JPEG, PNG, PNM, MPEG and XPM in order to support these file formats. The Gimp doesn't understand how to read these files directly - it is dependent on the image format libraries for assistance in reading and writing files in these formats. Since the Gimp (and other tools) don't include these libraries in their source distributions, users are often required to retrieve and install these libraries manually. Normally users would download format-specific libraries and build them separately. Each of the formats mentioned earlier, plus a few others, is available somewhere on the Net in source form. Most are available somewhere on the Sunsite archives. Unfortunately, not all of these format-specific libraries are easily built on Linux. The Gimp User mailing list is often flooded with questions about how to get the JPEG library to build shared libraries. By default this library doesn't build a Linux ELF shared library. In fact, even with the proper configuration it still only builds a.out shared libraries. A better solution is needed. Enter libgr. This is a collection of image format libraries that have been packaged together and organized to build and install easily on Linux systems. The package builds both static and ELF shared libraries automatically. The distribution is maintained by Neal Becker (neal@ctd.comsat.com) and is based on the work done originally by Rob Hooft (hooft@EMBL-Heidelberg.DE). The latest version, 2.0.13, of libgr can be retrieved from ftp.ctd.comsat.com:/pub/linux/ELF. Libgr contains the following set of graphics libraries:
   * fbm
   * jpeg
   * pbm
   * pgm
   * pnm
   * ppm
   * png
   * rle
   * tiff
It also contains the zlib compression library, which is used specifically by the TIFF and PNG graphics libraries. It may also, although I'm not sure of this, be used by the FBM library to (at a minimum) support the GIF format. FBM is the Fuzzy Pixmap Manipulation library. This package is related to, but not part of, the PBMPlus package by Jef Poskanzer.
The library can read and write a number of formats, including: * Sun rasterfiles * GIF files * Amiga IFF * PCX * PBM * Face files (CMU format for 1bit files) * FBM * Utah RLE files (from the Utah Raster Toolkit) It also supports quite a number of image operations, all of which are described in the Features text file in the fbm directory. Like PBM, FBM is a format designed specifically by the FBM library author for handling images internal to the library (although you can write that format to a file too). JPEG is actually a standard that defines a suite of encodings for full-color and continuous-tone raster images1. The software for this library, which is essentially the same as the software that comes in the standalone JPEG library package found on the Gimp's ftp site, comes from the Independent JPEG Group and, as far as I can tell, supports the complete JPEG definition. JPEG is a common format for the Web since it is one of the formats listed by the WC3 in the early HTML specifications for Web images. The PBM, PGM, PNM, and PPM formats are all part of the NetPBM/PBMPlus packages. These formats are often used as intermediary formats for processing by the NetPBM/PBMPlus tools. Although these libraries provide the capability of saving image files in these formats, I have not seen this as a common practice. This is probably due to the fact that the files tend to be rather large and the image formats are not generally supported by non-Unix platforms. These formats are widely supported, however, by Unix-based graphics software. The PNG library supports the relatively new Portable Network Graphics format. This format was designed, at least in part, to replace the GIF format which had both licensing as well as a few format limitations. PNG is now an officially supported format by the WC3 although support for these images is not commonly mentioned by either Netscape or MSIE. I'm not sure if either supports PNG yet. RLE is Run Length Encoding, a format from the University of Utah designed for device independent multilevel raster images. Although the format is still in use today, you won't see it referenced often in relation to tools like the Gimp (though the Gimp does support the format) or 3D rendering engines like BMRT or POV-Ray. -Top of next column- [INLINE] More Musings... * Fun with QBist [INLINE] Finally, the TIFF library is a set of routines for supporting the reading and writing of TIFF files. TIFF files are popular because of their wide support on multiple platforms (Mac, MS, and Unix) and because of their high quality images. However, they tend to be extremely large images since they do not use any form of compression on the image data. Building the package Once you have retrieved the libgr package you can unpack it with the following command: % tar xvzf libgr-2.0.13.tar.gz This will create a directory called libgr-2.0.13. Under this directory you will find the format specific directories, Makefiles and a number of text files. In the INSTALL text file you will find instructions on how to build the software. For Linux this is a simple process of typing % make most which will build all the software but not install it. I recommend doing this once to test that the build actually completes successfully for all directories before trying to install anything. If the build fails and you attempt to install you may confuse yourself as to what has and hasn't been installed correctly. 
After the build completes, check each directory and see if the lib*.so files - the shared libraries - have been created. If all appears to have gone well, type % make install This will install the libraries for you. There are other options available for building and installing. Read the INSTALL text file in the top level directory for details on the other options. At this point you're ready to use these libraries with other tools, such as the Gimp. Why use libgr vs the individual libraries? Libgr provides support for a large range of image file formats, but it doesn't support every common and/or popular format. So why use it instead of the individual format libraries? One reason is convenience. Instead of having to retrieve a whole slew of packages you can grab one. Second, as mentioned earlier, not all of the individual packages are setup to build proper ELF shared libraries for Linux. Libgr is specifically designed for building these type of libraries. What libraries does libgr not include that you might want? One fairly common X Windows format is XPM. Libgr does not support this format so you'll need to retrieve the XPM library separately. Fortunately, most Linux distributions already come with this library prebuilt and available to you during installation of the operating system. Libgr also does not support any animation file formats. If you have need to read or write files in MPEG, FLI or FLC formats, for example, you will need to locate and install those libraries individually. Caveats One minor caveat to using the libgr package exists with the zlib distribution. According to the documentation for libgr (in the NEWS text file) the zlib release numbers went down at some point. This means its possible for you to have an older version of zlib installed even though its version number is higher than the one in libgr. How to resolve this is a tricky question but in my opinion it makes sense to install the zlib that comes with libgr because its known to work with the rest of the image libraries in the libgr package. If you agree with this logic then you will probably want to remove the old version of zlib first, before doing the make install for libgr. Summary Libgr is not a drop-in replacement for all your image file format needs, but it does offer added convenience to the Linux users by providing a Linux-specific, easy to use build and install environment. Since the libraries included in the libgr package do not change all that often it makes good system management sense to deal with the one distribution than to try to deal with updates to multiple image format packages. And if you're dealing with building the Gimp, which requires many image libraries, libgr is a much simpler solution to get you up and running in the least amount of time and with the least amount of frustration. [INLINE] ______________________________________________________________________ 1. C. Wayne Brown and Barry J. Shepherd, Graphics File Formats: Reference and Guide, Prentice Hall/Manning, 1995. [INLINE] Resources The following links are just starting points for finding more information about computer graphics and multimedia in general for Linux systems. If you have some application specific information for me, I'll add them to my other pages or you can contact the maintainer of some other web site. I'll consider adding other general references here, but application or site specific information needs to go into one of the following general references and not listed here. 
Online Magazines and News sources
C|Net Tech News
Linux Weekly News
Slashdot.org
Digital Video
Computer Graphics World
General Web Sites
Linux Graphics mini-Howto
Unix Graphics Utilities
Linux Sound/Midi Page
Some of the Mailing Lists and Newsgroups I keep an eye on and where I get much of the information in this column
The Gimp User and Gimp Developer Mailing Lists.
The IRTC-L discussion list
comp.graphics.rendering.raytracing
comp.graphics.rendering.renderman
comp.graphics.api.opengl
comp.os.linux.announce
Future Directions
Next month: Let me know what you'd like to hear about!
______________________________________________________________________
© 1998 Michael J. Hammel
_________________________________________________________________
Copyright © 1998, Michael J. Hammel Published in Issue 32 of Linux Gazette, September 1998
_________________________________________________________________
More... Musings
© 1998 Michael J. Hammel
Managing your Perl scripts: using 'require' - continued
Ok, so now you know how to make a subroutine and how to include it in your Perl CGI script. What does this have to do with building common HTML code for multiple pages? Simple: by assigning the HTML constructs, plus any associated database information, to global variables you can then simply add the variable to your main pages at the point of interest.
For example, let's say you want to include an advertising banner across the top of all pages. You can write a small module that builds a table for the ad, centers it on the page and assigns it to the global variable $adbanner. This might look something like this:

   #!/usr/bin/perl5
   # Include the CGI.pm and Msql modules
   use CGI qw/:standard :html3 :netscape/;
   use Msql;

   # The subroutine to create a table for our ads.
   sub setads {
      # Open a connection to the database.
      my $dbh1 = Msql->connect();
      $dbh1->selectdb('mydb');

      # Get the ads from the database. We assume here that there is
      # at least 1 ad in the "ads" table. We also assume the table
      # has the format of
      #    1. imagename
      #    2. URL
      # Each row returned by the query is stored in the @result list
      # variable, which will contain one element for each field in
      # the "ads" table.
      $sth2 = $dbh1->query("SELECT * FROM ads");
      while ( @result = $sth2->fetchrow ) {
         # Add an entry with the image for the ad linked to the
         # specified URL. The "a({-href" portion is where we use the
         # CGI.pm a() function to establish the hyperlink.
         push (@tableelements,
            td({-align=>'CENTER', -valign=>'CENTER'},
               a({-href=>"$result[1]"},
                  img( {
                     -src=>"/images/$result[0]",
                     -alt=>"$result[1]",
                     -border=>'0',
                     -hspace=>'0',
                     -vspace=>'0' } )
               )
            )
         );
      }

      # Now assign a table to our global variable and include the
      # table elements we just created.
      $adbanner = center(
         table( {-border=>1, -width=>'100%', -height=>'60'},
            Tr( @tableelements ),
         )
      );
   }

   # Return true from included modules.
   1;

Since embedding one Perl function inside another, especially with the use of the CGI.pm functions, is such a common occurrence I tend to align the closing parentheses so that I can keep track of which function has been closed. You'll note in this example that the img() function (which will print an HTML IMG tag) is an argument to the a() function (which assigns a hypertext link to the image). This in turn is an argument to the td() function.
Such multilayer embedding becomes quite extensive when you use the CGI.pm table functions (table(), Tr(), td()) to align elements of your HTML pages. This is why you will often find yourself using variables to which you assign Tr() and td() constructs and then simply reference the variables within the table() construct. At a minimum this makes the code easier to read. But even more important is that you can create lists of td() constructs to stuff inside a Tr() construct later by simply referencing the list variable.
If we now include this module in our main script we can then print out the advertisement table at any time we wish:

   require 'setads.pl';
   &setads();
   print header,
      start_html( -author=>'webmaster@graphics-muse.org',
         -title=>'Our Little Web Site',
         -bgcolor=>'#000000',
         -text=>'#000000' ),
      $adbanner,
      table( {-border=>0, -width=>'100%', -height=>'97%',
              -cellpadding=>0, -cellspacing=>0},
         Tr(
            td({-align=>'LEFT', -valign=>'TOP', -rowspan=>2,
                -width=>'110', -bgcolor=>'#FFCC00'}, $news_table),
            td({-align=>'CENTER', -valign=>'CENTER',
                -width=>'78%', -bgcolor=>'#FFCC00'}, $nav_bar),
            td({-align=>'CENTER', -valign=>'TOP', -rowspan=>2,
                -bgcolor=>'#FFCC00'}, $book_table)
         ),
         Tr(
            td({-align=>'CENTER', -valign=>'TOP', -height=>'80%',
                -bgcolor=>'#ffffff'}, $qd_table )
         )
      );
   print end_html;

Here we printed out the ad banner right above another table that will contain other information for this page. The variables $news_table, $nav_bar, $book_table, and $qd_table were filled in by parts of the code not shown here. They could just as easily have been filled in by other external modules, just like $adbanner was.
This last bit of code actually comes from the code I'm writing for the new Graphics Muse web site. I have a common table definition for all pages (the table printed after the $adbanner in the last example), and modules for assigning HTML formats and data to the $news_table, $nav_bar and $book_table. Then each main CGI script fills in the $qd_table variable with page-specific data. In this way I can make modifications to the way data is displayed in, for example, the news_table by only having to edit one script. Management of the site becomes much simpler than having to edit all the scripts each time a single change to news_table needs to be made, and I avoid annoying many users by avoiding the use of frames.
In the short time I've been using Perl I've grown to truly appreciate both its sophistication and its simplicity. Things that should be simple to do are simple. Additional tools like CGI.pm and Msql make integrating Perl with my Web site a breeze. I've managed to rebuild my Web site from the ground up in less than a couple of weeks and I'm not even making full use of what Perl can do for me. If you manage a Web site and have access to the cgi directory you should definitely consider learning Perl, CGI.pm, and one of the many databases which Perl supports.
Fun with Qbist
One of the more interesting plug-ins in the Gimp is Qbist, written by Jens Ch. Restemeier and based on an algorithm from Jörn Loviscach that appeared in the magazine c't in October 1995. I've had quite a good time playing with this plug-in creating backgrounds for logos and other images.
The filter is really pretty easy to use. The plug-in dialog is made up of a set of 9 preview windows. By clicking on any one of these the entire set is updated with new previews and the preview you clicked on is displayed as the new middle preview. This central preview is used as a basis to generate the rest of the previews.
You can generate a set of previews that are somewhat similar to the basis preview by clicking on the middle preview. In most cases, at least one of the previews will be significantly different from the basis. Selecting another preview usually generates quite different previews, but this isn't always guaranteed. The algorithm is sufficiently random to make it possible that not only can the other non-basis previews be radically different, they can also be nearly exactly the same as the original.
From a creative standpoint, I find this rather interesting. At times, when I'm tired of coding or writing, I pull this filter up and start to become creative. The patterns it generates are on the edge of randomness, with just enough recognizable geometry to make you say "No, that's not quite right, but it's close". The problem, of course, is it keeps you saying this ad infinitum until you realize it's long past midnight and you have just enough time for one cup of coffee and a shower before you have to be at work. But this is the kind of creativity I used to feel with coding when I first got my hands on a PC (ok, it was a TRS-80, but you get the point). It's refreshing to feel it again.
Once you've selected the preview you want in your image, making sure it's been selected and is displayed as the basis preview, you can add it to the current layer of your Image Window by clicking on OK. Qbist will fill the entire layer, or the active selection, with a scaled version of the basis preview. Since there are no blend modes for Qbist, the selection/layer will be completely overwritten with the Qbist pattern. The real trick to using these patterns comes from being able to make selections out of the geometrically connected pieces, creating drop shadows from the selections and slipping other images or text in between the layers.
Some drawbacks and limitations
Although I really like this filter, it does have a few drawbacks. First, opening the dialog doesn't always get you the same set of previews as the last time you opened the window, although the basis is the same. It would be nice if you could get the same set of previews since you may see another preview in the current Qbist session that you'd like to use after selecting the current basis. Unfortunately you won't be able to do that since the dialog closes after you click on the OK button. You can save the basis preview, but reloading it later has the same effect - the rest of the previews are random and not likely to be the same as the ones you had seen originally with that basis.
Another problem is that the Save/Load options don't deal with a Qbist-specific directory. A number of other plug-ins manage saved files within directories under the user's $HOME/.gimp directory. It shouldn't be difficult to update Qbist to do the same. It's just a matter of getting around to updating the code.
Speaking of the code, a quick examination of the source to Qbist shows some hard-coded values used in various places that appear to be the sort of values that should be user configurable. The interface could be expanded to allow the user to change these. I may try this sometime soon, just as an experiment to see how changes to these values affect the previews. Since I'm not familiar with the algorithm it's unclear whether these values are necessarily specific or just good initial seed values. Another option might be to allow the user to choose some color sets from which Qbist could render its patterns.
Right now Qbist chooses colors on its own, without input from the user.
Finally, probably the most annoying aspect to Qbist is that there are no blend modes available. I'd love to be able to render a Qbist pattern in one selection and then use another selection to blend a different pattern over a corner of the first selection. I can do this with multiple layers, but it would be more convenient to be able to do this from within Qbist itself. Qbist renders its patterns in both the previews and the image window fairly quickly, so changes like adding blend modes shouldn't cause serious performance problems.
Qbist is a plain fun filter. Like many of the Render menu filters, Qbist gives you a chance to explore some of your true creativity. By letting you wander through a random collection of patterns it lets you play with your computer in a way that a game can never quite equal. Although your control over these patterns is a bit limited, the patterns themselves are sufficiently fascinating to make Qbist a filter well worth exploring.
© 1997 by Michael J. Hammel
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
Installing StarOffice 4.0 on RedHat 5.1
by William Henning, Editor, CPUReview
Copyright July 29, 1998 ALL RIGHTS RESERVED
Today while shopping, I found StarOffice 4.0 (Commercial version) at a local CD-ROM shop. I already own (and use) ApplixWare, but I could not resist - given the usually positive reviews, I just *had* to try it. Please note that Caldera currently has a special on StarOffice 4.0 - $49.95US. That is an excellent price for a commercial license. Also note that StarOffice is available via ftp without cost for non-commercial use.
I wanted to see how it would perform on a fairly low performance system, so I loaded it onto my server. In order to benefit others, I thought I would document my installation. I will use it for a few days, after which I will write a review on my 'user' experiences.
The Software
StarOffice comes on two CDs, in a jewel case. The first CD contains StarOffice and also appears to contain OpenLinux Lite along with some additional contrib packages. The second CD, a pleasant surprise, appears to be OpenLinux Base. This means I will have a busy couple of nights - I'm going to have to try out OpenLinux.
The Computer
* Tyan Titan-II motherboard (Socket 5, 256K sync cache)
* WinChip 200MHz (yes, it does work in single voltage motherboards!)
* S3-968 video card, 4MB of VRAM running at 1024x768x16M
* 32MB FPM memory, 127MB swap
* 24x Panasonic CD-ROM
* 6.4GB Quantum ST
* DLINK 500TX 10/100Mbps 10BaseT, running at 10Mbps
The Operating System
* RedHat 5.1
* reasonably up to date with updates from RedHat
The Installation
I read the instructions - and the 'README' file. I logged in using my regular user account, went to /mnt/cdrom/StarOffice_40, and entered './setup'. After the installation program started up, I got the infamous dialog "line 1: syntax error at token 'I' expected declarator; i.e. File..." prompting me to press ok.
In all honesty, I must admit I was expecting this problem - I remember people asking for help with this very same problem while reading the Linux newsgroups. I went to Dejanews to find out how people solved this problem. I used "Staroffice 4.0 RedHat 5.1" as my search string, and got 61 matches.
First Try
The very first match was a posting from Simon Gao, who on July 27 wrote: This is a well known problem with RedHat 5.x.
The problem is that StarOffice4 requires libc.5.4.28 above file system. Check out at www.waldherr.org/soffice and you find how to solve this problem.
Off I went to Stefan Waldherr's web site. There I found that the version of StarOffice I purchased today is already outdated - and that I should download the latest version. As most people who purchase the commercial StarOffice package will get the same version I got (and as I did not want to wait to download 4.0.3 yet) I just downloaded the staroffice wrapper and proceeded to see if I could install 4.0 as shipped on the CD.
I became root to install the RPM. The RPM would not install; I was treated to an error message:
Error during install, staroffice tar file not found. Looking for any of the following files or directories /tmp/so40sp3_lnx_01.tar.gz /tmp/so40sp3_lnx_01.tar.gz
Since I *REALLY* did not want to download 4.0.3 yet, I got stubborn.
Second Try
I looked through some more messages, and based on the information I found, I tried the following: I ftp'd libc-5.4.46-1rh42.i386.rpm from ftp.redhat.com/pub/contrib/i386, and tried to install it. I got a "failed dependencies: ld.so >= 1.9.9 is needed by libc-5.4.46-1rh42.i386.rpm" message. Good thing I kept my ftp session open. I now ftp'd ld.so-1.9.9-1rh42.i386.rpm. This time I got a pile of glibc conflicts. Nope, there *HAS* to be a simpler way.
Conclusion: Third Time Lucky
Back to the drawing board - or DejaNews, as the case may be. I found an article by Tommy Fredriksson, originally posted in stardivision.com.support.unix. Tommy wrote:
In article <35A4B35E.CAA00699@actech.com.br> wrote:
> I just got StarOffice 4.0 ServicePack 3 but I can't run on my RedHat Linux
> 5.1 box, it shows that dreaded "line 1 syntax error at token 'l'", etc. RH
> 5.1 is libc6-based (glibc), but I also put libc-5 on my /lib directory.
> Even this would not make it work. Could someone help me on this?
Put your "libc-pack" anywhere you can find it - tell /etc/ld.so.conf (on top) where you put it and run ldconfig -v and look for errors - if non, install SO. That's all...
Based on this message, I improvised. To save all of you some work, here are some step-by-step instructions on how to install StarOffice 4.0 on RedHat 5.1:
1. Go to http://sunsite.unc.edu/pub/Linux/GCC/
2. cd to the home directory of the user you are installing it for
3. download libc-5_4_46_bin_tar.gz into the current directory
4. become root
5. mkdir ~/tmp
6. cd ~/tmp
7. tar xvfz ../libc-5_4_46_bin_tar.gz
8. cd lib
9. cp * /lib
10. edit /etc/ld.so.conf
11. add a new line at the top, "/lib" (without the quotes)
12. ldconfig -v
13. go back to the normal user session under X (stop being root)
14. cd /mnt/cdrom/StarOffice_40
15. ./setup
16. follow the prompts - I chose custom install, and let it install everything.
17. you can safely remove ~/tmp after you have installed StarOffice
It Works!
Following the README, I typed Office40/bin/soffice. After some disk activity, it ran! Note: I did not time how long it took, but it seemed like 20-30 seconds.
I chose to create a new document. I resized the window, and docked the paragraph style floating bar on the left hand side. The text in the default view was pretty poor, so I chose the 'Optimal' view (why don't they default to Optimal?) under the 'View' menu. This looked much better. I proceeded to type a few lines, and chose to print. I let it print as if to a PostScript printer. Lo and behold, my HP4L printed out the text quite nicely!
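If you would rather work from a single transcript than from the numbered list, the same procedure looks roughly like this (a sketch only: it assumes the download landed in /home/youruser, which stands in for your own home directory, and it uses an explicit path rather than ~ so it does not matter how su sets $HOME):

   # as your normal user: fetch libc-5_4_46_bin_tar.gz from
   # http://sunsite.unc.edu/pub/Linux/GCC/ into your home directory, then:
   su                                   # become root; su keeps you in the same directory
   mkdir /home/youruser/tmp
   cd /home/youruser/tmp
   tar xvfz ../libc-5_4_46_bin_tar.gz   # unpacks a lib/ subdirectory
   cd lib
   cp * /lib
   # edit /etc/ld.so.conf and add a line containing /lib at the very top, then:
   ldconfig -v
   exit                                 # drop root; back to your normal X session
   cd /mnt/cdrom/StarOffice_40
   ./setup                              # follow the prompts; the tmp directory can be removed afterwards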
Conclusion
I am afraid that a review of StarOffice will have to wait for another day. So far, I like what I see; however, I will only be able to intelligently comment on its features after using it for a while. Caldera or StarDivision has to make installation easier. I fully intend to try OpenLinux, and I am sure that the StarOffice installation will be much smoother than under RedHat.
At this point, a Linux beginner who tried to install StarOffice on a RedHat system, and was not used to using excellent resources such as Dejanews, would have a very frustrating experience. The fine help available on the net from individuals like Tommy Fredriksson, Stefan Waldherr and many others makes a mockery of the assertion that Linux has no support. I hope their postings and this article will save some time for those trying Linux for the first time.
I hope you enjoyed this article,
William Henning, Editor, CPUReview
_________________________________________________________________
Copyright © 1998, William Henning Published in Issue 32 of Linux Gazette, September 1998
_________________________________________________________________
"Linux Gazette...making Linux just a little more fun!"
_________________________________________________________________
An Interview with Linus Torvalds
By Alessandro Rubini
_________________________________________________________________
Alessandro: Can you please dedicate some of your time to us?
Linus: Sure, no problem, I'll try my best.
Alessandro: Are you happy with living in the States? I preferred having a Finnish leader rather than another American ruler.. how do you feel in this respect? Do you plan to come back?
Linus: I've been very happy with the move: I really enjoyed being at the University of Helsinki, but I decided that trying out something different was worthwhile, and so far the experience has been entirely positive. I agree that Finland is a lot more "neutral" in many ways, and that had its advantages in Linux development: I don't think anybody really dislikes Finland, while a lot of people are nervous about or even actively dislike the US. So in some sense that could have been a downside, but I felt that most people trusted me more as a person than as a Finn, so I didn't feel it to be a major issue.
Moving to the US has meant a lot better weather (even though this has been one of the rainiest winters in a _long_ time here in the Bay Area), and has also been very interesting. While I really liked my work at the University, the new stuff I'm doing is more exciting - more on the edge, so to say.
Alessandro: We all know about the USA restrictions on cryptography; do they affect Linux features in the field?
Linus: It doesn't seem to be a real issue. The idiocy of the US cryptography export rules was a problem even before I moved here: mainly because if/when we add strong cryptography to the kernel we would have to make it modular anyway in order to let the CD-ROM manufacturers (many of whom are based in the US) take it out in order to sell overseas. So me moving here didn't really change that fact - it only made it apparent that I can't be the person working on cryptography (something that was fairly obvious anyway, as I'm not really an expert in the area). Anyway, regardless of the above I sincerely hope that the US cryptography rules will be changed in the near future.
The US rules have made a lot of things more difficult than they should have been. (In all fairness the US isn't the only country with problems: the French have even more of a problem in this area and are trying to get other European countries to do the same thing. Happily the French are a fringe group of loonies in this matter, while the US has been a real problem due to being so central when it comes to information technology). Alessandro: Did you ever think about leaving your role as Linux coordinator or is it fun like it was in the beginning? If you would leave, what would your next project be? Linus: I've never seriously considered leaving - the only times the issue has come up is really when somebody has asked me what the succession would be in the case I no longer felt interested. Linux has always been so much fun to coordinate that while it obviously takes a lot of my time I have always felt that it was more than worth it. Alessandro: Out of curiosity, how long do you write code daily, and what is your current main activity? Linus: I usually don't spend all that much time coding on Linux any more: occasionally I have bursts of things I do when I code full-day for a few weeks or so, but those are fairly rare, and mainly happen when there is some fundamental shift in the kernel that I want to get done. During the last year it's happened four or five times, mainly with regards to SMP or the so-called "dentry" filesystem cache. Most of the time I spend reading and reacting to emails - coordinating the others working on things, commenting on ideas, and putting together patches. This is by far the most work: I'd say that my coding is only about 10%, while the coordination is 90% of the work. Alessandro: How did you manage to write a free kernel and still earn your living? Linus: Initially, I was a university student at the University of Helsinki. What that means in Finland is that you get support by the goverment for a number of years in order to be able to finish your degree, and there is also a possibility to get special student loans. I suspect Italy has something similar, although probably not as comprehensive as the Finnish system. And after a year or two I was actually employed by the university as first a teaching assistant and then later a research assistant, and the university also actively encouraged me to be able to write Linux at the same time. Right now, I obviously work at a commercial company, but even here I get to do a lot of Linux work even during work hours because even though Transmeta doesn't sell Linux or anything like that, there is a lot of _use_ of Linux inside the company, so me continuing to work on it is obviously supportive of the company. So I've always been able to do Linux together with doing my "real" work, whether that was studying or working for a university or working for a commercial entity. There has never been much of a clash, even though obviously my working hours aren't exactly nine to five.. Alessandro: Why didn't you turn to commercial support like Cygnus did? (I think I know why :-) Linus: I just never felt the interest to turn any part of Linux commercial: it would have detracted a lot more from my time to maintain a company or something like that, and it was never what I was interested in. It would also have implicated Linux money-wise: I wouldn't have been free to do what I wanted technically because I would be bound by constraints brought about by having to feed myself and my family. 
In contrast, working at the University or here at Transmeta, I make a living without having to involve Linux decisions in it - so I'm free to do whatever I want with Linux without having to worry whether it will pay my next month's rent.. I feel a lot happier not having those kinds of pressures on Linux, and I think most other developers feel the same way (they don't have to worry about my technical judgement being corrupted by any financial issues).
Alessandro: Do you think you changed the world or just fired the straw? (Again, I know you)
Linus: I started it, and I feel very proud of that. I don't think I "changed the world", but I feel privileged in being instrumental in changing a lot of lives - it's a good feeling to know that what you do really matters to a lot of people. I wouldn't go as far as saying that it "gives my life meaning", but Linux definitely is a _part_ of my life, if you see what I mean.
Alessandro: What's your opinion of Richard Stallman's work and philosophy?
Linus: I personally don't like mixing politics with technical issues, and I don't always agree with rms on a lot of issues. For rms, there are a lot of almost religious issues when it comes to software, and I'm a lot more pragmatic about a lot of things. As a result, we know we disagree about some things, and we actively don't try to work together too closely because we know it wouldn't work out very well.
The above may make it sound like I dislike rms, and at the same time that is not at all true. Rms has obviously been the driving force behind most of the current "free software" or "open source" movement, and without rms the world would be a poorer place. And he _needs_ to be religious about it to be that driven. So I guess the best way of saying it is that I really admire rms, but I wouldn't want to be him, because our worldviews are different.
Alessandro: On the practical side, what's the schedule for 2.2? What are the main differences between 2.0 and the upcoming 2.2?
Linus: As it looks now, 2.2 should be sometime early summer or so, but it's hard to judge: there are a few things that really need to get fixed, and before they are fixed there's no point in even thinking about it. Right now there's a bad TCP performance problem that is holding things up: everything _works_ ok, but it is serious enough that I can't imagine a 2.2 before it is fixed.
The changes 2.2 will have are mainly much more mature support for the new things in 2.0, namely SMP and multiple architectures. There are a _lot_ of other things in there (the new dentry code, totally rewritten NFS etc), but the SMP and architecture maturity is one of the most fundamental things that 2.2 will have.
Alessandro: Bruce Perens claims "world domination: 2003"; is that realistic? In your opinion, will the concept of free software gain popularity in the mass market? In this respect, what's your opinion about the move of Netscape Corp.?
Linus: The "World Domination" thing is obviously always a bit tongue-in-cheek, but I think that yes, a five-year timeframe for the free software movement and Linux to make a major noticeable impact is not at all unrealistic. The Netscape open source thing is one of the first indications of this, and I think we'll see others doing similar things.
The whole point of Linux is that there is definitely room for more than one operating system (especially if that one operating system is a bad one made by microsoft ;), and I don't see that changing - the FreeBSD's and other operating systems will be around. Maybe not in the same form (more specialization etc), but I don't see any fundamental issues here..
Alessandro: Or do you think that development of Wine and other tools will lead to the coexistence of two systems of similar technical value, one free and the other proprietary, running the same application programs? (Horrible question, IMHO.)
Linus: No, I think the development of Wine will be an important step for the PC operating systems arena, but that step will be in the way of leveling the playing field: when just about everybody can run the basic legacy Windows applications like MS Office etc, that allows the systems to really compete on being good at other things. So rather than having two systems of similar technical value, I think that you'd have many systems that are all able to run the same basic applications, but where the emphasis is on different things. Microsoft, for example, has always emphasized mediocrity and high volume, while Linux has emphasized (and will continue to emphasize) more technical issues.
Alessandro: Currently we lack free office applications. Is this a matter of time, or do you think that these programs will only be available from commercial companies?
Linus: I think that there will always be a niche for commercial programs, and while I think we'll see free office applications proliferate, I don't think that we necessarily _have_ to have them. The reason I personally want a free operating system and basic applications is that I really think that if the basics aren't stable and you can't modify them to suit your own needs, then you are in real trouble. But when it comes to many other areas, those issues are no longer the most pressing concerns, and then it is not as critical that you have free access to sources.
Alessandro: Sometimes we hear of so-called ``standards'' that remain proprietary (like I2O); is this the last rant of dying companies, or is free software at risk?
Linus: I don't worry too much about I2O and other proprietary standards. The whole idea of a proprietary standard has always failed - all of the successful standards these days are fairly open. Sometimes they are proprietary because the company that made them had enough clout to force it to be that way on its own, but I don't think that kind of clout exists anywhere else than at Intel and at Microsoft, and even those two are being eroded by competition.
Alessandro: What is your position about the availability of Linux modules in binary-only form?
Linus: I kind of accept them, but I never support them and I don't like them. The reason I accept binary-only modules at all is that in many cases you have, for example, a device driver that is not written for Linux at all, but works on SCO Unix or other operating systems, and the manufacturer suddenly wakes up and notices that Linux has a larger audience than the other groups. And as a result he wants to port that driver to Linux. But because that driver was obviously not _derived_ from Linux (it had a life of its own regardless of any Linux development), I didn't feel that I had the moral right to require that it be put under the GPL, so the binary-only module interface allows those kinds of modules to exist and work with Linux.
That doesn't mean that I would accept just any kind of binary-only module: there are cases where something would be so obviously Linux-specific that it simply wouldn't make sense without the Linux kernel. In those cases it would also obviously be a derived work, and as such the above excuses don't really apply any more and it falls under the GPL license. Alessandro: What do you think about the KDE-Qt question? Is Gnome going to succeed? Linus: I personally like Qt, and KDE seems to be doing fairly well. I'm taking a wait-and-see approach on the whole thing, to see whether gnome can do as well.. Alessandro: An interesting challenge is "band reservation" in the network subsystem; is that going to happen any soon in Linux? Linus: I'll have to pass on this one. It's not one of the areas I'm personally involved with or interested in, and as such it's not something I'm going to be very involved with any efforts that way. That's how Linux works: the people who need or want something get it done, and if it makes sense on a larger scale it gets integrated into the system.. Alessandro: Many people ask why the kernel is written in C instead of C++. What is your point against using C++ in the kernel? What is the language you like best, excluding C? Linus: C++ would have allowed us to use certain compiler features that I would have liked, and it was in fact used for a very short timeperiod just before releasing Linux-1.0. It turned out to not be very useful, and I don't think we'll ever end up trying that again, for a few reasons. One reason is that C++ simply is a lot more complicated, and the compiler often does things behind the back of the programmer that aren't at all obvious when looking at the code locally. Yes, you can avoid features like virtual classes and avoid these things, but the point is that C++ simply allows a lot that C doesn't allow, and that can make finding the problems later harder. Another reason was related to the above, namely compiler speed and stability. Because C++ is a more complex language, it also has a propensity for a lot more compiler bugs and compiles are usually slower. This can be considered a compiler implementation issue, but the basic complexity of C++ certainly is something that can be objectively considered to be harmful for kernel development. Alessandro: What do you think of the Java phenomenon? Did you ever consider integrating a Java VM, like kaffe, in the kernel? Linus: I've always felt that Java had a lot too much hype associated with it, and that is still true. I _hope_ sincerely that Java will succeed, but I'm pragmatic and I'm not going to jump on the Java bandwagon prematurely. Linux already supports seamless running of Java applications as it is, and the fact that the kernel only acts as a wrapper for the thing rather than trying to run the Java VM directly I consider to be only an advantage. _________________________________________________________________ This article is reprinted with the permission of Infomedia, Italy. An Italian translation of this article can be found at http://www.pluto.linux.it/journal/pj9807/linus.html. The interview was done by e-mail in February, 1998. 
_________________________________________________________________ Copyright © 1998, Alessandro Rubini Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ It takes its toll by Martin Vermeer mv@liisa.pp.fi The origin of the current mess can be traced back to a short spell of ultra-liberalism, when the government caved in to the pressure to cut taxes and eliminate the national debt by selling off the road network. Politicaly, it has been a success; taxes are consistently lower than they have been for long, and the man in the street seems to be satisfied. Of course in the beginning, the situation was quite messy; highway segments were auctioned off, and the result was toll booths everywhere, so you had to stop many times and have a lot of petty cash handy if you wanted to get anywhere. But then, gradually, a market leader appeared. Federal Transport Corp. bought strategically placed road segments, connected them into a countrywide network, made it impossible for anyone else to do the same, and slowly took over the rest. By the motorists, it was felt to be a blessing. Sure, prices went up; but you could get by with getting a yearly license and putting the barcode sticker on the roof of your car; you didn't even have to brake anymore when passing the toll station. And the more roads FT acquired, the better the offer they could make their customers; such are the ways of "network externalities". Obviously as many of us now realise, the net effect was no tax drop at all. The yearly fee to FT is just another tax, if you want to use your car to go anywhere at all; and what's worse, it is paid to an authority we didn't elect ourselves. There has been a groundswell of resistance, such as the freetown (or "open roads") movement, and I sympathise fully with this. I live in a freetown now; a small one, at the foot of the mountains. Others are on the coast, or around airports. Few are inland. We have our own road network that we own ourselves collectively, just like in the old days. If you want to go to another freetown, you have the options of air, rail and water transport, which are not (yet) under FT's control. If you want to visit people outside freetown land, you have to pay the toll, of course :-( This -- referred to as "gating out" -- is minimized by careful planning. You may ask, why did I choose to live in a place, and under a regime, that limits my freedom of movement so much? Well first of all, it is my own choice. I don't want to owe my "freedom" to an authority that does not represent me. And then, there are compensations. The people. Freetowners are active, involved citizens; everything is debated, and decisions are taken by informed people. Compare that to the way outside. It's a different culture really, and I like it. They are my kind of people. And, except in the matter of transport, life in a freetown is just as good or better than outside. There is a lot of employment in hi-tech; as I said, we are a sophisticated lot. And there are no advertisements of FT, like there are everywhere outside, enquiring politely but insistently where you would feel like going today... that really gets my blood pressure up. 
These are interesting times we live in; recently the freetown movement has gained a lot of interest and newcomers are flowing in. Resentment at the Federal Transport monopoly is tangible, now that fees are going up and road maintenance is being neglected. Earlier, just after the sell-off, roads were maintained well; you had the option of choosing alternative routes, and the toll revenue was channeled to maintenance and improvement. Now, many road segments seem to be in free fall down towards their natural state. You still have alternative routes to choose from; but they are all under FT's control and in uniformly poor shape. And then there is this crazy project called the RoadPlane. It is a gigantic vehicle, carrying hundreds of people at 200 mph along the highways, rolling along smoothly on smart-strutted wheels, navigated by satellite, electronic map and road radar. I have heard of people riding one of those things; quite an impressive experience, it appears. FT's slogan is "A Better Plane Than The Plane", but some bad accidents have happened already. It is a very complex system; OK as long as everything works, but winter weather, the poor state of the roads, and errors in the maps -- or an animal straying on to the road -- are hard to foresee and take into account. These problems have generally been glossed over in the media; FT represents a major advertising budget for them. RoadPlane is FT hybris at its best. It is a white elephant and that fills me with glee. This could be the undoing of FT, who knows. But it will only happen if people take the trouble to inform themselves, understand how they are being ripped off, and become active! Similarity to real events and circumstances is, again, purely and wholly intentional. _________________________________________________________________ Copyright © 1998, Martin Vermeer Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ Java and Linux By Shay Rojansky _________________________________________________________________ Not long ago, Javasoft celebrated Java's third birthday. Java, once seen as merely another way to animate and spice up web pages, has become much more than that. Nowadays, well-known software corporations have pledged their support to Java, and new Java APIs are being defined in record time. The Java technology enables programmers to finally write truly multi-platform programs, offers an advanced cross-platform GUI toolkit, embedded threading in the language and much more. At the same time, we are seeing remarkable events in the computer software world. Microsoft, the behemoth of the industry, is being seriously threatened by anti-trust action from both the Justice Department and 20 different states. Netscape has released the source code for Communicator and may be the first company to break free of Open Software prejudice. This has attracted much interest in Open Software from companies who have traditionally feared the concept. What do all of these events mean for the Linux operating system? It means we have a window of opportunity. Never before has the time been so right. On one hand, the industry is seriously taking a look at Linux as an open (and free) OS. Hey, if Netscape is doing it with their browser, why not an Open OS? 
On the other hand, Java technology offers a machine-independent way to write applications, and much of the industry has rallied behind it.
_________________________________________________________________
Java and the Linux Community
The Linux community itself, however, has always treated Java with an ambivalent attitude. The language that has promised to topple the hegemony of Microsoft, a dream like that of any Linux enthusiast, hasn't been accepted into the mainstream of Linux development. There are several reasons for this.
First and foremost, Java is a proprietary language owned by Sun Microsystems. This means that Sun controls every aspect of the language, the APIs and their licensing conditions. Tactics by Microsoft, like changing APIs in their Java suite J++ and rendering their virtual machine incompatible with other Java virtual machines, have forced Sun to seek exclusive rights to dub a product ``Java-compatible''. Although this may be the only way to fight Microsoft's unfair tactics, never before has a language been so much in the hands of a single corporation. The Linux community was born largely in protest of this kind of ownership.
Second, the multi-platform concept of Java, the Java Virtual Machine (JVM), means that programmers feel they are programming for the Java environment and not for the Linux environment. This also means that it's much more difficult to exploit the features of Linux.
Third, Java is still slow. Many promising enhancements are available, such as Just-in-Time compilers and Sun's Hotspot (still in beta). Java has certainly improved since it was first created, but it still requires a powerful platform. The Linux world is relatively speed-minded, and one of the main advantages of Linux is its ability to run on obsolete hardware.
_________________________________________________________________
The Advantages of Programming in Java
Despite all these shortcomings in the nature of Java, it is the only real challenge made in the last few years to Microsoft's rule. It is also an advanced language, written from the ground up with modern programming concepts in mind; all the flaws C++ retained from C for backwards compatibility are gone in Java, along with other complex features (multiple inheritance, for example). An automatic garbage collector removes the need to free memory, drastically reducing development time. Threads are so embedded in the language they become an indispensable tool for the programmer.
I hope Linux developers take a second look at Java as a development language and start using it regularly. Some Linux developers have already made impressive progress with Java tools, including several Java virtual machines (JVMs), several Just-In-Time (JIT) compilers and others. Take a look at these if you are considering using your Linux platform for developing Java. The Java-Linux resources page can be found at: http://www.blackdown.org/java-linux.html
I will now go over some of the key features in JDK 1.1.x. Note that the next version, 1.2, is in beta but should be available soon.
_________________________________________________________________
Object Serialization
Object serialization means taking an object and flattening it into a stream of bytes. In practice, this is usually used for two things--passing objects through a network and storing objects in files. Usually, a programmer who wishes to store a data structure on disk has to write a specific algorithm for doing so, which can be quite tedious. Java simplifies all this by doing it automatically for you.
For example, if you have a tree in memory and wish to pass it to another Java program on the network, all you have to do is pass the root object--Java will follow the pointers and copy the entire tree. If you have special considerations (like security), you may design the way the object is serialized.
_________________________________________________________________
Java Foundation Classes (Swing)
The original AWT, which is the windowing toolkit for Java, was very clunky and uncomfortable. Many components were missing and the programming model was needlessly painful. The current accepted toolkit for Java is code-named Swing. Swing offers a large number of lightweight components; they are fully implemented in Java but do not use the underlying windowing architecture as in AWT. This assures the same functionality across platforms. Another appealing feature is the completely pluggable look and feel, which lets you switch between Windows and Motif, for example, while the program is running. You can also design your own look.
_________________________________________________________________
RMI (Remote Method Invocation)
RMI is the Java equivalent of CORBA, which is a way to invoke methods in objects that are in a different JVM (or even a different machine). For those of you who know the RPC (Remote Procedure Call) frequently used on UNIX machines, RMI (and CORBA) are its object-oriented counterparts.
The concept of ``distributed programming'' has gotten very popular lately. In general, it means a very tight integration between programs across the network; objects in different machines can talk to each other simply by calling each other's methods. This is accomplished by having a Java program hold a ``stub'' of a remote object. Then, when a method is invoked on that stub, Java transparently sends the request over the network and returns the requested value. The extent to which distribution and serialization are embedded in Java shows the advantage of a modern language designed to support these concepts.
_________________________________________________________________
JNI (Java Native Interface)
Often programmers can get frustrated when they wish to use the benefits of Java to do something that is system dependent. The JNI allows you to interface with a native shared object and run its functions. This means you can write system-dependent code in C (or any other language) and use it from Java. Of course, as a result, your program would not be portable unless you supply the shared object for all platforms. This could be useful, for example, to catch signals in UNIX and to access the registry in Windows.
_________________________________________________________________
JDBC (Java Database Connectivity)
Java Database Connectivity is an SQL database access interface. It provides a database-independent way to perform SQL queries on any database that provides JDBC drivers. Currently, many popular databases do, and those that don't can still be accessed via the JDBC-ODBC bridge, which allows you to use ODBC drivers instead. For a list of database drivers see: http://java.sun.com/products/jdbc/jdbc.drivers.html.
Take a good look at Java. If we could manage to separate the applications from the operating systems running them, we'd have the freedom to choose which OS we like best. Although in spirit the Linux community has a ``renegade'' non-conformist element in it, Java has great potential and deserves our attention. The Linux-Java combination can turn into a winning one.
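If you want to try this yourself, the edit-compile-run cycle is short once one of the JDK ports from the Blackdown page is installed. A minimal sketch, assuming a JDK 1.1.x unpacked under /usr/local/jdk1.1.6 (an example path, not part of any particular package) and a trivial Hello.java whose main() just prints a line:

   export PATH=/usr/local/jdk1.1.6/bin:$PATH   # put the JDK tools on your path
   java -version                               # confirm which virtual machine you are running
   javac Hello.java                            # compile the source to bytecode (Hello.class)
   java Hello                                  # run the class file on the JVM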
Java Resources Java home: "http://java.sun.com/ Java developer connection (free registration): http://java.sun.com/jdc/ Swing (JFC): http://java.sun.com/products/jfc/index.html Java for Linux: http://www.blackdown.org/java-linux.html _________________________________________________________________ Copyright © 1998, Shay Rojansky Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ Linux Installation Primer By Ron Jenkins _________________________________________________________________ You've heard all the hype, and decided to find out what this "Linux" thing is all about. Or maybe you need a low cost alternative to one of the commercial operating systems. Perhaps you need an easy way to connect diverse systems and let them all communicate with each other..tomorrow.. or you'll be encouraged to "seek new employment challenges." In any case, you have a problem that needs a solution, or a curiosity that needs to be satisfied. Well, you have come to the right place. Join me as we take a journey into the exciting world of the Linux operating system. Please keep your hands inside the car at all times, and remain in your seat. _________________________________________________________________ What the heck is Linux anyway? Linux is a freely distributable version of Unix developed by Linus Torvalds and thousands of other programmers scattered all over the world. What started as a hacker's system, designed primarily for the technically adept, has now evolved in to a viable, stable operating system with a robust set of applications and tools making it suitable for both personal and mission critical commercial use. In just the past six months Linux growth has undergone an exponential expansion. Every day Linux gains more and more press and exposure. Many commercial vendors are announcing support, or ports of their products to the Linux operating system. I saw just the other day that Oracle and Informix, both major players in the Unix database world, have ports to Linux underway. _________________________________________________________________ Well, that's fine and dandy, but what does it mean to me? This is incredibly significant, not just to the techno-geeks (yes, that's me) but to the entire spectrum of computer users. One of the benchmarks of the commercial viability of any product is the support of the application vendors. While it's great fun for me to write my own programs and applications, most people just need to get some work done, on time, as easily as possible. Or perhaps you want to surf the net for entertainment, or playing games. Without the "killer apps", an operating system is doomed commercially. What this all means to you is never before has there been an operating system, with a robust set of applications and development tools, available for little or no cost, other than the "sweat equity" required to learn to use it effectively. An additional point to consider is that as you progress in your Linux education you are also increasing your skill level and, ultimately, your worth in the marketplace. One of the strengths of Linux is that you have the power to choose the depth of knowledge required to accomplish your tasks. 
Want to just bang out a document or two, or play games? You can. Want to surf the Internet and exchange e-mail with your friends and coworker's? No problem. Want to learn to program in a variety of different languages? Go ahead. The point here is Linux can do all these things, and much more. Additionally, with Linux, you are not required to fork over more money for each function you want to add. _________________________________________________________________ Okay. That sounds great, but I've heard that Unix is difficult to configure, hard to install, only for the pocket protector crowd, etc. While this was the case at one time, here and now, in 1998, it's simply not true. Major advancements have been made in the installation and configuration process, and in most cases Linux is no more difficult to install than any other operating system. With the advent of package managers, Graphical User Interfaces, and "smart" probing of your system's components, installation has become largely a moot issue. The truth is, you could not have picked a better time to join the Linux world. Granted, once you get to networking issues, there is more to it in a Unix machine than a Windows box, but with the new configuration utilities, combined with an intuitive, easy to understand process, I firmly feel that Linux is about as easy to configure as Windows. _________________________________________________________________ Well, if you say so, but if Linux is not a commercial product, where do I go if I get in trouble? Luckily, there are commercial distributions of Linux available, as well as commercial support contractors who will be happy to help you out. And usually for quite a bit less than the people in Redmond, and the Linux vendors actually answer the phone. And call you back. Now I'm going to tell you about Linux's secret weapon. Remember, those thousands of people I mentioned before? Well, there is a virtual universe, populated with people who are ready, willing, and able to help you out. You will find them on USENET, commonly called newsgroups, on Internet Relay Chat, commonly called IRC and in your local area, in the form of Linux User's Groups. As a matter of fact, this free noncommercial group of people have made such an impact on the end user community, that in an unprecedented move, Infoworld magazine named the Linux support community as a whole, as the 1997 Product of the Year! _________________________________________________________________ Okay, that all sounds good, but I've got an old 486 DX2/66 that's real slow. Would Linux do me any good? The answer is a resounding yes! Linux will run on anything from a diskless workstation, to an XT, to the latest whizbang hardware. As a matter of fact, I've used these machines for everything from routers to web servers, from dialup servers to file servers. I currently run 2 486 66's as my backup DNS machines, each hosting multiple zones. This is another one of Linux's strengths. The ability to take "obsolete" machines and do great things with them. This is a great low cost method for nonprofit organizations, and cost conscious organizations to squeeze extra value from "old" machines. The one exception to this is your video subsystem. X, the Graphical User Interface , is very picky about the video cards it will and will not support. This is primarily due to the fact that many video card manufacturers are reluctant to release specification information to Linux developers. 
However, support is improving every day, and there are also commercial X servers available to address these issues. The bottom line here is to try to make sure your video card is supported by X if you want to run more than VGA at 16 colors. That said, different distributions of Linux have different hardware requirements. And of course, I don't mean to imply that you should not take advantage of a newer machine if you have access to one. I simply want to convey to you that you don't have to have a Pentium II with 256 Megs of RAM, or a 600Mhz Alpha to be able to use Linux. As a general guideline, any 386 or better with 4MB of RAM or more should run quite nicely. If you plan on running X, 8MB would be better, 16MB better still. Generally speaking, the more RAM, the better. As a matter of fact, I often tell my clients that I would rather have a slower processor with more RAM, than a faster processor with less RAM. Contrary to what you may have been told, the processor speed is NOT the primary determining factor of performance. In reality the performance of your system is determined by the amount of RAM you have, The speed of your Disk subsystem, and your processor. In that order. Any type of IDE HDD, and any ATAPI CD-ROM drive will work quite nicely, as will most SCSI hosts and disks. However, SCSI installations can often be more involved, and will be covered in a separate document. _________________________________________________________________ Okay, you've sold me on the idea. What next? The first thing you will need to do is pick a distribution. Linux is packaged as collections of programs, applications, utilities, and the operating system, by different people and vendors. These are called distributions. There are many, fine distributions out there, and choosing the "right" one is a nebulous process. This is somewhat analogous to picking the "best" vacation spot, or the "best" outfit to wear. I will be discussing the Slackware 3.5, and RedHat 5.1, as these are the ones I am familiar with. Many of the descriptions and configuration options, most notably the autoprobing of PCI devices, and support for many newer video cards, are applicable ONLY to these distributions. All my comments and recommendations are just that - comments and recommendations. Your preferences may be entirely different. Slackware 3.5 The first distribution I ever used, and still my favorite. It has the option for either a Command Line Interface (CLI) install, or a Graphical User Interface (GUI) install. Uses Tarballs, or .tgz package format. I like this because I am not "forced" to install X just to use my system like some of the other distributions ( see below.) I am also given more control over what does and does not get installed. (Upgrade path is not too good.) Best for people who want to really learn about how the system works, and like installing and compiling their own software. A full install will eat up ~400MB of disk space. RedHat 5.1 This is the current "darling" of the commercial side of the Linux community. Probably the easiest to install. Forces the installation of the X window system, whether you want it or not. Uses the RPM package format to ensure all packages and programs are installed correctly (sort of.) Upgrade path is good. Currently has the lion's share of the media attention, and thus, application support. This is the one I recommend for people who want a working system quickly, and are less concerned about the internal workings of the Operating System. 
A full install will eat up ~600MB of disk space.

I had originally intended to do an in-depth comparison of the various distributions, but the August issue of the Linux Journal just arrived in my mailbox today, and I see that Phil has beaten me to it. I respectfully disagree with regard to the Caldera distribution: I am overwhelmed by its cost, and underwhelmed by its performance. Other than that, I would suggest you refer to his article for a more in-depth comparison. He has done an outstanding job, much better than I could have ever done.

_________________________________________________________________

How do I get the software?

Here you have several options. All the distributions I mention are freely available on the Internet for download. Additionally, RedHat and Slackware are available for purchase, either directly from the manufacturers, or through third parties. Finally, some or all of them are often bundled with books on Linux, or can be had at your local Linux User's Group's Install Party, an event where people bring in their computers and the hosts at your Linux User's Group will install the software for you.

IMPORTANT NOTE: While it is possible to install some of these distributions using FTP or NFS, I strongly urge you to acquire a CD-ROM for your first installation. See the resources section at the end of this document for vendors, or check your local book store. While an Install Party is probably the easiest method to get your system up and running, you will get more out of it by doing the installation yourself. Messing up and doing it yourself is the best way to learn.

_________________________________________________________________

What sort of planning should I do beforehand?

Excellent question. Here are some things to consider: While it is possible and feasible to have multiple operating systems residing on one system, I recommend using a separate machine if possible, or at least a separate disk or disks on your machine just for Linux. This will give you the confidence to bang away at it, install multiple times, and decrease the chance of harming your primary OS or data. Also, in later installments, I will show you how to make this machine do all kinds of neat tricks, like serve up your Internet connection, store files and applications, even become the starting point for your own home network.

_________________________________________________________________

I'm not rich, where can I find a cheap machine like you mention?

Check around in the paper, your local Linux user group, your place of employment or even your local community college for one of those "old" machines. They can often be had at little or no cost. What we are aiming for here is maximizing your chances for a successful installation; there will be plenty of time for you to learn the more esoteric methods as your Unix skills increase. If at all possible, try to get a separate machine, preferably with two Hard Disk Drives and an ATAPI-compliant CD-ROM.

_________________________________________________________________

That sounds like a lot of trouble. Can't I just try it out without all that extra stuff?

If you absolutely must disregard my warnings, and intend to try out Linux on your primary machine, BACKUP ANYTHING YOU CANNOT AFFORD TO LOSE ONTO FLOPPY DISK, TAPE, OR WHATEVER BACKUP DEVICE YOU PRESENTLY USE. IF YOU DON'T HAVE ONE, PUT THIS DOWN AND GO GET ONE! YOU HAVE BEEN WARNED.

Consider the Slackware distribution. It offers the option of running directly off of the CD-ROM.
_________________________________________________________________

Okay, I have the machine or extra disk(s), what next?

If you have not acquired a separate machine, refer to the warning above. BACKUP ANYTHING YOU CANNOT AFFORD TO LOSE.

The first thing you will need to do is create your boot disk, and in some cases, a root or supplemental disk. If you purchased the commercial distribution of RedHat, the required disks should already be included. The commercial version of Slackware should be bootable directly from the CD-ROM on newer systems. If you obtained the software bundled with a book, you will probably need to create the disk or disks yourself. You will need one or two DOS-formatted disks for this.

What boot image you need will depend on which distribution you are installing. For RedHat, look for the /images directory, which should contain two files named boot.img and supp.img. Normally only the boot.img disk will be required. For Slackware, look for a directory called /bootdsks.144, and another called /rootdsks. Unless you have something other than IDE devices in your machine, the bare.i image is the one you will be looking for as your boot disk. In the rootdsks directory, you will need the color.gz image for your root disk.

The method used for creating your boot and/or root disks will depend on whether you are using a Linux (or Unix) machine, or a DOS-based machine. If you are on a DOS-based machine, i.e., Windows 3.x, Windows 95, Windows 98 or Windows NT, you will need to use RAWRITE.EXE to create your images. This program should be included either in the same place as the images we just discussed, or under an /install or /dosutils directory in some cases. You will need to open a command prompt (sometimes called a DOS box) on your machine, or exit Windows to get to the command prompt. Then type:
RAWRITE
You will be asked for the source file name; enter bare.i. You will next be asked for your target drive; enter A:.

If the program errors out and complains about "Attempting to DMA across 64k boundary," FTP to sunsite.unc.edu, then cd to:
/pub/Linux/distributions/redhat/redhat-5.1/i386/dosutils/
and retrieve the version of RAWRITE there. It will be smaller than the one you were using (~14k), and the problem should go away. As I recall, this is only an issue on NT and possibly Windows 98 boxes.

If you are on a Linux or Unix box, the command to get it done is:
dd if=<image file> of=<floppy device> bs=1440k
So, if you are making a Slackware boot disk:
dd if=bare.i of=/dev/fd0 bs=1440k
For the root disk:
dd if=color.gz of=/dev/fd0 bs=1440k

_________________________________________________________________

Okay, I've got the proper disk(s). Now what?

Now insert the boot disk into your floppy drive and reboot your machine. At this point, you will be prompted to log in as root. After you log in, you must partition your disk or disks to prepare the HDD for formatting, and ultimately, the installation of your software.

Linux requires at least two partitions for installation. You must have a partition for your root or top level directory to live in, and you also need a partition for your swap file to live in. This is just a fancy way of saying you need at least one place on your hard drive to store your operating system, and one place on your hard drive to be used as a temporary storage area for your operating system to put things that are not immediately needed. If you are familiar with a Windows-based system, the root partition is the equivalent of your C:\ drive, and the swap file is the equivalent of your pagefile.sys.
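To make that concrete, a hypothetical single-disk layout might end up looking like this (the device name and sizes are only an example; yours will differ):

/dev/hda1    32MB            Linux swap (type 82)
/dev/hda2    rest of disk    Linux native, which will hold the root (/) filesystem

The next few paragraphs walk through actually creating and mounting partitions like these.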
Just as it is always a good idea on a Windows box to store your data on a separate device, apart from the operating system, the same rule applies to Linux. This is why I urge you to have two HDDs in your Linux machine. Depending on which distribution you choose, the process required to create the necessary partitions will vary. Similarly, whether you have one or two HDDs will also make the best partitioning scheme vary.

Slackware: Use the cfdisk utility. It is fairly easy to understand, and has decent help.
RedHat: You will probably want to use Disk Druid here.

For a single disk system, I would suggest two partitions: one swap partition, between 16 and 32MB in size, depending on how much RAM you have in your machine. The utility you are using may or may not ask you to specify the hex code to tell Linux that this is a swap partition. If prompted for it, the proper code is type 82. The rest of the disk should be partitioned as Linux native. Some might argue that there should be three partitions here, in case something goes wrong with the root partition, thus saving your data. I have rarely seen a disk fail in just "spots"; usually, if a disk commits suicide, it's an all-or-nothing kind of deal. I recommend two disks for precisely this sort of situation. The only time I have ever seen two disks fail at once was due to a lightning strike, which smoked the whole machine.

For a two disk system, I would suggest the following:
On the primary or first HDD (usually called hda in most distributions): create two partitions, as stated above.
On the second HDD or secondary IDE interface: another swap partition of 16 or 32MB as above. The rest of the drive should be partitioned Linux native.

After partitioning the disk or disks, you will be prompted to format them. Depending on the distribution used, at some point you will be asked where you want the Linux native partition or partitions to be mounted. This is simply asking you where in the filesystem hierarchy each formatted partition should reside. For single disk systems, mount your single Linux native partition as your root, or /, partition. For two disk systems, mount your first disk as described above, then mount the Linux native partition on your second drive as your /home directory. This will be where all of your user-specific information and files will be stored, thus preventing an OS failure from taking all your hard work and critical data with it.

THIS IS INTENDED TO COMPLEMENT, NOT REPLACE, A DILIGENT, REGULAR BACKUP SCHEME. I CAN'T STRESS ENOUGH THE IMPORTANCE OF REGULAR, RELIABLE BACKUPS.

If I seem to be a bit paranoid about backups, I proudly state that I am. I cannot begin to count the times my clients, friends and coworkers have snickered, giggled, and laughed outright when I talk about this. I am a constant source of jokes and entertainment for them. Until something goes wrong. Then I am suddenly a savior to them. By the way, when something like this happens to you, and it will, when all the suits are sweating bullets and looking to you like Moses with the backup tablets in each hand, this is a great time for salary negotiation.

_________________________________________________________________

Well, I've got the partitions made, and my disks are hungry for 1's and 0's. What are my options for installation, and what programs do I really need?

You have, with one notable exception, four possible choices for your Linux installation. I will list them in order, from the smallest to the largest.
EXCEPTION - Option one, running directly off of the CD-ROM is not available with the RedHat Distribution. 1. Running directly off of the CD-ROM, called a "live" filesystem. This is the best option for just trying out Linux with a minimum impact to your present system. Performance will be degraded, particularly if you have a slow CD-ROM. This is the ONLY option I can safely recommend if you are not doing this on a machine other than your primary system. The exact actions required to accomplish this will vary between the distributions, but will be called something like "run from CD-ROM", or "run from live filesystem" 2. A minimal, or base installation, with just enough stuff to get you up and running. Slackware: Select the following disk series: A AP (optional) RedHat: You can safely accept the defaults. (Not much choice here, accept the default, or it won't boot. You will be assimilated ;-). 3. A well rounded installation, consisting of the base stuff, plus some productivity, network, and development tools Slackware: Select the following disk series: A AP F D N X XAP RedHat: To the default selections, add: X applications Development tools 4. The entire distribution, sometimes called the "let's see how much this sucker can take" installation. Slackware: Select the top option, "complete installation" RedHat: Select the "Everything" option. A couple of suggestions concerning the everything install: Below the dialog box where you chose "Everything", there will be another box with the phrase "Choose individual packages." Select it. You will then be taken to another dialog box listing the categories of all the software that will be installed on the system. Scroll down to Documentation. For some reason RedHat wants to install the How-To's and things in every format known to man, and in every language spoken by man. Choose the text format and html format of the documents. The one exception to this is if for whatever reason, you would find it useful to have these documents in another language, in which case you should select the appropriate language desired as well. When you are finished, select done. This will save you a significant amount of disk space. Common to both of the distributions, the following tasks are ones you need to perform regardless of which distribution you use: 1. Creating boot and rescue disks. Slackware: Toward the end of the installation process, you will be asked to configure your new Linux system. I strongly recommend making both a lilo bootdisk, and a default, or vmlinuz bootdisk for your new machine, and choosing NO to the install LILO option. RedHat: Toward the end of the installation, you will be asked if you want to make a boot disk. Answer yes. Make several. If prompted to configure either X windows, or your networking, answer no. If you are forced to do either of these things for X, accept the defaults. For networking, if asked for a network address, use 127.0.0.1, or choose the "loopback" option if available. We will be configuring these things in the next installment. 2. Logging in as root for the first time and creating a user account for yourself. While there are times when it will be useful to be logged into your system as root, most of the time, you will want to be logged in to your own account on the machine. There are many reasons for this, not the least of which is that when logged in as yourself, just about the worst thing you can do is screw up your own account. However, when logged in as root, most of the safeguards built into the system go away. 
You can do anything, even things you should not do. Like hose the entire filesystem. This is both the strength, and the weakness of the superuser account. Treat it like a loaded gun. Don't pull it out unless you mean to use it. If you mean to use it make sure you have a clear target and put it right back in the holster as soon as you're done. Now that I hope I've properly scared you, here's what you need to do: Login as root. Then create a user account for yourself: adduser rjenkins You will be asked a series of questions. You can safely press enter to accept the defaults for these things. 3. Selecting and entering your root and personal user account passwords. Now you need to password protect the root account and your user account. Logged in as root, use the passwd command to do this for both the root or superuser account, and your personal account. passwd root And then your user account: passwd rjenkins A short comment on password selection and security. Good password discipline is very important, whether you are connected to a network or not. Briefly, here are a few guidelines: Choose something you can easily remember, say kibble. Now, add a punctuation mark and a number to it, say ?kibble4. Finally, for best security, a neat trick is to take the word you can remember easily, in this case kibble, and for each letter in the word, move up one row on the keyboard, and over either to the left or the right. So for ?kibble4 if we move up and to the left, we get: ?u8ggi34. If we go up and to the right we get: ?o9hhp44. This is easy to remember, and will defeat all but the most sophisticated password cracking programs. _________________________________________________________________ Navigating the Linux system, and obtaining help and information from the documentation. The first thing you will want to do is learn how to navigate your system. You will find a wealth of documentation in the /usr/doc directory. In particular, look at the /usr/doc/how-to directory, and check out the installation and user's guide. If you purchased your CD bundled with a book, make use of it. There should be enough information there, or in the doc directory to get you started. While the editors and document tools available will vary from distribution to distribution, every distribution should have vi available. You will probably either learn to love or hate it. There does not seem to be any middle ground, but I suggest you at least learn to use it, since it will allow you to plunk down at any Unix machine and use it. Much abbreviated, here's a short list of relevant commands: To open a file: vi filename To insert text in a file: Press the i key to enter insert mode, then enter your text. To write your changes to a file: Press the escape key, then :w <:enter> To close a file: Press the escape key, then :q An even better option is to use the Midnight Commander, if it is available on your system. Simply enter mc. It looks and acts a lot like the N*rton Commander, and makes an easy transition for anyone who has used that program, or is familiar with the DOSSHELL. Well, that's about it for now, Congratulations! See, that wasn't so hard now was it? In the next installment, we'll configure the X windowing system and your networking setup. 
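As one last worked example before the resource list, here is how the vi keystrokes above fit together in a minimal editing session (the file name is purely illustrative):

vi testfile        open (or create) the file
i                  enter insert mode, then type your text
<Esc>              press Escape to return to command mode
:w                 write your changes to the file
:q                 quit vi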
_________________________________________________________________

Resources

Software Manufacturers:
RedHat Linux: http://www.redhat.com/
Slackware: http://www.cdrom.com/

Third Party Distributors:
http://www.cheapbytes.com
http://www.linuxmall.com
http://www.infomagic.com/
http://www.cdrom.com

Local User Groups:
Most areas have several local computer-oriented publications available. Have a look for a local user group in your area. There is also a list of user groups by area at http://www.ssc.com/glue/groups/

_________________________________________________________________

Copyright © 1998, Ron Jenkins
Published in Issue 32 of Linux Gazette, September 1998

_________________________________________________________________

[ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next

_________________________________________________________________

Linux Kernel Compilation Benchmark
by William Henning
Editor, CPUReview
Copyright July 1, 1998 ALL RIGHTS RESERVED

I have purchased a K6-2/266Mhz processor and a Soyo 5EHM Super7 motherboard specifically so that I would be able to benchmark the K6-2 against Intel PII and Cyrix processors. I have been running Linux since the v0.10 days, so I thought it would be useful to perform some benchmarks under Linux. Here are my findings. As I have time (and access to equipment) to add additional results, I will update this page. Soon I hope to add PII results (ABIT LX6, 64Mb 10ns Hyundai SDRAM, Riva 128, same 1.6Gb WD hard drive).

System Description - Super Socket 7
* Soyo 5EHM motherboard (MVP3, AGP, 1M cache, Super7)
* 64Mb 10ns SDRAM (Hyundai, 2x32Mb sticks)
* 1.6Gb Western Digital hard drive
* Asus 3DExplorer AGP (Riva 128)

System Description - Slot 1
* ABIT LX6 Slot 1 motherboard
* 64Mb 10ns SDRAM (Hyundai, 2x32Mb sticks)
* 1.6Gb Western Digital hard drive
* Asus 3DExplorer AGP (Riva 128)

Methodology
* 'make clean; make dep; time make zdisk'
* using the same kernel configuration under Redhat 5.0
* Linux v2.0.32, with this .config file
* GCC v2.7.2.3

In the tables below, the User and System times are in seconds, Elapsed is in minutes:seconds, and BogoM is the BogoMips figure reported at boot.

_________________________________________________________________

Cyrix/IBM PR233 Results

_________________________________________________________________

PR Rating  Voltage  Setting  BogoM   User    System  Elapsed  CPU util
PR200      2.9      2.5X66   166.30  283.22  21.26   5:28.56  92%
PR233      2.9      2X100    199.88  236.4   17.48   4:35.97  91%
PR233      2.9      2.5X75   187.19  257.99  20.17   5:01.32  92%
PR266      2.9      2.5X83   207.67  233.75  19.51   4:35.40  91%

AMD K6-2 266 Results

PR Rating  Voltage  Setting  BogoM   User    System  Elapsed  CPU util
166        2.3      2.5x66   332.60  274.57  24.11   5:22.43  92%
187.5      2.2      2.5x75   374.37  244.5   20.38   4:47.52  92%
200        2.3      3x66     398.95  242.10  21.42   4:37.33  91%
210        2.2      2.5x83   415.33  221.5   19.96   4:18.61  93%
233        2.3      3.5x66   465     220.53  19.55   4:24.61  90%
250        2.2      2.5x100  499.71  183.13  17.64   3:43.42  89%
266        2.3      4x66     530.84  199.90  19.55   4:04.19  89%
280        2.2      2.5x112  558.69  164.17  15.29   3:23.83  88%
300        2.3      4.5x66   598.02  187.84  19.63   3:51.50  89%
300        2.3      4x75     598.02  176.94  19.26   3:37.84  90%
300        2.3      3x100    599.65  161.73  15.06   3:20.87  88%

Intel Pentium-II 233 Results

PR Rating  Voltage  Setting  BogoM   User    System  Elapsed  CPU util
233        Default  3.5x66   233.47  197.46  15.25   3:57.26  89%
262.5      Default  3.5x75   262.14  180.75  12.73   3:38.96  88%
291.7      Default  3.5x83   291.64  157.49  11.69   3:12.69  87%

Simulated Celeron Results - Intel Pentium-II 233 with L2 Cache disabled

PR Rating  Voltage  Setting  BogoM   User    System  Elapsed  CPU util
233        Default  3.5x66   233.47  324.07  20.19   6:08.43  93%
262.5      Default  3.5x75   262.14  291.43  16.96   5:32.61  92%
291.7      Default  3.5x83   291.64  262.19  16.10   5:02.45  92%
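For anyone who wants to reproduce a row of these tables, the figures come straight from the time report on the build. A rough sketch, assuming a configured 2.0.32 tree in /usr/src/linux and a floppy in the drive for make zdisk:

cd /usr/src/linux
make clean; make dep
time make zdisk

The report looks something like "183.13user 17.64system 3:43.42elapsed 89%CPU", which maps directly onto the User, System, Elapsed and CPU util columns above.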
_________________________________________________________________

Discussion of results

_________________________________________________________________

Please note, in the comparisons below, only the "User" time of the kernel compilations is used. For the Cyrix and Intel processors the BogoMips figure is roughly equal to the clock speed in MHz, while the AMD K6-2 reports a BogoMips figure of roughly twice the clock speed it is run at. In order to be able to make a direct, Mhz-to-Mhz comparison of the processors, I underclocked the K6-2 to run at 2.5x75 (Cyrix PR233 rating) and 2.5x83 (PR266 rating). Comparing the user times for the compilations, we find that:

Comparison by actual Mhz

CPU     Cyrix   Amd     Amd % faster
2.5x75  257.99  244.5   5.23%
2.5x83  233.75  221.5   5.24%

The AMD K6-2 processor seems to be about 5.2% faster than a Cyrix MX processor at the same clock speed. The PR rating system would not seem to apply to Linux kernel compilations.

Comparison by PR rating

CPU    Cyrix   Amd     Amd % faster
PR233  257.99  220.53  14.52%
PR266  233.75  199.90  14.48%

The AMD K6-2 is 14.5% faster than a Cyrix/IBM 686MX when the K6-2 runs at the clock speed the Cyrix chip is PR-rated for.

How about the Pentium II? I wanted to see how the P2 would compare to the K6-2. As I only have a P2-233, I had to overclock it to approach 300Mhz. Please note, I used an extra 3" fan blowing air at the CPU.

Comparison between P2 and K6-2

CPU        P2      Amd     P2 % faster
266Mhz #1  180.75  183.13  1.3%
300Mhz #2  157.49  161.71  2.6%

NOTES
1. P2 at 262.5Mhz (75x3.5), K6-2 at 250Mhz (2.5x100)
2. P2 at 291.6Mhz (83x3.5), K6-2 at 300Mhz (3x100)

The P2 was faster for compiling the kernel by less than three percent. There is no point in comparing the K6-2 to the Celeron - see the simulated Celeron benchmarks above. The Celeron is not suitable for use as a Linux development machine.

_________________________________________________________________

Price Comparison

_________________________________________________________________

As we all know, absolute performance is just part of deciding which processor to get. If cost were no object, we would all be running Kryotech Alpha 767s or dual PII-400s. For reference purposes, here are some prices, in US$, as of 1:14pm PST on July 5, from PriceWatch.

CPU         233   266   300
Cyrix       $49   $74   $96
AMD K6-2    n/a   $113  $163
AMD K6      $68   $93   $125
Pentium II  $158  $177  $235

Conclusion

There is no question that the Cyrix processors provide excellent performance for a low cost. The K6 (non-3D) processors are also an excellent value; however, as I don't have such a CPU, I was unable to run tests on one, but I would expect that on the same motherboard with similar memory and hard disk the performance of the plain K6s would be very close to the K6-2's. The K6-2 appears to be an excellent value for a developer's machine. A 14.5% increase in speed over the 686MX is difficult to ignore. The P2 is less than three percent faster than the K6-2 at comparable speeds. I do not think that such a small difference in speed justifies the price differential between the P2 and the K6-2. I hope you found this article to be of use. Please remember that I welcome feedback on this and other articles. I can be contacted at editor@cpureview.com.
Regards, William Henning Editor, CPUReview _________________________________________________________________ Copyright © 1998, William Henning Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ A Linux Journal Review: This article appeared in the November 1997 issue of Linux Journal. _________________________________________________________________ Linux Kernel Installation By David A. Bandel _________________________________________________________________ Linux is many users' introduction to a truly powerful, configurable operating system. In the past, a Unix-like operating system was out of reach for most. If it wasn't the operating system's 4-digit price tag, it was the hardware. Even the now free-for-personal-use SCO Unixware requires a system with SCSI drives, and most of us are using IDE to keep costs down. Along with the power that Linux brings comes the need to perform a task users have not had to do on simpler operating systems: configure the kernel to your hardware and operations. Previous installation kernels from 1.2.x and before suggested that you rebuild; however, with the new 2.0.x kernel, rebuilding has almost become a necessity. The kernel that comes with the installation packages from Red Hat, Caldera, Debian and most others, is a generic, ``almost everything is included'' kernel. While rebuilding a kernel may seem like a daunting task and living with the installed kernel may not be too bad, rebuilding is a good introduction to your system. _________________________________________________________________ Why Roll Your Own? The standard installation kernels are an attempt to make as many systems as possible usable for the task of installing a workable Linux system. As such, the kernel is bloated and has a lot of unnecessary code in it for the average machine. It also does not have some code a lot of users want. Then, of course, there's always the need to upgrade the kernel because you've bought new hardware, etc. Upgrading within a series is usually very straightforward. When it comes to upgrading, say from 1.2.something to 2.0.something, now the task is beyond the scope of this article and requires some savvy. Better to get a new distribution CD and start fresh--this is also true for upgrading to the experimental 2.1.x kernels. _________________________________________________________________ Kernel Version Numbering All Linux kernel version numbers contain three numbers separated by periods (dots). The first number is the kernel version. We are now on the third kernel version, 2. Some of you may be running a version 1 kernel, and I am aware of at least one running version 0 kernel. The second number is the kernel major number. Major numbers which are even numbers (0 is considered an even number) are said to be stable. That is, these kernels should not have any crippling bugs, as they have been fairly heavily tested. While some contain small bugs, they can usually be upgraded for hardware compatibility or to armor the kernel against system crackers. For example, kernel 2.0.30, shunned by some in favor of 2.0.29 because of reported bugs, contains several patches including one to protect against SYN denial of service attacks. 
The kernels with odd major numbers are developmental kernels. These have not been tested and often as not will break any software packages you may be running. Occasionally, one works well enough that it will be adopted by users needing the latest and greatest support before the next stable release. This is the exception rather than the rule, and it requires substantial changes to a system. The last number is the minor number and is increased by one for each release. If you see kernel version 2.0.8, you know it's a kernel 2.0, stable kernel, and it is the ninth release (we begin counting with 0). _________________________________________________________________ Assumptions I hate to make any assumptions; they always seem to come back to bite me. So I need to mention a few things so that we're working off the same sheet of music. In order to compile a kernel, you'll need a few things. First, I'll assume you've installed a distribution with a 2.0.x kernel, all the base packages and perhaps a few more. You'll also need to have installed gcc version 2.7 and all the supporting gcc libraries. You'll also need the libc-dev library and the binutils and bin86 packages (normally installed as part of a standard distribution install). If you download the source or copy it from a CD, you'll also need the tar and gunzip packages. Also, you'll need lots of disk real estate. Plan on 7MB to download, another 20MB to unpack this monster and a few more to compile it. Needless to say, many of the things we will discuss require you to be logged in as root. If you've downloaded the kernel as a non-privileged user and you have write permission to the /usr/src subdirectory, you can still do much of this task without becoming root. For the newcomers to Linux, I highly recommend you do as much as possible as a non-privileged user and become root (type: su - face) only for those jobs that require it. One day, you'll be glad you acquired this habit. Remember, there are two kinds of systems administrators, those who have totally wrecked a running setup inadvertently while logged in as root, and those who will. _________________________________________________________________ Obtaining/Upgrading the Source Kernel sources for Linux are available from a large number of ftp sites and on almost every Linux distribution CD-ROM. For starters, you can go to ftp.funet.fi, the primary site for the Linux kernel. This site has a list of mirror sites from which you can download the kernel. Choosing the site nearest you helps decrease overall Internet traffic. Once you've obtained the source, put it in the /usr/src directory. Create a subdirectory to hold the source files once they are unpacked using tar. I recommend naming the directories something like linux-2.0.30 or kernel-2.0.30, substituting your version numbers. Create a link to this subdirectory called linux using the following command: ln -sf linux-2.0.30 linux I included the -f in the link command because if you already have a kernel source in /usr/src, it will contain this link too, and we want to force it to look in our subdirectory. (On some versions of ln (notably version 3.13), the force option (-f) does not work. You'll have to first remove the link then establish it again. This works correctly by version 3.16.) The only time you may have a problem is if linux is a subdirectory name, not a link. 
If you have this problem, you'll have to rename the subdirectory before continuing:
mv linux linux-2.0.8
Now issue the command:
tar xzvf linux-kernel-source.tar.gz
I have a habit of always including w (wait for confirmation) in the tar option string; then, when I see that the .tar.gz or .tgz file is going to unpack into its own subdirectory, I ctrl-C out and reissue the command without the w. This way I can prevent corrupted archives from unpacking into the current directory.

Once you have the kernel unpacked, if you have any patches you wish to apply, now is a good time. Let's say you don't wish to run kernel 2.0.30, but you do want the tcp-syn-cookies. Copy the patch (called tcp-syn-cookies-patch-1) into the /usr/src directory and issue the command:
patch < tcp-syn-cookies-patch-1
This command applies the patch to the kernel. Look for files with a .rej extension in the /usr/src directory. These files didn't patch properly. They may be unimportant, but peruse them anyway. If you installed a Red Hat system with some but not all of the kernel source (SPARC, PowerPC, etc.), you'll see some of these files. As long as they're not for your architecture, you're okay.

_________________________________________________________________

Preparation

As a final note, before we change (cd) into the kernel source directory and start building our new kernel, let's check some links that are needed. In your /usr/include directory, make sure you have the following soft links:
asm - /usr/src/linux/include/asm
linux - /usr/src/linux/include/linux
scsi - /usr/src/linux/include/scsi
Now you see another reason to standardize the location of the kernel. If you don't put the latest kernel you wish to install in /usr/src/linux (via a link), the above links will not reach their intended target (dangling links), and the kernel may fail to compile.

_________________________________________________________________

How to proceed

Once everything else is set up, change directories into /usr/src/linux, although you may want to stop off and peruse some of the documentation in the Documentation directory, particularly if you have any special hardware needs. Also, several of the CD-ROM drivers need to be built with customized settings. While they usually work as is, these drivers may give warning messages when loaded. If this doesn't bother you and they work as they should, don't worry. Otherwise, read the appropriate .txt, .h (header) files and .c (C code) files. For the most part, I have found them to be well commented and easy to configure. If you don't feel brave, you don't have to do it. Just remember you can always restore the original file by unpacking the gzipped tar file (or reinstalling the .rpm files) again.

_________________________________________________________________

Beginning to Compile

The first command I recommend you issue is:
make mrproper
While this command is not necessary when the kernel source is in pristine condition, it is a good habit to cultivate. This command ensures that old object files are not littering the source tree and are not used or in the way.

_________________________________________________________________

Configuring the Kernel

Now, you're ready to configure the kernel. Before starting, you'll need to understand a little about modules. Think of a module as something you can plug into the kernel for a special purpose. If you have a small network at home and sometimes want to use it (but not always), maybe you'll want to compile your Ethernet card as a module.
To use the module, the machine must be running and have access to the /lib/modules directory. This means that the drive (IDE, SCSI, etc., but could be an Ethernet card in the case of nfs), the file system (normally ext2 but could be nfs) and the kernel type (hopefully elf) must be compiled in and cannot be modules. Modules aren't available until the kernel is loaded, the drive (or network) accessed, and the file system mounted. This support must be compiled into the kernel or it will not be able to mount the root partition. If you're mounting the root partition over the network, you'll need network file system support and your Ethernet card driver compiled in.

Why use modules? Modules make the kernel smaller. This reduces the amount of protected space never given up by the kernel. Modules load and unload, and that memory can be reallocated. If you use a module more than about 90% of the time the machine is up, compile it in. Using a module in this case can be wasteful of memory, because while the module takes up the same amount of memory as if it were compiled in, the kernel needs a little more code to have a hook for the module. Remember, the kernel runs in protected space, but the modules don't. That said, I don't often follow my own advice. I compile in ext2, IDE and elf support only. While I use an Ethernet card almost all the time, I compile everything else as modules: a.out, java, floppy, iso9660, msdos, minix, vfat, smb, nfs, smc-ultra (Ethernet card), serial, printer, sound, ppp, etc. Many of these only run for a few minutes at a time here and there.

The next step is to configure the kernel. Here we have three choices--while all do the same thing, I recommend using one of the graphical methods. The old way was to simply type: make config. This begins a long series of questions. However, if you make a mistake, your only option is to press ctrl-C and begin again. You also can't go back in the sequence, and some questions depend on previous answers. If for some reason you absolutely can't use either of the graphical methods, be my guest. I recommend using either make menuconfig or make xconfig.

In order to use menuconfig, you must have installed the ncurses-dev and the tk4-dev libraries. If you didn't install them and you don't want to use the next method, I highly recommend that you install them now. You can always uninstall them later. To run make xconfig, you must install and configure X. Since X is such a memory hog, I install, configure and startx only for this portion of the process, going back to a console while the kernel compiles so it can have all the memory it needs.

The xconfig menu is, in my opinion, the best and easiest way to configure the kernel. Under menuconfig, if you disable an option, any subordinate options are not shown. Under xconfig, if you disable an option, subordinate options still show; they are just greyed out. I like this because I can see what's been added since the last kernel. I may want to enable an option to get one of the new sub-options in order to experiment with it.

I'm going to take some space here to describe the sections in the kernel configuration and tell you some of the things I've discovered--mostly the hard way.

The first section is the code-maturity-level option. The only question is whether you want to use developmental drivers and code. You may not have a choice if you have some bleeding edge hardware. If you choose ``no'', the experimental code is greyed out or not shown.
If you use this kernel for commercial production purposes, you'll probably want to choose ``no''.

The second section concerns modules. If you want modules, choose ``yes'' for questions 1 and 3. If you want to use proprietary modules that come with certain distributions, such as Caldera's OpenLinux for their Netware support, also answer ``yes'' to the second question, since you won't be able to recompile the module.

The third section is general setup. Do compile the kernel as ELF and compile support for ELF binaries. Not compiling the proper support is a definite ``gotcha''. You'll get more efficient code compiling the kernel for the machine's specific architecture (Pentium or 486), but a 386 kernel will run on any 32-bit Intel-compatible clone, while a Pentium kernel won't run on a 386. An emergency boot disk for a large number of computers (as well as distribution install disks) is therefore best compiled as a 386.

Next comes block devices--nothing special here. If your root device is on an IDE drive, just make sure you compile that support in.

Then comes networking. For computers not connected to a network, you won't need much here unless you plan to use one computer to dial out while others connect through it. In this case, you'll need to read up on such things as masquerading and follow the suggested guidelines.

SCSI support is next, though why it doesn't directly follow block devices I don't know. If your root partition is on a SCSI device, don't choose modules for SCSI support. SCSI low-level drivers follow general SCSI support. Again, modules only for devices that don't contain the root partition.

The next section takes us back to networking again. Expect to do a lot of looking for your particular card here, as well as some other support such as ppp, slip, etc. If you use nfs to mount your root device, compile in Ethernet support. For those lucky enough to need ISDN support, the ISDN subsection will need to be completed.

Older CD-ROMs may require support from the next section. If you're using a SCSI or IDE CD-ROM, you can skip this one.

Next comes file systems. Again, compile in what you need (in most cases ext2) and use modules for the rest.

Character devices are chosen next. Non-serial mice, like the PS/2 mouse, are supported. Look on the bottom of your mouse. Many two-button mice are PS/2 type, even though they look and connect like serial mice. You'll almost certainly want serial support (generic) as a minimum. Generic printer support is also listed here.

The penultimate section is often the most troubling: sound. Choose carefully from the list and read the available help. Make sure you've chosen the correct I/O base and IRQs for your card. The MPU I/O base for a SoundBlaster card is listed as 0; this is normally 330, and your sound module will complain if this value is incorrect. Don't worry. One of the nice things about modules is you can recompile and reinstall the modules as long as the kernel was compiled with the hook. (Aren't modules great?)

The final section, kernel hacking, contains one question that should probably be answered ``no''. Save your configuration and exit.

I have, on several occasions, had trouble editing the numbers in menuconfig or xconfig to values I knew were correct. For whatever reason, I couldn't change the number, or config wouldn't accept the number, telling me it was invalid. For example, changing the SoundBlaster IRQ from the config default of 7 to 5, and the MPU base I/O from 0 to 300.
If you experience this problem, but everything else went well, don't despair. The file you just wrote when you did a ``Save'' and ``Exit'' is an editable text file. You may use your text editor of choice: Emacs, vi, CrispLite, joe, etc. Your configuration file is in the /usr/src/linux directory and is called .config. The leading dot causes the file to be hidden during a normal directory listing (ls), but it shows up when the -a option is specified. Just edit the numbers in this file that you had trouble with in the configuration process. Next, type make dep to propagate your configurations from the .config file to the proper subdirectories and to complete the setup. Finally, type make clean to prepare for the final kernel build. _________________________________________________________________ Building the Kernel We're now ready to begin building the kernel. There are several options for accomplishing this task: * make zImage: makes the basic, compressed kernel and leaves it in the /usr/src/linux/arch/i386/boot directory as zImage. * make zlilo: Copies the zImage to the root directory (unless you edited the top-level Makefile) and runs LILO. If you choose to use this option, you'll have to ensure that /etc/lilo.conf is preconfigured. * make zdisk: Writes zImage to a floppy disk in /dev/fd0 (the first floppy drive--the a: drive in DOS). You'll need the disk in the drive before you start. You can accomplish the same thing by running make zImage and copying the image to a floppy disk cp /usr/src/linux/arch/i386/boot/zImage /dev/fd0 Note that you'll need to use a high-density disk. The low density 720k disks will reportedly not boot the kernel. * make boot: Works just the same as the zImage option. * make bzImage: Used for big kernels and operates the same as zImage. You will know if you need this option, because make will fail with a message that the image is too big. * make bzdisk: Used for big kernels and operates the same as zdisk. You will know if you need this option, because make will fail with a message that the image is too big. Other make options are available, but are specialized, and are not covered here. Also, if you need specialized support, such as for a RAM disk or SMP, read the appropriate documentation and edit the Makefile in /usr/src/linux (also called the top-level Makefile) accordingly. Since all the options I discussed above are basically the same as the zImage option, the rest of this article deals with make zImage--it is the easiest way to build the kernel. For those of you who wish to speed up the process and won't be doing other things (such as configuring other applications), I suggest you look at the man page for make and try out the -j option (perhaps with a limit like 5) and also the -l option. If you chose modules during the configuration process, you'll want to issue the commands: make modules make modules_install to put the modules in their default location of /lib/modules/2.0.x/, x being the kernel minor number. If you already have this subdirectory and it has subdirectories such as block, net, scsi, cdrom, etc., you may want to remove 2.0.x and everything below it unless you have some proprietary modules installed, in which case don't remove it. When the modules are installed, the subdirectories are created and populated. You could just as easily have combined the last three commands: make zImage; make modules; make modules_install then returned after all the disk churning finished. 
The ; (semicolon) character separates sequential commands on one line and performs each command in order so that you don't have to wait around just to issue the next command. Once your kernel is built and your modules installed, we have a few more items to take care of. First, copy your kernel to the root (or /boot/ or /etc/, if you wish): cp /usr/src/linux/arch/i386/boot/zImage /zImage You should also copy the /usr/src/linux/System.map file to the same directory as the kernel image. Then change (cd) to the /etc directory to configure LILO. This is a very important step. If we don't install a pointer to the new kernel, it won't boot. Normally, an install kernel is called vmlinuz. Old-time Unix users will recognize the construction of this name. The trailing ``z'' means the image is compressed. The ``v'' and ``m'' also have significance and mean ``virtual'' and ``sticky'' respectively and pertain to memory and disk management. I suggest you leave the vmlinuz kernel in place, since you know it works. Edit the /etc/lilo.conf file to add your new kernel. Use the lines from the image=/vmlinuz line to the next image= line or the end. Duplicate what you see, then change the first line to image=/zImage (assuming your kernel is in the root directory) and choose a different name for the label=. The first image in the file is the default, others will have to be specified on the command line in order to boot them. Save the file and type: lilo You will now see the kernel labels, and the first one will have an asterisk. If you don't see the label that you gave your new kernel or LILO terminates with an error, you'll need to redo your work in /etc/lilo.conf (see LILO man pages). We're almost ready to reboot. At this point, if you know your system will only require one reboot to run properly, you might want to issue the command: depmod -a 2.0.x where x is the minor number of the kernel you just built. This command creates the dependencies file some modules need. You'll also want to make sure you don't boot directly into xdm. For Red Hat type systems, this means ensuring the /etc/inittab file doesn't have a default run level of 5, or that you remember to pass LILO the run level at boot time. For Debian systems, you can just type: mv /etc/init.d/xdm /etc/init.d/xdm.orig for now and move it back later. _________________________________________________________________ Normal Rebooting the New Kernel Reboot your machine using: shutdown -r now While typing reboot or pressing the ctrl+alt+del key combination usually works, I don't recommend either one. Under some circumstances, the file systems won't be properly unmounted and could corrupt open files. At the LILO prompt, if you need to boot the old kernel or pass some parameters for bootup and you don't see the boot: prompt, you can try pressing either the shift or ctrl key, and the boot: prompt should appear. Once you have it, press tab to see the available kernel labels. Type the label and optionally enter any parameters for bootup. Normally, however, the default kernel should boot automatically after the timeout interval specified in the /etc/lilo.conf file. During bootup, you may see a few error messages containing: SIOCADDR or the like. These usually indicate that a module (normally a network module) didn't load. We'll handle this shortly. If you got the error, ``VFS, cannot mount root'', you didn't compile the proper disk or file-system support into the kernel. 
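Going back to the LILO step for a moment, the edited /etc/lilo.conf stanzas might end up looking roughly like this (a sketch only; the root= device and the labels are assumptions that must match your own setup):

image=/vmlinuz
    label=linux
    root=/dev/hda2
    read-only
image=/zImage
    label=new
    root=/dev/hda2
    read-only

After saving a file along these lines and running lilo, both labels should be listed, with the asterisk marking the default.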
_________________________________________________________________

Troubleshooting

Due to the different ways in which each distribution handles daemon startup from /etc/inittab, it is difficult in this article to cover all the possible reasons your bootup may not have gone smoothly. However, I can tell you where to start looking.

First, run depmod -a to ensure you have an up-to-date module dependency file (it will be created in the appropriate subdirectory). If you get a string of errors about unresolved dependencies, old modules are present in the modules subdirectories, and you didn't configure the kernel with ``Module Versions'' enabled. This is not a fatal error. The modules you compiled and installed are good. Check the /etc/conf.modules file and make sure that any lines pointing to /lib/modules are complete:
/lib/modules/`uname -r`/xx
(Note: the grave quote on each side of uname -r is located above the Tab key in the upper left corner of a U.S. keyboard.)

Make sure kerneld is running and that it is loaded early in the bootup process. If it is, then the system doesn't need to explicitly load modules; kerneld will handle it. Be careful about calling kerneld too early in the first rc script: kerneld will stop the bootup process, forcing a hard reboot via the reset button or power switch, if it is called before the system knows its host name. If this happens to you, you can reboot passing LILO the -b argument, which prevents init from executing any rc scripts.

Next, look in /etc/rc.d/ at the rc, rc.sysinit and rc.modules files. One or more may point to a directory such as /etc/modules/`uname -r`/`uname -v` where a list of bootup modules is located. You can just copy the old file over to the new directory:
mkdir /etc/modules/`uname -r`
cp "/etc/modules/2.0.xx/#1 Thu 3 Sep 1997.default" "/etc/modules/`uname -r`/`uname -v`.default"
Your system will almost certainly have a different date for the modules file. Your system also may or may not use the .default extension. Pay close attention to the use of grave quotes and double quotes in the above example, since both are needed in the proper places.

Once you have found the keys to your system, you should be able to reboot into a properly functioning system. If you experience further problems, the best place to get quick, expert advice is on a mailing list dedicated to your particular distribution. Those successfully running a particular distribution usually delight in assisting novices with problems they may encounter. Why? Because they hit the same brick walls when they were novices and received help with many problems. Lurk a few days on a list, and if your question isn't asked by someone else, ask it yourself. Check the mail-list archives first, if any are present. These archives contain answers to frequently asked questions (FAQ).

_________________________________________________________________

Conclusion

While building a kernel tailored to your system may seem a daunting challenge for new administrators, the time spent is worth it. Your system will run more efficiently, and more importantly, you will have the satisfaction of building it yourself. The main area where you may encounter trouble is remembering to rerun LILO after installing the new kernel--but you didn't overwrite your old one (or did you?), so you can always revert to one that worked from the lilo: prompt. Distribution-specific problems during bootup may also be encountered during the first reboot, but are usually easily resolved.
Help is normally only an e-mail away for those distributions that don't come with technical support. _________________________________________________________________ Copyright © 1998, David A. Bandel Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ Patch for Beginners By Larry Ayers _________________________________________________________________ Introduction The aim of this article is to introduce new Linux users to an invaluable resource, Larry Wall's patch program. Patch is an interface to the GNU diff utility, which is used to find differences between files; diff has a multitude of options, but it's most often used to generate a file which lists lines which have been changed, showing both the original and changed lines and ignoring lines which have remained the same. Patch is typically used to update a directory of source code files to a newer version, obviating the need to download an entire new source archive. Downloading a patch in effect is just downloading the lines which have been changed. Patch originated in the nascent, bandwidth-constrained internet environment of a decade ago, but like many Unix tools of that era it is still much-used today. In the February issue of the programmer's magazine Dr. Dobb's Journal Larry Wall had some interesting comments on the early days of patch: DDJ: By the way, what came first, patch or diff? LW: diff, by a long ways. patch is one of those things that, in retrospect, I was totally amazed that nobody had thought of it sooner, because I think that diff predated patch by at least ten years or so. I think I know why, though. And it's one of these little psychological things. When they made diff, they added an option called -e, I think it was, and that would spit out an ed script, so people said to themselves, "Well, if I wanted to automate the applying of a diff, I would use that." So it never actually occurred to someone that you could write a computer program to take the other forms of output and apply them. Either that, or it did not occur to them that there was some benefit to using the context diff form, because you could apply it to something that had been changed and still easily get it to do the right thing. It's one of those things that's obvious in retrospect. But to be perfectly honest, it wasn't really a brilliant flash of inspiration so much as self defense. I put out the first version of rn, and then I started putting out patches for it, and it was a total mess. You could not get people to apply patches because they had to apply them by hand. So, they would skip the ones that they didn't think they needed, and they'd apply the new ones over that, and they'd get totally messed up. I wrote patch so that they wouldn't have this excuse that it was too hard. I don't know whether it's still the case, but for many years, I told people that I thought patch had changed the culture of computing more than either rn or Perl had. Now that the Internet is getting a lot faster than it used to be, and it's getting much easier to distribute whole distributions, patches tend to be sent around only among developers. I haven't sent out a patch kit for Perl in years. 
I think patch has become less important for the whole thing, but still continues to be a way for developers to interchange ideas. But for a while in there, patch really did make a big difference to how software was developed.

Larry Wall's assessment of the diminishing importance of patch to the computing community as a whole is probably accurate, but in the free software world it's still an essential tool. The ubiquity of patch makes it possible for new users and non-programmers to easily participate in alpha- and beta-testing of software, thus benefiting the entire community.

It occurred to me to write this article after noticing a thread which periodically resurfaces in the linux-kernel mailing list. About every three months someone will post a plea for a split Linux kernel source distribution, so that someone just interested in, say, the i386 code and the IDE disk driver wouldn't have to download the Alpha, Sparc, etc. files and the many SCSI drivers for each new kernel release. A series of patient (and some not-so-patient) replies will follow, most urging the original poster to use patches to upgrade the kernel source. Linus Torvalds will then once again state that he has no interest in undertaking the laborious task of splitting the kernel source into chunks, but that if anyone else wants to, they should feel free to do so as an independent project. So far no-one has volunteered. I can't blame the kernel-hackers for not wanting to further complicate their lives; I imagine it would be much more interesting and challenging to work directly with the kernel than to overhaul the entire kernel distribution scheme!

Downloading an eleven-megabyte kernel source archive is time-consuming (and, for those folks paying by the minute for net access, expensive as well), but the kernel patches can be as small as a few dozen kilobytes, and are hardly ever larger than one megabyte. The 2.1.119 development kernel source on my hard disk has been incrementally patched up from version 2.1.99, and I doubt if I'd follow the development as closely if I had to download each release in its entirety.

Using Patch

Patch comes with a good manual page which lists its numerous options, but 99% of the time just two of them will suffice:
* patch -p1 < [patchfile]
* patch -R < [patchfile] (used to undo a patch)
The -p1 option strips the left-most directory level from the filenames in the patch-file, as the top-level directory is likely to vary on different machines. To use this option, place your patch within the directory being patched, and then run patch -p1 < [patchfile] from within that directory. A short excerpt from a Linux kernel patch will illustrate this:

diff -u --recursive --new-file v2.1.118/linux/mm/swapfile.c linux/mm/swapfile.c
--- v2.1.118/linux/mm/swapfile.c  Wed Aug 26 11:37:45 1998
+++ linux/mm/swapfile.c  Wed Aug 26 16:01:57 1998
@@ -489,7 +489,7 @@
   int swap_header_version;
   int lock_map_size = PAGE_SIZE;
   int nr_good_pages = 0;
-  char tmp_lock_map = 0;
+  unsigned long tmp_lock_map = 0;

Applying the patch from which this segment was copied with the -p1 switch effectively truncates the path which patch will seek; patch will look for a subdirectory of the current directory named mm, and should then find the swapfile.c file there, waiting to be patched. In this excerpt, the line preceded by a dash will be replaced with the line preceded by a plus sign. A typical patch will contain updates for many files, each section consisting of the output of diff -u run on two versions of a file.
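To see where a file like that comes from end to end, here is a minimal sketch of creating a patch of your own and applying it (the directory and file names are hypothetical):

diff -u --recursive --new-file linux-2.1.118.orig/ linux-2.1.118/ > my-changes.patch
cp my-changes.patch /usr/src/linux
cd /usr/src/linux
patch -p1 < my-changes.patch

Because the filenames inside the patch carry one leading directory component (linux-2.1.118/mm/swapfile.c and so on), the -p1 switch strips it off and the hunks land in mm/, fs/, etc., relative to the directory you run patch from.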
Patch displays its output to the screen as it works, but this output usually scrolls by too quickly to read. The original, pre-patch files are renamed *.orig, while the new patched files will bear the original filenames. Patching Problems One possible source of problems using patch is differences between various versions, all of which are available on the net. Larry Wall hasn't done much to improve patch in recent years, possibly because his last release of the utility works well in the majority of situations. FSF programmers from the GNU project have been releasing new versions of patch for the past several years. Their first revisions of patch had a few problems, but I've been using version 2.5 (which is the version distributed with Debian 2.0) lately with no problems. Version 2.1 has worked well for me in the past. The source for the current GNU version of patch is available from the GNU FTP site, though most people will just use the version supplied with their distribution of Linux. Let's say you have patched a directory of source files, and the patch didn't apply cleanly. This happens occasionally, and when it does patch will show an error message indicating which file confused it, along with the line numbers. Sometimes the error will be obvious, such as an omitted semicolon, and can be fixed without too much trouble. Another possibility is to delete from the patch the section which is causing trouble, but this may or may not work, depending on the file involved. Another common error scenario: suppose you have un-tarred a kernel source archive, and while exploring the subdirectories under /linux/arch/ you notice the various machine architecture subdirectories, such as alpha, sparc, etc. If you, like most Linux users, are running a machine with an Intel processor (or one of the Intel clones), you might decide to delete these directories, which are not needed for compiling your particular kernel and which occupy needed disk space. Some time later a new kernel patch is released, and while attempting to apply it, patch stalls when it is unable to find the Alpha or PPC files it would like to patch. Luckily patch allows user intervention at this point, asking the question "Skip this patch?" Tell it "y", and patch will proceed along its merry way. You will probably have to answer the question numerous times, which is a good argument for allowing the un-needed directories to remain on your disk. Kernel-Patching Tips Many Linux users use patch mainly for patching the kernel source, so a few tips are in order. Probably the easiest method is to use the shell-script patch-kernel, which can be found in the /scripts subdirectory of the kernel source-tree. This handy and well-written script was written by Nick Holloway in 1995; a couple of years later Adam Sulmicki added support for several compression algorithms, including *.bz, *.bz2, compress, gzip, and plain-text (i.e., a patch which has already been uncompressed). The script assumes that your kernel source is in /usr/src/linux, with your new patch located in the current directory. Both of these defaults can be overridden by command-line switches in this format: patch-kernel [ sourcedir [ patchdir ] ]. Patch-kernel will abort if any part of the patch fails, but if the patch applies cleanly it will invoke find, which will delete all of the *.orig files which patch leaves behind.
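As a concrete sketch of that default arrangement (the version numbers and file locations here are only examples, and invoking the script through sh is an assumption; your copy may simply be executable), stepping a 2.1.118 tree up to 2.1.119 could look like this:

# kernel source in /usr/src/linux, patch-2.1.119.gz in the current directory
cd /usr/src
sh linux/scripts/patch-kernel

# or spell out both arguments, e.g. with patches downloaded to /tmp instead
sh linux/scripts/patch-kernel /usr/src/linux /tmp

If either location differs on your machine, substitute it in the two arguments.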
If you prefer to see the output of commands, or perhaps you would rather keep the *.orig files until you are certain the patched source compiles, running patch directly (with the patch located in the kernel source top-level directory, as outlined above) has been very reliable in my experience. In order to avoid uncompressing the patch before applying it, a simple pipe will do the trick: gzip -cd patchXX.gz | patch -p1 or: bzip2 -dc patchXX.bz2 | patch -p1 After the patch has been applied, the find utility can be used to check for rejected files: find . -name \*.rej At first the syntax of this command is confusing. The period indicates that find should look in the current directory and recursively in all subdirectories beneath it. Remember the period should have a space both before and after it. The backslash before the wildcard "*" "escapes" the asterisk in order to avoid confusing the shell, for which an asterisk has another meaning. If find locates any *.rej files it will print the filenames on the screen. If find exits without any visible output it's nearly certain the patch applied correctly. Another job for find is to remove the *.orig files: find . -name \*.orig -print0 | xargs -0r rm -f This command is sufficiently cumbersome to type that it would be a good candidate for a new shell alias. A line in your ~/.bashrc file such as: alias findorig='find . -name \*.orig -print0 | xargs -0r rm -f' will allow just typing findorig to invoke the above command. The equals sign is required by bash's alias syntax, and the single quotes are necessary because the aliased command contains spaces. In order to use a new alias without logging out and then back in again, just type source ~/.bashrc at the prompt. (A shell-function equivalent of this alias appears as a footnote at the end of this article.) Incidental Comments and Conclusion While putting this article together I upgraded the version of patch on my machine from version 2.1 to version 2.5. Both of these versions come from the current FSF/GNU maintainers. Immediately I noticed that the default output of version 2.5 has been changed, with less information appearing on the screen. Gone is Larry Wall's "...hmm" which used to appear while patch was attempting to determine the proper lines to patch. The output of version 2.5 is simply a list of messages such as "patching file [filename]", rather than the more copious information shown by earlier versions. Admittedly, the information scrolled by too quickly to read, but the output could be redirected to a file for later perusal. This change doesn't affect the functionality of the program, but does lessen the human element. It seems to me that touches such as the old "...hmm" messages, as well as comments in source code, are valuable in that they remind the user that a program is the result of work performed by a living, breathing human being, rather than a sterile collection of bits. The old behavior can be restored by appending the switch --verbose to the patch command-line, but I'm sure that many users either won't be aware of the option or won't bother to type it in. Another difference between 2.1 and 2.5 is that the *.orig back-up files aren't created unless patch is given the -b option. Patch is not strictly necessary for an end-user who isn't interested in trying out and providing bug-reports for "bleeding-edge" software and kernels, but often the most interesting developments in the Linux world belong in this category. It isn't difficult to get the hang of using patch, and the effort will be amply repaid.
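A small footnote to the alias above: if you would rather use a shell function than an alias (purely a matter of taste; the name findorig is simply reused from the alias), the equivalent entry in ~/.bashrc is:

# remove the *.orig back-up files left behind by patch
findorig () {
    find . -name '*.orig' -print0 | xargs -0r rm -f
}

A function sidesteps the alias quoting rules entirely; the body is ordinary shell code, so it is also easier to extend later.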
_________________________________________________________________ Last modified: Mon 31 Aug 1998 _________________________________________________________________ Copyright © 1998, Larry Ayers Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ Typist: A Simple Typing Tutor By Larry Ayers _________________________________________________________________ Recently a small, ncurses-based typing tutor program appeared on the Sunsite archive site. Typist is a revision of an old, unmaintained Unix program. Simon Baldwin is responsible for this updated version, and he has this to say about the origin of his involvement: This program came from a desire to learn 'proper' typing, and not the awkward keyboard prodding I've been doing for the past 10 years or more. Since I usually run Linux rather than Windows or DOS, I looked around for a tutor program, and surprisingly, found nothing in the usual places. Eventually, I stumbled across Typist - a little gem of a program for UNIX-like systems. The original worked great, but after a while I started noticing odd things - some lessons seemed to go missing, and the programs were apt to exhibit some strange behaviours. After fixing a few bugs it seemed that the time was right for something of a rewrite. Don't expect a Linux version of Mavis Beacon; Typist has a simple but efficient interface without extraneous graphical fluff. Start it up and here is what you will see: [1st Typist screenshot] Once a choice of lessons has been made, a series of help screens explains the usage of the program. Here is a lesson screenshot: [Typist lesson screenshot] The general idea is to type the exact letters or words shown on the screen; if a mistake is made a caret is shown rather than the letter typed. If no mistakes were made, the next section of the lesson appears; otherwise the first section is repeated until there are no errors. After each run through a lesson, a box appears showing typing speed and number of errors. A Dvorak lesson is even included for those willing to swim against the tide in the pursuit of greater typing speed. I've considered learning the Dvorak system, but have refrained due to my family's occasional need to use my machine. I don't want to make the transition between Windows and Linux systems more of a culture shock than it already is! Typist's small size and spartan interface do have the advantages of quick start-up and low overhead, making it ideal for quick usage in the intervals between other tasks, or while waiting for a web-site to load. Typist also exemplifies one of my favorite scenarios in the free software world: an old source code archive languishing on an FTP site somewhere is now revived and given new life and new users. At the moment, the only source of the program seems to be the /pub/Linux/Incoming directory at the Sunsite archive site. Presumably Typist will eventually be filed away elsewhere on the site, but I don't know just where it will end up. Incidentally, Typist has now been re-released under the GNU GPL.
_________________________________________________________________ Last modified: Mon 31 Aug 1998 _________________________________________________________________ Copyright © 1998, Larry Ayers Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ Paradigm Shift By Joe Barr _________________________________________________________________ paradigm shift paradigm shift (pair uh dime shift) 1. a profound and irreversible change to a different model of behavior or perception. 2. an epiphany with staying power. 3. a sea change of such magnitude that it alters the course of all who pass through it. Paradigm shifts. Thinking back over my years in the industry, there haven't been that many. Especially when you consider the thousands of times the term has been used. The move of the center from the glass house to the desktop certainly qualifies. Likewise the rethinking of systems analysis and design, from the physical to the logical, was echoed by structured programming. But programming was to be swept by a second, perhaps even more fundamental change in perspective as we moved from procedural languages to object oriented. And to put it in everyday terms, there is a whole new mindset today when you connect to the internet than there was when you reached out to touch a BBS. There have been plenty of impostors: bubble memory, the death of mainframes, quality management, cold fusion, new Coke and "push content" on the web. It's often impossible to tell the difference between la buzz de jour and the first stirrings of a new-born, honest to baud, as real as the day is long, paradigm shift. The incubation period can last for years. Eventually, though, a thing either emerges and changes everything or it quietly fades away. Only then can you know for sure. I believe we are at the edge of the largest paradigm shift in the history of the industry. This one will smash the current model beyond recognition. Our children and our children's children will look back at the first age, the first 30 years of personal computing, and see it for the barbaric, archaic, self inhibiting, self impeding dinosaur that it is. A paradigm shift does not mean one thing is simply replaced by another. A new force field appears, draws attention to itself, and may coexist with, perhaps even languish alongside for some period of time, the model that it will replace. There may even be a longer period of time during which the original gradually fades away. The shift occurs, quite simply, when you wake up one day and find yourself seeing things in a new way, from a new perspective. The glass house and the personal computer? That one has been underway for many years. Microsoft has eclipsed IBM as the largest seller of software in the quarter just ended. The shift, by definition, never occurs in isolation. There must be related spheres, energizing pulses, co-dependent orbs circling the prime. It is when the catalyst works its magic that you are transported. Suddenly you are "there." Object oriented programming has been around for quite awhile now. I remember in the early 80's my brother asking if I had taken a look at Smalltalk yet. He seemed quite taken with the language and what it was about. 
I toyed with the turtle and got some inkling of objects and inheritance, but I really couldn't see that much would ever happen in the real world with Alan Kay's brainchild. Years later C++ would begin to move into the mainstream. Not replacing Cobol and C but just establishing its own place in the landscape. OO methodologies began to abound as more and more people crossed the line. But the big push hadn't even happened yet, Oak hadn't even dropped the acorn that became Java. Today, with the wildfire popularity of Java among developers, with its entry into the enterprise not only assured but an established fact, with its continued maturing and fleshing out, it is Java that is carrying the banner of object oriented programing to the dwindling herd of procedural programmers. Of course, in the time between Kay's conceptualization of objects, GUIs and cut-and-paste, and where we are today, it has not always been clear that this was the kind of stuff that would have profound, far-reaching impact on the way we look at software and design, the way we look at the tasks to be done and how we plan to do them. To many of the brightest and the best, at least to many outside of the Learning Research Group at Xerox Palo Alto Research Center during the 70's and 80's, bubble memory was much more likely to be the next big thing. And so it is with some trepidation that I hereby formally and officially predict that we are today awash in the first tides of a sea change that will once again change everything. But keep in mind, my dweebs, that my track record as a Karmac for Computing is something less than perfect. It was in the fall of 1978 that I told Sam Skaggs, then president of Skaggs-Albertsons superstores, the first marriage of drug and grocery emporiums, that scanning technology would never work in a grocery store. And in 1994 I predicted OS/2 would win the desktop from Windows. So don't bet the digital dirtfarm on this just yet. Your narrator is guessing, just as every other pundit who looks out past the breakers for first signs of the swell that will become the next big wave. My hunch is this: free/open source software will emerge as the only sensible choice. Feel the tremors in the Northwest? This one could be killer. There has been much debate over which term ("free software" or "open source") is the best choice, the most descriptive, and the truest to its philosophical roots. I am not going to go there. I will compromise by using both terms interchangeably. But please note that the word free in "free software" applies to a state of being, not to its price. It is about freedom. Also note that the hottest software product in the world today, Linux, qualifies as free software under this definition, whether you download it for free from the internet or pay anywhere from $1.99 to $99.99 for specific distributions. Linux is the only non-Windows operating system in the world that is gaining market share. How hot is it? It's almost impossible these days to keep up with articles in the press about Linux. A mailing list dedicated to Linux News recently had to split into three separate lists in order to handle the load. Linus Torvalds, its creator, is on the cover of the August issue of Forbes. Every major computer trade publication is showering it with attention. Oracle, Ingres, and Informix have just announced they will be porting their database products to Linux. Caldera has just announced (and has available for free download today) a Netware server for Linux. 
And that's just the news from the past two weeks. Linux has cache, bebe. The roots of Linux-mania began in the early 80's when Richard Stallman founded the GNU Project. Stallman had worked at MIT during the 70's and witnessed the destructive (in terms of group productivity and effort) nature of restrictive licensing of proprietary software. He wanted to create a free, modern operating system that could be used by everyone. In the GNU Manifesto (1983), he explained why he must write GNU: "I consider that the golden rule requires that if I like a program I must share it with other people who like it. Software sellers want to divide the users and conquer them, making each user agree not to share with others. I refuse to break solidarity with other users in this way. I cannot in good conscience sign a nondisclosure agreement or a software license agreement. For years I worked within the Artificial Intelligence Lab to resist such tendencies and other inhospitalities, but eventually they had gone too far: I could not remain in an institution where such things are done for me against my will. So that I can continue to use computers without dishonor, I have decided to put together a sufficient body of free software so that I will be able to get along without any software that is not free. I have resigned from the AI lab to deny MIT any legal excuse to prevent me from giving GNU away." By the time (almost ten years later) Linus Torvalds had a good working Linux kernel available, the GNU project had most of the non-kernel essentials ready. It was a marriage made in free/open source software heaven, and Linus converted the original Linux license to the GPL (GNU's General Public License). After all, it seemed the obvious choice to the young college student who had wanted to create a free version of Unix that everyone could use. Not only is Linus a true code wizard, he is delightfully perfect for his role today as poster boy of the free/open source movement. Every interview, every public appearance, each bit of history about him and Linux unearthed reveals a warm, wise, friendly, candid and particularly unpretentious personality. How else could someone whose views are so diametrically opposed to those of Bill Gates and the money mongers end up on the cover of Forbes? But Linux is not the only success story in the world of free/open source. Netscape rocked the commercial world earlier this year when it announced it would free the source code for its browser and make it available for download to anyone who wanted it. Netscape now claims that the browser has been improved as much over the past couple of months as it would have in 2.5 years in its closed source environment. FreeBSD, a rival for Linux in the UNIX like, free/open source sector, has its own fanatical users and supporters. Just this past week it shattered an existing world record for total bytes transferred from an FTP site in a single day. CRL Network Services, host of the popular Walnut Creek CD-ROM ftp site, announced on July 30th that they had moved over 400 gig of files on July 28, 1998. The previous mark of about 350 gig had been set by Microsoft during the Win95 launch period. Oh, one other thing. The FreeBSD record was set on a single 200Mhz Pentium box. The Microsoft record was set using 40 separate servers. Results like those are probably the driving force behind the emerging model. 
The performance just blows away what Windows is able to deliver in their closed, sealed, NDA protected, shoot you if you see it source code, proprietary model. Eric S. Raymond, keeper of the tome on internetese called "The Jargon File" and author on the must read essay "The Cathedral and The Bazaar," talks about the success he had with FETCHMAIL using the Bazaar model of development. Lots of eyes on the code: bugs are found more quickly, enhancements made more quickly, design becomes more normalized. But Linus is the candle for the moth. Leo LaPorte had him as a guest on his ZDTV show the night that Win98 was launched. I caught him in chat on the way out and asked him how SMP was looking for the next release. He said it looked very good. It seems he is always this accessible, and that is part of his magic and part of the reason for the success of Linux and shift in thinking about software development. For open software to not only flourish but become the norm, at least for those essential bits, like operating systems, that everyone needs to run, there must be huge successes to attract the rest of the crowd. Linux and FreeBSD are two of those attractions. Linus is the advantage that Linux holds over FreeBSD, not in a technical sense, but in a human sense. To get a sense of what Linus is like, it's interesting to follow his exchange of USENET messages with Andy Tanenbaum, the creator of Minix. Linus began his 386 experience with Minix and began to extend it to create Linux. He and Andy exchanged a series of messages in comp.os.minix over the issues of microkernel architecture, truly free software, and the relative merits of Minix and Linux. It began with a post by Tanenbaum which said in part: "MINIX is a microkernel-based system. The file system and memory management are separate processes, running outside the kernel. The I/O drivers are also separate processes (in the kernel, but only because the brain-dead nature of the Intel CPUs makes that difficult to do otherwise). LINUX is a monolithic style system. This is a giant step back into the 1970s. That is like taking an existing, working C program and rewriting it in BASIC. To me, writing a monolithic system in 1991 is a truly poor idea." To which Linus replied: "True, Linux is monolithic, and I agree that microkernels are nicer. With a less argumentative subject, I'd probably have agreed with most of what you said. From a theoretical (and aesthetical) standpoint Linux loses. If the GNU kernel had been ready last spring, I'd not have bothered to even start my project: the fact is that it wasn't and still isn't. Linux wins heavily on points of being available now. >>MINIX is a microkernel-based system. >>LINUX is a monolithic style system. If this was the only criterion for the "goodness" of a kernel, you'd be right. What you don't mention is that minix doesn't do the micro-kernel thing very well, and has problems with real multitasking (in the kernel). If I had made an OS that had problems with a multithreading filesystem, I wouldn't be so fast to condemn others: in fact, I'd do my damnedest to make others forget about the fiasco." Notice what is missing from the post? Even though his pet project, the fledgling Linux, has been slapped around pretty hard by the man who created its predecessor, Linus did not fall into the trap of name calling and hysterics that too often goes hand-in-glove with online debate. Notice what is present in the post? Concession of valid points made by Tanenbaum. 
Factual assertions that represent Linux quite nicely, thank you very much. And even for this well behaved defense, Linus closed with this: "PS. I apologize for sometimes sounding too harsh: minix is nice enough if you have nothing else. Amoeba might be nice if you have 5-10 spare 386's lying around, but I certainly don't. I don't usually get into flames, but I'm touchy when it comes to Linux :)" For all his dweebness, Linus is a people person. He is likable. He is brilliant. He is passionate about Linux but not to the point of resorting to bashing its detractors or alternatives to it. Earlier I mentioned an ongoing debate among proponents of the terms "free software" and "open source software." That is really symptomatic of a deeper argument over what type of licensing free/open source software should have. There is the GNU GPL that Linux uses, and there is the BSD model. Listen to Linus the diplomat walk that tightrope (while still making his preference known) in an interview with Linux Focus's Manuel Martinez: "I'd like to point out that I don't think that there is anything fundamentally superior in the GPL as compared to the BSD license, for example. But the GPL is what _I_ want to program with, because unlike the BSD license it guarantees that anybody who works on the project in the future will also contribute their changes back to the community. And when I do programming in my free time and for my own enjoyment, I really want to have that kind of protection: knowing that when I improve a program those improvements will continue to be available to me and others in future versions of the program. Other people have other goals, and sometimes the BSD style licenses are better for those goals. I personally tend to prefer the GPL, but that really doesn't mean that the GPL is any way inherently superior - it depends on what you want the license to do.." His views on the Evil Empire? Strong, perhaps, but certainly not inflammatory or angry. In his words, from the same interview: "I can certainly understand the "David vs Goliath" setup, but no, I don't personally share it all that much. I can't say that I like MicroSoft: I think they make rather bad operating systems - Windows NT is just more of the same - but while I dislike their operating systems and abhor their tactics in the marketplace I at the same time don't really care all that much about them. I'm simply too content doing what I _want_ to do to really have a very negative attitude towards MicroSoft. They make bad products - so what? I don't need to care, because I happily don't have to use them, and writing my own alternative has been a very gratifying experience in many ways. Not only have I learnt a lot doing it, but I've met thousands of people that I really like while developing Linux - some of them in person, most of them through the internet." Three potentially disasterous discussions on red button issues: Linux versus Minix, the GNU GPL license versus that of BSD, and Linux versus Windows. In each he makes his points politely but with utter candor. One last example. There is finally an official Linux logo. It is the cute, fat and friendly Penguin you often see on Linux sites. There was heated debate among the Linuxites on the choice of the logo. Many wanted something other than a cute, fat penguin. Something more aggressive or sleek, perhaps. 
Linus calmed these waters at the release of Linux 2.0 by saying: "Some people have told me they don't think a fat penguin really embodies the grace of Linux, which just tells me they have never seen an angry penguin charging at them in excess of 100mph. They'd be a lot more careful about what they say if they had." He is completely believable, obviously passionate about the project, and possessed of a contagious good humor. Linux could have no better leader from a technical point of view, and it couldn't have a better poster boy either. Its success more than anything else is pulling the rest of the world's mindset towards the notion of free/open source software. Nicholas Petreley raised the issue of open source software recently in his forum at InfoWorld Electric. It triggered a huge number of responses about the phenomenon. There may even be an Open Source magazine in the works. I credit my rethinking on this software dynamic to the reading I did there. I believe it is what finally made me realize that a paradigm shift has already occurred. That we are no longer discussing a possibility, but simply what is. The conclusion to the Forbes article behind the Linus cover calls for the Department of Justice to take note of the success of Linux in growing market share and to call off the investigation of Microsoft as an unregulated monopoly. While I consider that a lame conclusion (the DOJ should be interested in enforcing antitrust law whether Linux is flourishing or not), I can't help but wonder if there's not some truth to the inspiration for that thinking. That it won't be government intervention or regulation that busts up Microsoft, but a revolution in our thinking about software. The Dweebspeak Primer, http://www.pjprimer.com/ _________________________________________________________________ Copyright © 1998, Joe Barr Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ Running Remote X Sessions on Windows 95/98/NT/Mac/PPC Clients By Ron Jenkins _________________________________________________________________ Copyright © 1998 by Ron Jenkins. This work is provided on an "as is" basis. The author provides no warranty whatsoever, either express or implied, regarding the work, including warranties with respect to its merchantability or fitness for any particular purpose. Corrections and suggestions are welcomed by the author. He can be reached by electronic mail at rjenkins@unicom.net. This document came about as a result of a client's problem and my solution. I have since seen this question asked a zillion times on USENET, right up there with "Why can't Linux see all my (insert your >64MB number here) of RAM?" _________________________________________________________________ The original problem One of my clients had a rather classical old-style Unix host-to-dumb-terminal setup, connected through multiple serial termservers. They also had a PC on every desk, also connecting through a "dumb" serial connection. The problem was that they needed to administer the host, as well as run many other programs on the host that required a GUI. To accomplish this, they utilized a couple of Unix workstations. Obviously this was unacceptable, as they had everyone fighting for time on the workstations.
The version of Unix they were running had no CLI other than a network telnet session or the aforementioned serial setup; all other administration went through their proprietary interface running on top of X. A quick investigation showed an X server running on the host, but not being utilized. A previous consultant from the company they purchased the two systems from had suggested X Terminals as a solution, which, by coincidence, they just happened to have handy. They never did tell me what his quote was, but rumor has it that it was staggering. (Look at the price of an X Terminal sometime and you'll see what I mean.) Enter Linux. First, I did away with the serial connections on the PC's and got them on a switched 10 base T network. Next, I set up a couple of 486/100's as file servers and proxy hosts, using ip_masq and Samba. These machines then connected to the external WAN over a 10 base 2 bus. All the suits had quota'd storage, could e-mail and memo the begeezus out of each other, surf the 'net, and were happy as clams. _________________________________________________________________ What does this have to do with X sessions and Windows? One word - POLITICS. To convince the suits (the ones with the money) to let me use Linux to solve the problem for the programmers and administrators (the ones who actually do the work to produce the money), I had to impress them first. While they don't understand diddly squat about the technical side of the business, they do understand I gave them e-mail, file services, intranet, and Internet access for just the cost of my time, since they had the 486's sitting in a closet collecting dust. Now I had the go-ahead for the X solution I proposed, which was two more 486's, also already on site and not being used, upgraded to SCSI-3 Ultra Wide disks with the RAM honked up, to serve as X proxies, for reasons I can't go into. This interposes an additional barrier between the Xhost and the clients. You shouldn't need this, so I'm going to pretend everything behind the 486's does not exist. Just to make it really fun, I was also asked to include the web design department on this subnet, who were all on Mac's and Power PC's. After creating a 10 base T subnet with the 486's and the clients wired up and TCP/IP configured on all the clients, it was time to show 'em some magic. From this point forward, the 486 will be referred to as the "X host", and any Windows 95/98/NT/Mac/PPC machine will be referred to as "the client". Step One: On the X host, create a user account for each of the desired clients. Step Two: Acquire X server software for the clients. I am a freeware fanatic, so I chose to use MI/X, available from http://tnt.microimages.com/www/html/freestuf/mix/, or my mirror, ftp.brokewing.com/pub/mix/. An additional factor that led me to choose the MI/X package is that it runs on all three platforms. Install the MI/X software. Note for Windows clients: either install the program in its own place, like C:\mix, or, if you put it in Program Files, create a shortcut directly to $BASEDIR\TNTSTART.EXE startmix (note the space); for some reason, on the 95 machines you may get a "not enough memory" message when you try to run it if you don't. Step Three: Acquire Telnet software for the clients. In my case they were already set up for telnet, from the previous serial thing. All Windows clients should already have telnet; the Mac's may or may not. If not, NCSA produces a telnet client that runs on the Mac platform. Step Four: You should be ready to go.
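As a quick preview of the step-by-step that follows, the whole client-side round trip boils down to something like the sketch below; it simply reuses the example addresses from the next section (192.162.0.1 for the X host, 192.0.0.3 for the client), so substitute your own:

# on the client: start the MI/X server first, then telnet to the X host
telnet 192.162.0.1

# once logged in on the X host (Bourne shell syntax):
DISPLAY=192.0.0.3:0.0      # the client's address, display 0, screen 0
export DISPLAY
xterm &                    # the xterm appears in the MI/X window on the client

# csh equivalent of the two DISPLAY lines:
# setenv DISPLAY 192.0.0.3:0.0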
I am sure that this whole thing could be done more elegantly, but here's what I did: * Start MI/X on the client. * Open a telnet session to the Xhost: * telnet 192.162.0.1 * After logging in, you need to tell the Xhost to display the output of a program running on the Xhost on a different machine (the client). For the Bourne shell: DISPLAY=<client address>:0.0 For example, DISPLAY=192.0.0.3:0.0 Now you need to tell the Xhost to use this environment variable for all subsequent programs. The command to accomplish this is: export DISPLAY For the csh: setenv DISPLAY <client address>:0.0 You should now be able to run any X application you want on the Xhost and have it display on your client machine. In the telnet window, to launch an xterm, type: xterm & After the xterm comes up in the MI/X window, you can close the telnet session. That's all there is to it! _________________________________________________________________ Copyright © 1998, Ron Jenkins Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ "Linux Gazette...making Linux just a little more fun!" _________________________________________________________________ Searching a Web Site with Linux By Branden Williams _________________________________________________________________ As your website grows in size, so will the number of people that visit your site. Now most of these people are just like you and me in the sense that they want to go to your site, click a button, and get exactly the information they were looking for. To serve these kinds of users a bit better, the Internet community responded with the ``Site Search'': a way to search a single website for the information you are looking for. As a system administrator, I have been asked to provide search engines for people to use on their websites so that their clients can get to their information as fast as possible. Now the trick to most search engines (Internet-wide ones included) is that they index and search entire sites. So, for instance, you are looking for used cars. You decide to look for an early '90s model Nissan truck. You get on the web, and go to AltaVista. If you do a search for ``used Nissan truck'', you will most likely come up with a few pages that have listings of cars. Now the pain comes when you go to that link and see that 400K HTML file with text listings of used trucks. You have to either go line by line until you find your choice, or, like most people, find it on your page using your browser's find command. Now wouldn't it be nice if you could just search for your used truck and get the results you are looking for in one fell swoop? A recent search CGI that I designed for a company called Resource Spectrum (http://www.spectrumm.com/) is what precipitated DocSearch. Resource Spectrum needed a solution similar to my truck analogy. They are a placement agency for high-skilled jobs that needed another alternative to posting their job listing to newsgroups. What was proposed was a searchable Internet listing of the jobs on their new website. Now, as the job listing came to us, it was in a Word document that had been exported to HTML. As I searched (no pun intended) long and hard for something that I could use, nothing turned up. All of the search engines I found only searched sites, not single documents. This is where the idea for DocSearch came from.
I needed a simple, clean way to search that single HTML document so users could get the information they needed quickly and easily. I got out the old Perl Reference and spent a few afternoons working out a solution to this problem. After a few updates, you see in front of you DocSearch 1.0.4. You can grab the latest version at ftp://ftp.inetinc.net/pub/docsearch/docsearch.tar.gz. Let's go through the code here so we can see how this works. First, before we really get into this, you need to make sure you have the CGI Library (cgi-lib.pl) installed. If you do not, you can download it from http://www.bio.cam.ac.uk/cgi-lib/. This is simply a Perl library that contains several useful functions for CGIs. Place it in your cgi-bin directory and make it world readable and executable. (chmod a+rx cgi-lib.pl) Now you can start to configure DocSearch. First off, there are a few constants that need to be set. They are in reference to the characteristics of the document you are searching. For instance... # The Document you want to search. $doc = "/path/to/my/list.html"; Set this to the absolute path of the document you are searching. # Document Title. The text to go inside the HTML <TITLE> tags. $htmltitle = "Nifty Search Results"; Set this to what you want the results page title to be. # Optional Back link. If you don't want one, make the string null. # i.e. $backlink = ""; $backlink = "http://www.inetinc.net/some.html"; If you want to provide a ``Go Back'' link, enter the URL of the file that we will be referencing. # Record delimiter. The text which separates the records. $recdelim = " "; This part is one of the most important aspects of the search. The document you are searching must have something in between the "records" to delimit the HTML document. In English, you will need to place some HTML comment or something in between each possible result of the search. In my example, MS Word put the &nbsp; tag in between all of the records by default, so I just used that as a delimiter. Next we ReadParse() our information from the HTML form that was used as a front end to our CGI. Then, to simplify things later, we go ahead and set the variable $query to be the term we are searching for. $query = $input{'term'}; This step can be repeated for each query item you would like to use to narrow your search. If you want any of these items to be optional, just add a line like this in your code. if ($query eq "") { $query = " "; } This will match virtually any record you search. Now comes a very important step. We need to make sure that any meta characters are escaped. Perl's bind operator uses meta characters to modify and change search output. We want to make sure that any characters that are entered into the form are not going to change the output of our search in any way. $query =~ s/([-+i.<>&|^%=])/\\\1/g; Boy, does that look messy! That is basically just a Regular Expression to escape all of the meta characters. Basically this will change a + into a \+. Now we need to move right along and open up our target document. When we do this, we will need to read the entire file into one variable. Then we will work from there. open (SEARCH, "$doc"); undef $/; $text = <SEARCH>; close (SEARCH); The only thing you may not be familiar with is the undef $/; statement you see there. For our search to work correctly, we must undefine the Perl variable that separates the lines of our input file. The reason this is necessary is due to the fact that we must read the entire file into one variable.
Unless this is undefined, only one line will be read. Now we will start the output of the results page. It is good to customize it and make it appealing somehow to the user. This is free-form HTML, so all you HTML guys, go at it. Now we will do the real searching job. Here is the meat of our search. You will notice there are two commented regular expressions in the search. If you do not want to display any images or show any links, you should uncomment those lines. @records = split(/$recdelim/,$text); We want to split up the file into an array of records. Each record is a valid search result, but is separate from the rest. This is where the record delimiter comes into play. foreach $record (@records) { # $record =~ s/<IMG[^>]*>//ig; # Do not display images inside this doc. if ( $record =~ /$query/i ) { print $record; $matches++; } } This basically prints out every $record that matches our search criteria. Again, you can change the number of search criteria you use by changing that if statement to something like this. if ( ($record =~ /$query/i) && ($record =~ /$anotheritem/) ) { This will try to match both queries with $record and, upon a successful match, it will dump that $record to our results page. Notice how we also increment a variable called $matches every time a match is made. This is not so much to tell the user how many different records were found as it is a count that tells us whether no matches were found, so we can tell the user that no, the system is not down; we simply did not match any records based upon that query. Now that we are done searching and displaying the results of our search, we need to do a few administrative actions to ensure that we have fully completed our job. First off, as I was mentioning before, we need to check for zero matches in our search and let the user know that we could not find anything to match his query. if ($matches eq "0") { $query =~ s/\\//g; print << "End_Again";

Sorry! "$query" was not found!

End_Again } Notice that lovely Regular Expression. Now that we have taken all of that trouble to escape those meta characters, we need to remove the escape characters. This way, when they see that their $query was not found, they will not look at it and say ``But that is not what I entered!'' Then we want to dump the HTML to disappoint the user. The only two things left to do are to end the HTML document cleanly and to allow for the back link. if ( $backlink ne "" ) { print "<HR>"; print "<A HREF=\"$backlink\">Go back</A>"; print "<BR>
"; } print << "End_Of_Footer"; End_Of_Footer All done. Now you are happy because the user is happy. Not only have you streamlined your website by allowing to search a single page, but you have increased the user's utility by giving them the results they want. The only result of this is more hits. By helping your user find the information he needs, he will tell his friends about your site. And his friends will tell their friends and so on. Putting the customer first sometimes does work! _________________________________________________________________ Copyright © 1998, Branden Williams Published in Issue 32 of Linux Gazette, September 1998 _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next _________________________________________________________________ Linux Gazette Back Page Copyright © 1998 Specialized Systems Consultants, Inc. For information regarding copying and distribution of this material see the Copying License. _________________________________________________________________ Contents: * About This Month's Authors * Not Linux _________________________________________________________________ About This Month's Authors _________________________________________________________________ Larry Ayers Larry lives on a small farm in northern Missouri, where he is currently engaged in building a timber-frame house for his family. He operates a portable band-saw mill, does general woodworking, plays the fiddle and searches for rare prairie plants, as well as growing shiitake mushrooms. He is also struggling with configuring a Usenet news server for his local ISP. David Bandel David is a Computer Network Consultant specializing in Linux, but he begrudgingly works with Windows and those ``real'' Unix boxes like DEC 5000s and Suns. When he's not working, he can be found hacking his own system or enjoying the view of Seattle from 2,500 feet up in an airplane. He welcomes your comments, criticisms, witticisms, and will be happy to further obfuscate the issue. Joe Barr Joe has worked in software development for 24 years. He has served as programmer, analyst, consultant, and manager. He started writing about the industry in 1994 and his monthly column (Papa Joe's Dweebspeak Primer) became a favorite in Austin's "Tech Connected" magazine. The Dweebspeak Primer exists today in the form of an email newsletter and website. His articles have been reprinted in places like IBM Personal Systems Magazine, the legendary e-zine phrack, and the Manchester Guardian. Jim Dennis Jim is the proprietor of Starshine Technical Services. His professional experience includes work in the technical support, quality assurance, and information services (MIS) departments of software companies like Quarterdeck, Symantec/ Peter Norton Group, and McAfee Associates -- as well as positions (field service rep) with smaller VAR's. He's been using Linux since version 0.99p10 and is an active participant on an ever-changing list of mailing lists and newsgroups. He's just started collaborating on the 2nd Edition for a book on Unix systems administration. Jim is an avid science fiction fan -- and was married at the World Science Fiction Convention in Anaheim. Michael J. Hammel Michael is a transient software engineer with a background in everything from data communications to GUI development to Interactive Cable systems--all based in Unix. His interests outside of computers include 5K/10K races, skiing, Thai food and gardening. 
He suggests if you have any serious interest in finding out more about him, you visit his home pages at http://www.csn.net/~mjhammel. You'll find out more there than you really wanted to know. Bill Henning Bill runs http://www.CPUReview.com, a computer hardware oriented site. He is a systems analyst who designs real time industrial control software for a custom engineering company in Richmond, B.C. Bill is also the proprietor of a small web design / hosting / consulting business (Web Technologies, http://webtech.door2net.com). Phil Hughes Phil Hughes is the publisher of Linux Journal, and thereby Linux Gazette. He dreams of permanently tele-commuting from his home on the Pacific coast of the Olympic Peninsula. As an employer, he is "Vicious, Evil, Mean, & Nasty, but kind of mellow" as a boss should be. Ron Jenkins Ron is the self taught, fairly unstable, and hopelessly unskilled proprietor of Blackwing Communications. He welcomes your comments, questions, and corrections. When he's not giving out crummy advice, he can usually be found warping young and old minds with what little expertise he has managed to retain. James M. Rogers James, his wife, and their pets have moved to a new home on the Olympic Peninsula In Washington State. I am now a Systems Programmer for the University of Washington Medical Center and Harbor View Medical Center. I work on the interfaces between medical computer systems. Shay Rojansky Shay Rojansky is an 18-year-old high school student about to be drafted into the Israeli Defence Forces (IDF), where he hopes to push Linux as an OS. He sometimes works in his high school as a system administrator (mainly Linux). Vincent Stemen Vincent is a programmer, Unix/network administrator, and avid Linuxer who goes snow skiing every chance he gets. The day he installed Linux version 0.12 approximately seven years ago and saw how well it ran, he bulk erased most of his floppy disks containing software for other operating systems and went out and celebrated. Martin Vermeer Martin is a European citizen born in The Netherlands in 1953 and living with his wife in Helsinki, Finland, since 1981, where he is employed as a research professor at the Finnish Geodetic Institute. His first UNIX experience was in 1984 with OS-9, running on a Dragon MC6809E home computer (64k memory, 720k disk!). He is a relative newcomer to Linux, installing RH4.0 February 1997 on his home PC and, encouraged, only a week later on his job PC. Now he runs 5.0 at home, job soon to follow. Special Linux interests: LyX, Pascal (p2c), tcl/tk. Branden R. Williams Branden is Vice President of I-Net Solutions, Inc. (http://www.inetinc.net/). There he consults with several other companies doing UNIX system and network administration, security management, and system performance tuning. When he is not in the office, he enjoys sailing, camping, and astronomy. _________________________________________________________________ Not Linux _________________________________________________________________ Thanks to all our authors, not just the ones above, but also those who wrote giving us their tips and tricks and making suggestions. Thanks also to our new mirror sites. [INLINE] The news this month and last was gathered by Ellen Dahl. Amy Kukuk put the News Byte column together for me. Thanks to them both for good and needed help. Have fun! _________________________________________________________________ Marjorie L. 
Richardson Editor, Linux Gazette, gazette@ssc.com _________________________________________________________________ [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back _________________________________________________________________ Linux Gazette Issue 32, September 1998, http://www.linuxgazette.com This page written and maintained by the Editor of Linux Gazette, gazette@ssc.com