Table of Contents:

TWDT 1 (gzipped text file) and TWDT 2 (HTML file) are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1996-2000 Specialized Systems Consultants, Inc.
The Mailbag! Write the Gazette at gazette@ssc.com
Contents:
Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue in the Tips column.
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.
Fri, 31 Mar 2000 20:57:04 -0000
From: The Strangemores <sstrange@crrstv.net>
Subject:
Do you know what the Linux kernel split is? If so, can you tell me about it?
Chantil
Sat, 01 Apr 2000 01:03:33 -0600
From: Randall E. Cook, Sr. <Randy@MNCom.Net>
Subject: Help in setting up Red Hat as a dial-up server
I have searched and searched for 2 months now and cannot get any info on how to set up a server for customers to dial into and access the Internet, with mail accounts and such. I have been to every newsgroup and discussion I can find. No one will give any information on how to set this up. The ONLY help or answer I get is: "Why do you want to be an ISP? They are too expensive to set up." Could you please publish a How-To for the beginner setting up an ISP for the first time?
Thanks in advance.
Sun, 02 Apr 2000 15:37:16 -0500
From: Dan Stroock <dws@math.mit.edu>
Subject: linux and DHCP
I have been trying, without success, to hitch my Linux box to a Linksys Etherfast cable router. I set networking configuration to use DHCP, but my machine does not get the information which it needs. Has anyone got a HOWTO page or other source of information about this sort of thing?
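A common fix on distributions of that era is to make sure the interface is actually set to DHCP and that a DHCP client (pump or dhcpcd) is installed. A minimal sketch, assuming the card is eth0 and a Red Hat-style layout (file names and paths are illustrative; other distributions differ):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=dhcp
ONBOOT=yes
```

Then restart networking (`/etc/rc.d/init.d/network restart`) or run the client by hand (`dhcpcd eth0` or `pump -i eth0`) and check the lease with `ifconfig eth0`. The DHCP mini-HOWTO at the Linux Documentation Project covers this in detail.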
Sun, 2 Apr 2000 23:31:20 +0100
From: andrew sprott <andru@btinternet.com>
Subject: sharing filesystems
hi
I have tried searching your site for 'share', 'sharing filesystem', etc., but nothing came up. Basically, I've got 6 networked machines, half of which can't take a full installation of SuSE 6.3. What I want to do is export the installation on a 20.4GB disk to the other machines, so that, say, most of /etc can be shared by the other machines.
The thing is, how can Linux be installed on the other machines without doing a separate install that takes up all the disk space on the local machines? Has anybody tackled this and written about it? The thing that appeals to me is the prospect of simply logging onto any machine and accessing my usual apps and data.
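One common approach for a setup like this is NFS: put a minimal system on each client and mount the big shared directories from the large disk over the network. A sketch, with the server name `bigbox` and the network numbers invented for illustration (some directories, such as /etc, are machine-specific and are usually kept local rather than shared):

```
# On the server, in /etc/exports:
/usr    192.168.1.0/255.255.255.0(ro)
/home   192.168.1.0/255.255.255.0(rw)

# On each client, in /etc/fstab:
bigbox:/usr    /usr    nfs   ro   0 0
bigbox:/home   /home   nfs   rw   0 0
```

Sharing /home this way also gives the "log in anywhere, same apps and data" effect; the NFS-Root and Diskless HOWTOs cover going further and running clients with no local installation at all.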
Sun, 02 Apr 2000 22:55:05 -0200
From: Rakesh Mistry <rakeshm@REMOVETHIS.netactive.co.za>
Subject: Swing on Linux
Hi
I am having trouble installing Swing 1.0.3/1.1.1 on my RH 6.0 system. I managed to set up jdk117_v3 successfully some time ago.
I have unzipped the tar.Z and placed it in the /usr/local/lib/jdk117_v3/swing-1.0.3/ directory. I have added this path to my CLASSPATH. I have also added it to a SWING_HOME variable, as well as adding a JAVA_HOME variable.
However, every time I try to compile a Java program which imports a Swing package, I get the following error:
SwingUI.java:4: Package javax.swing not found in import.
        import javax.swing.*;
SwingUI.java:6: Superclass JFrame of class SwingUI not found.
        class SwingUI extends JFrame
SwingUI.java:24: '(' expected.
        panel.setLayout(new BorderLayout);
3 errors
I have copied this code straight out of a java tutorial.
Any help would be greatly appreciated !!!
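A common cause of "Package javax.swing not found" with the JDK 1.1 + Swing combination is that CLASSPATH points at the Swing directory rather than at the class archive inside it. A sketch, assuming the unpack path the querent mentions (the archive name swingall.jar is how the Swing 1.0.x release shipped; adjust if yours differs):

```shell
# Point CLASSPATH at the jar file itself, not just its directory.
SWING_HOME=/usr/local/lib/jdk117_v3/swing-1.0.3
export CLASSPATH="$SWING_HOME/swingall.jar:$CLASSPATH"
echo "$CLASSPATH"
```

After that, `javac SwingUI.java` should be able to find javax.swing (put the export in ~/.bash_profile to make it permanent).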
Mon, 03 Apr 2000 17:16:28 +0200
From: Silvia Scarpetta <scarpetta@na.infn.it>
Subject: linux and win2000
I upgraded Windows NT to Windows 2000, and now LILO is not able to boot either Linux or Windows 2000 any more (before, I had Windows NT and Linux on the two hard disks and it worked!).
I mean LILO starts, but when I tell it to boot Windows 2000 it says:
NTLDR is missing. Does anyone know if Windows 2000 has been made deliberately incompatible with Linux? Is there a way to solve the problem?
I tried running
/sbin/lilo again (in case the MBR was damaged), but that did not work either.
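When NTLDR goes missing after an upgrade, one thing worth checking is that lilo.conf still chain-loads the partition Windows 2000 actually boots from - the upgrade may have changed which partition holds the boot loader. A hedged sketch of the relevant stanza, assuming Windows lives on the first partition of the first IDE disk (the device names are illustrative):

```
# Fragment of /etc/lilo.conf - chain-load Windows from its boot partition
other=/dev/hda1
    label=win2000
    table=/dev/hda
```

After editing, run /sbin/lilo again so the new boot map is written.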
Wed, 05 Apr 2000 10:53:22 PDT
From: Paul Grainger <psfgrainger@hotmail.com>
Subject: Interfacing with Novell Netware
Hi there, can you help me with tips on how to interface to a Novell Netware network (3.12 bindery)? I am currently running Mandrake 7 and have a 3Com Ethernet adapter (which Linux seems able to auto-configure). Whenever I try to configure my card, the system requests IP addresses, which are not relevant in this instance. I know that IPX support is provided, but what are the steps required to enable use of file and print services on the network? Thanks in anticipation of your help,
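For bindery-mode NetWare file and print service, the usual route is the ncpfs package (with IPX support enabled in the kernel); the IP addresses the configuration tool asks for are a separate matter and can be ignored for IPX. A session sketch, with the server, user, and queue names invented for illustration:

```
# Enable IPX auto-configuration (ncpfs package), then mount a volume:
ipx_configure --auto_interface=on --auto_primary=on
ncpmount -S NWSERVER -U PGRAINGER -P secret /mnt/netware

# Printing goes through nprint, e.g.:
nprint -S NWSERVER -U PGRAINGER -q LASER file.txt
```

The IPX HOWTO covers kernel options and frame types if the server isn't found automatically.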
Thu, 6 Apr 2000 23:16:16 +0200
From: Andrea <amerini@dada.it>
Subject: LILO
Hi, I'm a new user of Linux (great!!) and I have a little problem:
I have 2 hard disks, the 1st SCSI with Win 98 and the 2nd EIDE with Red Hat 6.1.
I can't start Windows from LILO (the machine does nothing) and I must switch the boot disk from the BIOS. Could you tell me, please, how to solve this little problem?
Wed, 12 Apr 2000 13:42:43 +0200
From: Otto Wyss <Otto.Wyss@eds.com>
Subject: More than one keyboard with different layout
Sorry I'm not sure if this is the right place to ask, please tell me the right place if I'm wrong.
I have 2 keyboards connected to my PC; one is an old AT-serial keyboard and the other a USB keyboard with Win98 key assignments. Now I'd like to modify the keymap so I could use the new Command ("Windows") key. Unfortunately this conflicts with the old keyboard (which I still need in case of an emergency). I would have to install 2 different keymaps, one for each keyboard, but currently the kernel (2.2.14) only allows for one keyboard.
My wish for enhancement: keyboards should be implemented as ordinary devices in the kernel (like mice), so that an arbitrary number of keyboards would be possible. Keymaps, kbdrates, etc. should be attached to keyboard devices as well.
Tue, 11 Apr 2000 17:47:22 EDT
From: <JDGIOVINCO@aol.com>
Subject: shell scripting in a "C++" based shell
I recently read your article about the basic scripting commands in the April issue. However, the bash system and I are like oil and water, because I am more familiar with programming in C++. After some searching I was able to find some incredibly informative manuals that also contained CD-ROM packages with libraries, patches and other assorted tools to help learn how to script in some of the varieties of "C"-based shell. Soon enough, my happiness was brought to a screeching halt when I glanced down at some of the prices. So I was writing to ask if maybe in your next issue you could follow up the scripting article with some basic commands in "ksh" or "zsh", or just inform me of any manuals published within a reasonable price range. Thanks
My current goal in writing the column is to concentrate on "bash" until I feel that my readers, by following the column, have reached a high enough level of proficiency that they would be interested in other options - and those may include a look at other shells. Unfortunately for your requirements, this isn't likely to happen for quite a long while. Do be aware, though, that unless you get into somewhat deep scripting stuff (co-processes, async pipelines, etc.), there isn't _that_ much difference between, say, "ksh" and "bash": "ksh" is actually a superset of "sh", and "bash" is "sh"-compatible and incorporates a number of "ksh" and "csh" features.
Since I don't know what your level of general scripting/shell expertise may be, let me toss in a perspective from my own experience: the first shell that I ever used was "sh", and it was nothing short of a battle to produce my first script, simple as it was. Later, in my rather brief flirtations with other shells, I found that learning their specific syntax was an *incremental* task - I had already learned 90+% of what I needed to write scripts for them via my experience with "sh". You too may find that it isn't only "bash" that is problematic: there is a learning curve associated with any shell - they all have their quirks. I'm certainly not trying to talk you into switching your shell preference, but you should realize that there's a "cost" associated with entering the "shell game" - and the type of shell is, in my opinion, largely irrelevant to that "cost".
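To make the "mostly shared syntax" point concrete, here is a small script that runs unmodified under sh, bash, ksh, or zsh - every construct in it belongs to the common POSIX subset:

```shell
#!/bin/sh
# Everything below is common to sh, bash, ksh and zsh.
count=0
for word in alpha beta gamma; do
    count=$((count + 1))          # arithmetic expansion
done
if [ "$count" -eq 3 ]; then       # test(1)-style conditional
    msg="counted $count words"
fi
echo "$msg"
```

This prints "counted 3 words" under any of those shells; the differences only start to matter with shell-specific extensions like arrays, co-processes, or key bindings.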
Given the nature of Linux, you'll find that the information that is freely available is copious and of high quality. This implies that any commercially available material will be a) _outstanding_ in quality (this is certainly true in my experience), and b) relatively expensive, since "quality costs". My suggestion for you is to study the free material, look for info on the Net (e.g., "ksh keybindings (vi keys)" in The Answer Guy's column, issue 51 of LG, has some good pointers), and study other people's "zsh"/"ksh" scripts (hint: use AltaVista's Advanced search to look for "#!*/bin/[kz]sh".)
By the time you exhaust those resources, you should either feel justified in your purchase of those "expensive" books - or you may decide that you've learned enough that you don't need them after all.
Wed, 12 Apr 2000 14:40:44 -0700
From: Anderson, Randy (FRM) <randy.anderson@compaq.com>
Subject: adding pseudo devices in a sunos 4.1.4 environment..
Hi, my SunOS kernel is already configured for 256 pseudo devices. My users complain about running out of them often... I know they are not using even a fraction of that number, so what gives? Do I need to add /dev device files? Recompile the (GENERIC) kernel again? Help!
thanks for any assistance..
Fri, 14 Apr 2000 11:14:17 -0500
From: David K. Daniels <daved@hutchtel.net>
Subject: Is There a Version of PC/NFS for Linux?
I have the O'Reilly book "Managing NFS and NIS" and there is a section in the back of the book called PC/NFS describing a Unix utility that enables a PC DOS machine to access a Unix machine using the NFS file system as an extended DOS file system. I am wondering if there is a Linux version of this available?
I would like to be able to run a Linux server on a TCP/IP network and have the capability of booting a PC using DOS and map a drive letter to the file system running on the Linux server for the purpose of using a utility called "Ghost" and make a ghost image of the DOS/Windows drive and drop it on the Linux server for storage.
Any information or pointers would be appreciated.
Sat, 15 Apr 2000 00:02:45 +0530
From: US Mohalanobish <usmbish@crosswinds.net>
Subject: SiS6215C graphics adapter card
Can anybody help me make my SiS6215c graphics card deliver a resolution higher than 640x480 in Linux? On Windows, I get resolutions as high as 1024x768 with 256 colors, or 800x600 with 16-bit color.
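With SiS chips under XFree86 3.3.x, the usual cure is making sure the SVGA server is in use and that the Screen section of /etc/X11/XF86Config lists the higher modes. A hedged fragment (the identifier names and mode list are illustrative; take the real horizontal/vertical ranges from your monitor's manual before adding modes):

```
# Fragment of /etc/X11/XF86Config for the XF86_SVGA server
Section "Screen"
    Driver      "svga"
    Device      "SiS 6215"
    Monitor     "My Monitor"
    Subsection "Display"
        Depth       8
        Modes       "1024x768" "800x600" "640x480"
    EndSubsection
    Subsection "Display"
        Depth       16
        Modes       "800x600" "640x480"
    EndSubsection
EndSection
```

If only 640x480 ever comes up, it is often because the Monitor section's sync ranges are too conservative for the larger modes to be accepted.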
05 Apr 2000 09:19:10 +0200
From: Andrés Hortigüela García <Andres.Hortiguela@csbs.jcyl.es>
Subject: Graphics card question (Spanish)
I need a driver for the graphics card integrated on the motherboard (Intel 810 chipset), to configure my Linux system (EsWare - Red Hat 6.0).
Can you help me? Where can I get it?
Many thanks, ... Andrés.
Fri, 14 Apr 2000 11:50:22 -0700
From: Ahmad <al-iman@net.sy>
Subject: How to hack a proxy
dear sir
We are in bad need of a program to get past the firewall, because our server is filtering most of the hackers' sites and all the free e-mail.
Your prompt positive reply would be highly appreciated.
thanks, best regards.
Sun, 16 Apr 2000 14:39:06 -0400
From: Robin and David Pickens <rdpickens@email.msn.com>
Subject: Downloading X11/ XF86 upgrades
I am new to the alternative OS world of Linux. I recently purchased an (old) beginner's version of Caldera Linux 1.3 and, through much frustration on my part as well as that of the tech reps at Mandrake, have come to realize that my computer's on-board video card is too modern for XF86Setup v3.3.2. I discovered (I think) that XF86Setup v3.3.6 is the updated one which supports my card (a Trident Blade 3D/MVP4). I went to the XFree86 web site to download the proper files and uncovered a plethora of files and folders that have left me rather confused as to which ones to use. Can anybody tell me which ones (specifically) from that site to download, or direct me to another mirror site with less confusing archives and easier-to-follow guidelines for acquiring these most needed programs? P.S. I have looked through rpmfind.net and could only find version 4.0 of XFree86 for Trident Blade cards. The tech rep at Mandrake said 4.0 would probably not help me. Any further assistance would be greatly appreciated. Thanks, David P.
Mon, 17 Apr 2000 15:34:34 +0530
From: Prakash Nair <nairp@zeenetwork.com>
Subject: Switching from Xchange Server To Linux
Hello... I hope you can help me with this. We have an MS Exchange server with 400 users. We would now like to switch to Linux as the mail server (and remove the Exchange server). How could this be done?
Please help, as this is to be done urgently.
Mon, 17 Apr 2000 14:39:39 -0700
From: Chetan Gadgil (Work - Linux) <chetan@objectstream.com>
Subject: Porting to a new language
I am interested in porting Linux to "Indic (Indian)" languages. Is there a good place to start? Could anyone please provide a brief outline of how a port to a new language/script is done?
Does Linux use GNU/gettext for the locale specific languages?
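Yes - most GNU/Linux programs use GNU gettext for message translation, so a large part of "porting to a language" is producing .po message catalogs for each program (rendering Indic scripts on the console and under X is a separate, harder problem). A sketch of the catalog workflow, with the domain name "myapp" and the source file invented for illustration:

```
# Extract translatable strings from the sources into a template:
xgettext -o myapp.pot main.c

# Copy the template to hi.po and fill in the msgstr lines, e.g.:
#   msgid  "Hello"
#   msgstr "<Hindi translation here>"

# Compile and install the catalog where gettext looks for it:
msgfmt -o myapp.mo hi.po
cp myapp.mo /usr/share/locale/hi/LC_MESSAGES/
```

The GNU gettext manual documents the PO file format and how programs select catalogs via the LANG/LC_MESSAGES environment variables.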
Tue, 18 Apr 2000 04:18:11 -0700 (PDT)
From: Phil Coval - RzR.online.FR <philippe_coval@yahoo.com>
Subject: is Debian deadbian ?
When is the next Debian out?
I've seen in a magazine that it would be out in a few weeks - but that was in January 2000, and the site isn't updated.
What's the matter?
Tue, 18 Apr 2000 09:32:26 -0500
From: Mark Contatore <contatorem@iivip.com>
Subject: Compaq help
I recently acquired a Compaq ProSignia 300; it has the on-board NCR53C810 SCSI controller. I have been totally unsuccessful in installing Red Hat Linux 6.2 - the system indicates the driver is incorrect. I am asking anyone with experience of a successful install on this platform to please help!
Wed, 19 Apr 2000 14:50:52 +0200
From: Joseph Simushi <jsimushi@pulse.com.zm>
Subject: LAN Administrator Books.
Help me with information as to where I can find the above books, or, if you offer some, please send me some at the address below.
Regards,
Simushi Joseph
LAN Administrator
PULSE Project
P.O. Box RW 51269
Lusaka
Zambia.
Tel: 295642 (W), 250236 (H)
Wed, 19 Apr 2000 15:23:27 +0100
From: Stephen Wileman <Pcrep@mancat44.freeserve.co.uk>
Subject: linux courses / books
I am an IT teacher being asked a lot of questions about the Linux operating system, in particular Red Hat Linux 6 and above.
Could you please help with any suggestions for a good basic book or material which I could use to help my students with their problems, or any recognised Linux professional qualifications I could undertake to aid my own understanding of the Linux/Red Hat operating system?
Wed, 19 Apr 2000 20:28:49 -0700 (PDT)
From: Venkat Rajagopal <venkat_rajagopal@yahoo.com>
Subject: Command line editing
Hi,
I have been trying to set command line editing (vi mode) as part of my bash shell environment and have been unsuccessful so far. You might think this is trivial - well so did I.
I am using Red Hat Linux 6.1 and wanted to use "set -o vi" in my startup scripts. I have tried all possible combinations but it JUST DOES NOT WORK. I inserted the line in /etc/profile, in my .bash_profile, in my .bashrc, etc., but I cannot get it to work. How can I get this done? This used to be a breeze in the Korn shell. Where am I going wrong?
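For what it's worth, `set -o vi` in ~/.bashrc normally does take effect for interactive shells; when it seems not to, the usual culprits are a ~/.inputrc (readline's own config file) forcing a different editing mode, or testing in a login shell that never reads .bashrc. Two equivalent ways to ask for vi mode (file contents sketched; check which files your shells actually read):

```
# In ~/.bashrc (read by interactive non-login bash shells):
set -o vi

# Or in ~/.inputrc (read by readline, hence by bash itself):
set editing-mode vi
```

Then start a new shell, press Esc at the prompt, and `k` should recall the previous command.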
Thu, 20 Apr 2000 21:59:27 +0200
From: Matej Cepl <CeplM@seznam.cz>
Subject: Other markup languages - LG #27
Hi,
I found your article on markup languages and Lout on the Linux Gazette website. Thank you for it -- as a beginner in Lout (and an emigre from LaTeX), I have greatly appreciated your open attitude towards something different than TeX.
However, I would be very interested in the other articles from the series -- on TeX and troff. When I tried to find them on the LG site, I could not find either of them. Are they available anywhere on the Web? If so, would you be so kind as to send me the URL, please?
Have a very nice day
Matej Cepl
Thu, 20 Apr 2000 16:37:53 -0700
From: Martin Perry <m.perry@dtn.ntl.com>
Subject: Screen Dump of Linux
I am writing to request a screen dump (screenshot) of what Linux looks like when it is running.
I am currently doing an HNC in Business Information Technology, and I have to give a presentation on Linux in a week's time; I would like to put a screen dump on the OHP for people to see. From what I understand, it can look either like Windows or the Mac equivalent.
I have been searching for days to find this on the net, with no luck as yet.
Sorry for any inconvenience.
Maria Perry. m.perry@dtn.ntl.com
[I think several distributions have such images on their web sites, usually in a section called "Screenshots". Go to www.linuxjournal.com, "How to Get Linux" and follow the links from there. Also, the GUI interfaces (KDE and GNOME) and window managers have screen shots on their web sites, to give you a preview of what the program will look like: www.kde.org, www.gnome.org, www.enlightenment.org, www.windowmaker.org, http://www.plig.org/xwinman/fvwm95.html, etc. -Ed.]
Fri, 21 Apr 2000 11:41:19 +1000 (EST)
From: Russ Pitman <russ@tasman.net.au>
Subject: BU backup utility
This was the subject of an excellent article in Issue 32 of the Gazette.
My only hard copy is missing and the site (http://www.crel.com) is not reachable. Also mail to vstemen@crel.com is undeliverable.
Web searching has not, for me, found any other address for bu so I came here hoping that the Gazette can help.
Does anyone know where a copy of Vincent Stemen's 'bu' can be obtained? Thanks for your time.
Fri, 21 Apr 2000 00:21:56 -0700
From: MVE <getsome@mcsi.net>
Subject: Please Help
Please help me. I am at my wits end.
I have VERY recently installed Linux, so I am very new to all of this, and perhaps I am going about this the wrong way. I want to install Oracle8i on my system. ALL the information says I have to install a JRE (JRE 1.1.6v5) in order to get Oracle to work. (NOT the JDK... the JRE.)
I CANNOT find it for the life of me!!!! PLEASE PLEASE PLEASE do not send me to Blackdown.org; they do not have it either. Do not send me to Sun, because they do not have it either. Nor does SuSE, nor does Red Hat... NOBODY!! I cannot find it. What is up with this? Is this usual? Why would a company sell me an operating system (I know, it's free), and the SAME company (Mandrake) sell me the Oracle8i program, and NOT include it in their package? (They don't have JRE 1.1.6v5 either!)
Where can I find it? I am becoming very discouraged with all the support I have heard about concerning Linux... (there doesn't seem to be ANY).
Fri, 21 Apr 2000 03:30:51 -0700 (PDT)
From: belahcene abdelkader <belahcene@yahoo.com>
Subject: troubling with ftp , telnet
Hi, can someone help me? I have a lab with several Pentium II PCs running Red Hat Linux 6.0. The installation is complete on each one, with ftp, http, telnet, etc. Ping works for all machines. I can use the Internet with Netscape from each one. I use one machine as a server with a proxy. Clearly: machine 1 has the proxy package and is connected to the Internet via modem; machines 2 and 3 are connected to the LAN and can access the Internet via machine 1. My problem is: when I want to get a file from one machine to another via ftp, the system refuses with "permission denied". Sometimes it is possible in one direction and not in the other. I have the same problem with telnet. I have a login and password on all the machines, and I can enter as root. Thank you.
Fri, 21 Apr 2000 18:48:03 BST
From: Ben Parsons <ukbenz@hotmail.com>
Subject: Help with email
Hello. I've only just really started out with Linux-Mandrake (call it Red Hat) and I wanted to know if I can get my Hotmail e-mail into, say, Pine or Elm. I looked through all the docs but they don't mention it, and in any case I don't know where to start. Cheers in advance to anyone who can help.
Tue, 25 Apr 2000 00:36:59 +0200
From: Gonzalo Aguilar <gad@reymad.com>
Subject: XFree 4.0 and internationalization
Hello, I'm a Spanish Linux user, and XFree86 4.0 is having problems with the "special characters" of my keyboard.
I cannot write letters with "´" on top (this is very important for my language), nor can I type "¨".
Those worked in XFree86 3.3.5, but now...
Do you know any tips on this? Nobody seems to know the cause, and a lot of people have the same problem. Thanks
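Dead keys (´ and ¨) disappearing after a 3.3.x to 4.0 upgrade is often a layout-configuration issue: XFree86 4.0 uses a new config file format and may have come up with a default US layout. A hedged fragment selecting the Spanish XKB layout (the Identifier name is illustrative):

```
# Fragment of XF86Config-4 - select the Spanish XKB layout
Section "InputDevice"
    Identifier  "Keyboard0"
    Driver      "keyboard"
    Option      "XkbRules"   "xfree86"
    Option      "XkbModel"   "pc105"
    Option      "XkbLayout"  "es"
EndSection
```

If the layout is already "es" and dead keys still fail, the problem may be a genuine 4.0 XKB bug, in which case an upgrade to a newer 4.0.x release is worth trying.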
Tue, 25 Apr 2000 10:43:12 +0200
From: Dominic STEUR <dominic.steur@belgacom.be>
Subject: GUI
Hello, I am kind of a beginner in the Linux world; I have little knowledge of Unix and even less of Linux. I installed Red Hat Linux 6.1 recently, and that went quite smoothly. It is on an Intel machine with a LILO boot and a Win2000 boot menu, in which I can choose 98 or 2000. Here comes my problem: when booting Linux I end up at the Bourne-shell login screen, but this is not quite what I had in mind for an interface... I had performed the GNOME workstation installation, so it should end up in an X Window environment if I am correct. When I did an upgrade installation, it became clear that there were no interface (or similar) packages installed, so I selected the lot of them and installed them. But after rebooting, the Bourne shell was back, and I am at a loss. It is probably a stupid problem with a simple solution (I hope), but I would fancy some help.
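Two things usually explain a text login after a GNOME installation: X is installed but the system boots to a text console (runlevel 3), or the X server itself isn't configured yet. A sketch of what to try, assuming Red Hat 6.1 conventions:

```
# After logging in at the text console, try starting X by hand:
startx

# If that works and you want a graphical login on every boot,
# change the default runlevel in /etc/inittab from 3 to 5:
#   id:5:initdefault:

# If startx fails with a server error, configure the X server
# first by running Xconfigurator (as root).
```

The runlevel change only takes effect after a reboot (or `telinit 5`); until then, `startx` from the console works fine.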
Tue, 25 Apr 2000 18:19:37 +0100
From: <saqib@saqib-shaikh.freeserve.co.uk>
Subject: A Problem
Dear Editor,
My name is Saqib Shaikh and I live in the UK. I have been reading Linux books for a few years now, and decided to put my knowledge into practice. I got out my CD of Slackware 3.6, and my old computer. My computer's specifications are: 486, 33 MHz processor, 4 MB RAM, 80 MB hard drive. The Slackware guide said that 4 MB was enough RAM, so I gave it a go. I made the boot and root floppies. I inserted the boot floppy, powered up, and inserted the root floppy when asked. It sat there, blank screen, doing nothing. I thought that on such an old computer it must just be taking its time. 25 minutes later I pressed Ctrl+Alt+Del. This has resulted in my computer, whenever turned on, giving the error "Cannot find ROM basic"! It does not even check the floppy disk. One last thing to mention: before starting the install I used fdisk to remove all partitions, and used fdisk /mbr to remove the MBR. I would be extremely grateful for your help. I do not mind throwing the computer away, but I would rather like to learn the cause for the future.
Regards, Saqib Shaikh
Tue, 25 Apr 2000 18:19:37 +0100
From: Linux Gazette <gazette@ssc.com>
Hi,
I want to take a backup on an HP 5GB DAT drive.
Could anybody please help me with how to configure it?
Thanks in advance.
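Assuming the DAT drive is SCSI (most 5GB DDS drives are), the kernel's SCSI tape driver presents it as /dev/st0 (rewinding) and /dev/nst0 (non-rewinding) once SCSI controller and "st" tape support are loaded, and tar can write to it directly. A sketch (device name assumed):

```
# Write a backup of /home to the first SCSI tape drive:
tar cvf /dev/st0 /home

# List what is on the tape:
tar tvf /dev/st0

# Tape positioning and status are handled with mt:
mt -f /dev/st0 status
mt -f /dev/st0 rewind
```

Check `dmesg` after boot for a line identifying the drive; if nothing appears, the SCSI adapter or st module is the first thing to fix.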
[I inadvertently cut off the querent's name and e-mail address. Please send answers to the Gazette. -Ed.]
Tue, 25 Apr 2000 18:19:37 +0100
From: Linux Gazette <gazette@ssc.com>
The following questions received this month are answered in the
Linux Gazette FAQ:
Wed, 26 Apr 2000 10:52:35 +0800
From: Kana Krishna <Kana_Krishna@netcel360.com>
My name is Kanagaraj and I'm from Malaysia. Currently I'm doing my degree in Computer Science at a local university here. I need help creating a script that can log into a telnet or ftp server (with user name and password) to copy a file (a log file) and send it to a PC that is connected to the network. What I really need to do is:
I need to automate the process by scripting for one of my projects, and I'm really having a tough time doing it. I have to connect from an MS-DOS environment. As I was looking for information or for somebody to help me, I found your e-mail address on one of the web sites. It would be nice if you could help me. Bye
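On the Unix side, the stock ftp client can be driven non-interactively with a here-document, which is one common way to script this kind of transfer; the host, account, and file names below are invented for illustration. (Note that embedding a password in a script or a ~/.netrc is a security trade-off.)

```
#!/bin/sh
# Fetch a log file from an ftp server without manual typing.
ftp -n ftp.example.com <<'END'
user myname mypassword
binary
get server.log /tmp/server.log
quit
END
```

The MS-DOS ftp.exe supports something similar via `ftp -s:script.txt`, where script.txt holds the same command sequence.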
Wed, 26 Apr 2000 18:42:50 -0400
From: Aurelio Martínez Dalis <aureliomd@cantv.net>
My name is Aurelio Martínez, from Venezuela, Latin America. I speak only a little English, and I am a Linux beginner. I would like to know if there exists any video system for Linux other than X11 - free or commercial, under development or stable. Can you help me? Thanks.
Fri, 28 Apr 2000 14:42:50 -0400
From: Roland Glenn McIntosh <roland@sunriselabs.com>
Subject: Two problems - change password with Eudora, how to APOP?
I'm using the IMAP package, whichever version comes with Red Hat Linux 6.2. I'd like to be able to change my mail password on the server from the client, using Eudora's "change password" feature.
I'd also like to be able to use APOP authentication, though I haven't seen how to set this up anyplace on the server side. Please help!
Fri, 28 Apr 2000 15:05:17 -0700
From: Julio <axios@dccnet.com>
Subject: leading distributions
hello there folks,
thank you for the quality publication
would you please help me find information on the leading distributions of Linux?
I have looked everywhere I can think of, including linuxorg, linux this and linux that. Also IDC and IDG. Internet.com, cnet...
where oh where can I find a simple explanation of the top distributions: what their market share is, how many copies each has sold, and - if it is broken down by continent - then all the better?
sorry to bother you, but after 3 days of fruitless searching I just came to the conclusion that I should start asking people who are likely to know
I am another new convert - I am having a dual processor Linux machine built as I type this and will deep six Windows for good. Finally. And good riddance.
please help if you can, even if you don't know that answer, please direct me to somebody that does know the answers to the above questions.
Mon, 3 Apr 2000 23:27:54 -0400
From: Pierre Abbat <phma@oltronics.net>
Subject: Best Linux and BestCrypt
Best Linux is at Technology Center Hermia, Tampere. BestCrypt is by Jetico, which is on Hermiankatu, Tampere. Any connection?
phma
Tue, 11 Apr 2000 15:23:46 GMT
From: Harry <harryw@thegenstore.com>
Subject: Good work.
Hi
I read the Linux Gazette regularly, but I hadn't read it for the few months just passed. I read the new cartoon Helpdex, and really liked it. So much in fact that I decided it was worth e-mailing you to tell you that I think it's a great addition to a great 'Zine.
Keep up the good work.
Mon, 24 Apr 2000 17:45:25 GMT
From: Michael Williams <iamalsogod@hotmail.com>
Subject: Newbie installation tips and reorganizing the tech support columns
[These are excerpts from a long conversation. My proposal is near the end. Readers: please send in your suggestions or offers to help! -Ed.]
How about setting up a regular section where people e-mail their problems with setting up Linux, especially on a machine that already runs Windows (along with the solutions, of course)? I'm suggesting this because of the problems I found when installing Linux - I believe this is a major factor in stopping people from using the operating system. It wouldn't be immediate, but I would be happy to put it all together if you would just mention it in the next issue.
How would this be different from the Mailbag/2-cent Tips and The Answer Guy? Do you wish the installation questions moved all together under their own heading? Or what is it you're looking for?
Okay....
1. It is different from The Answer Guy/2-Cent Tips, as it allows the readers to offer their solutions to problems. As good as he is, The Answer Guy cannot answer every problem that may arise, simply because of the range of hardware available. If enough people responded, it would end up as a pretty comprehensive list of problems that may arise during a Linux installation. I see your point - it is fairly similar to 2-Cent Tips. However, it would be based purely around installation issues.
2. Yes, I believe that the installation issues should be moved under a separate heading. This is by far the most difficult/traumatic part of Linux (in my view), especially when there's another OS involved (i.e., Windows). This put me off installing Linux for almost two whole years. Those were two wasted years - there should definitely be a comprehensive and detailed guide to installing Linux (yes, I know they do exist, but I haven't seen any that allow user contribution on this scale).
You are very right about people putting off installing Linux because of potential installation problems, and how this is "wasted time" when they could be learning the OS. Unfortunately, even the most comprehensive book will not cover all situations.
I'm cc'ing Jim Dennis and Heather Stern (the Answer Guy and HTML Gal) and Margie Richardson (the Ruler of the Gazette) to get their input on this.
A good portion of our questions are indeed first-time user issues, and another good portion deals with adding hardware. I'm unsure whether trying to separate those questions out would be worthwhile. The thing is, the technical details regarding first-time installation also come back when you add new hardware, install Linux on a different computer, etc.
Would you be interested in coordinating the installation- and hardware- related questions in the Mailbag and 2-Cent Tips?
You could also build something like a knowledge-base index based on subject with links to the letters, if you're ambitious. This would be something to help newbies find the information they need.
The LG FAQ also has a section for questions that come up so frequently (like Winmodems) that Jim, Heather and I get tired of repeating them, so we just point people to the FAQ. If you'd like to augment that section of the FAQ, it might help some readers. (Now if readers would only realize the FAQ exists. It seems that links on every page and even a link in the blurb about how to submit a question doesn't help....)
Also, with many querents it's not clear whether they'd fall into the category of "first-time user" - they didn't say, so we'd be presuming to say so. And a very experienced person is often new to the one aspect they're asking about.
The readers already are supposed to be putting forward their own problems and solutions with 2cent Tips; but we're getting an increase of people sending raw Tips to The Answer Guy, usually inspired by a previous letter in his column.
My personal inclination would lean toward, if Michael's willing to coordinate it, sprouting Tips (short answers only), Answer Guy, and the Clueful Hoard (i.e. answerguy like answers from the readership, to technical questions to the Mailbag) into its own meta-section, with the FAQ and/or sorted best answers prominently bulleted below these. For such an endeavor I'd be happy to throw together some extra graphics, and send him the current edition of the AnswerGuy preprocessing script, with some docs on how to use it effectively :)
This would mean that some things that presently get published as Tips, and some messages that come to The Answer Guy, would be moved to the Clueful Hoard.
Jim had originally (way back in the teen issues) figured the Answer Guy would someday become an Answer Gang. This is one way to do it. Another way to do it would be to turn Answer Guy into a moderated list where the querent gets a consensus or best answer from the Gang. I have on occasion piped in an editorial comment as well...
It's hard to tell who uses Past Answers since if they got their answer, they generally don't email us...
I made an attempt at sorting Past Answers into topics and Michael, you're welcome to look at them, even, to become their maintainer (I'm usually a month or two behind on them), and for the FAQ too. Deciding on where to split the topics can be difficult, even if you're willing to link a question multiple times. It's behind (3 months I think ;P) but -All- the answers the Answer Guy wrote up to the last maintenance visit I made to them, are in the Past Answers.
I'm hearing lots of good suggestions. Let's think about it for a month before deciding what to do. We have two requests: (1) giving Linux newbies better access to information about configuring their hardware, and (2) a general reorganizing of the tech support columns.
It's clear that the Answer Guy column is better organized than the Mailbag/2-Cent Tips in regard to finding the messages that deal with your question, so I'd like to consider moving all the tech support questions to that framework. The Mailbag would then be just for general mail (which usually means mail about the Gazette), and 2-Cent Tips would be for standalone tips: nifty shell scripts, cool .bashrc settings, etc. (I really enjoy doing the standalone tips, so nobody's taking that part away from me. :)
How about creating a regular section entirely devoted to newbies? It would almost be like a 'sub-magazine' in its own right, with its own sections. To go into more detail...

www.linuxnewbie.org and www.linuxstart.com (multilingual)--both aimed at newbies--already exist.
It would be split up into 5 separate sections:

1. Distribution Reviews (which would have an archive of distribution reviews as well as new ones)

Just another article type, really.
2. A newbie version of the answer guy (all newbie questions would go here)
The Answer Guy is popular because he is (1) an ordinary person answering ordinary people, and (2) willing to chase down a lot of weird answers (his experience can lead him to give much better answers than a newbie would know to ask for).
I think it is harmful rather than helpful to suggest that newbies should somehow get shoved into a corner. (What, they're not "allowed" to speak to the Answer Guy? The same one who actually -answers- even when he gently flames the poor weener who is not quite on-topic, or has asked in a creatively misunderstanding way? I don't think so.) And lest you suggest that I have no experience with them: I teach on Mondays, to a few people who are newbies to Linux and to computers at the same time. If you assumed that they even know what an A: is, you'd do them a terrible disservice.
I have no objections to a transformation into an Answer Gang (multiple gurus in the column, maybe more bubble types?) or a Clueful Hoard (someone edits answers from the multitude into a similar column, while the wizardly Answer Guy answers his as well), but I have a *serious* objection to completely restructuring the whole webzine.
3. Reader's Tips (this is basically my original idea, concentrating mainly on installation and compatibility issues. It too would have an archive section split up into easily findable topics)
Tips already exists as a column. If you're interested in becoming a maintainer for it that would probably be great.
4. Programming for Newbies (programming is an -extremely- important part of Linux. This section would not concentrate on the more complex and specific issues; it would deal with more general, introductory topics and contain links to reference material.)
This could easily become a longterm column of its own, the transformation of one unfamiliar with programming into a script wizard and junior programmer. Good idea.
5. Feature article (each month it would contain a different feature, e.g. setting up Linux under Windows, etc.)
You're welcome to contribute ordinary articles to the Gazette during any month whatsoever, and if you can encourage others to do so also, more power to you.
Didn't we used to have a "weekend mechanic" section?
Of course, I would be happy to moderate and design this with a little help. It would not be huge, certainly not the size of the magazine itself. If you want to encourage people to use Linux and get the most out of it, a section like this would be great. I know it is a lot more than I originally suggested, but I for one certainly believe it would be a good idea.
On the one hand I want to encourage the enthusiasm. On the other, I'd like to note that it's a lot of work merely to coordinate the Answer Guy letters into one column. I think at one point it was about half the work in the whole magazine; my taking it over from Marjorie both made TAG look better and allowed Marjorie some breathing room to make the Gazette better. I do not honestly believe that one person can do all of this that you describe without ramping up to it. Though you claim it'd be smaller than the zine, it sounds bigger than the early issues of it, and Marjorie had her hands full every month back then too.
Take over the FAQs and Past Answers and mush them together nicely, or start writing articles regularly. Heck, if you can manage to do both of those every month without going completely bonkers, maybe a "section for newbies" will be completely and utterly unnecessary, because they will tend to find what they are looking for.
I would be happy to help out in any way that I can, just tell me what to do. :-) Your comments were justified - it would involve a huge remake of the overall layout and a considerable amount of work. Thanks for your time. :) (No hard feelings, by the way.)
I think it is harmful rather than helpful to suggest that newbies should somehow get shoved into a corner
I agree with Heather here. Everybody is a veteran at some things and a newbie at others.
Here's a proposal:
Now to elaborate.
This would help newbies (and veterans) find the articles/letters relevant to their problem. We'd have to decide on categories (e.g., Network/PPP, Hardware/Video Cards, XWindows).
The back end for this is partially covered: each article and TAG answer already has its own URL, and some tips (though not in recent issues) have their own anchor links as well. Somebody just needs to categorize the items and create the entry links in the index. For tips without their own anchor link (i.e., all the recent issues), we'd just have to link to the page.
If we can build a framework that allows contributions from home, then readers can submit, say, a text file containing all the index entries for issue X (category, link title, URL), and a script can merge these into the index. I can categorize the articles for each current issue, and the Answer Gang can do the same for the tips, and volunteers can do the back issues gradually one by one.
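The merge script itself could be quite small. Here is one possible sketch; the one-entry-per-line "category|title|URL" submission format and the function names are assumptions for illustration only, not an agreed-upon format:

```python
#!/usr/bin/env python
# Hypothetical sketch of the index-merge script described above.
# Readers would submit a text file with one entry per line in an
# assumed "category|link title|URL" format; this merges the entries
# into a per-topic index and renders it as simple HTML.

def merge_index(lines):
    """Group (title, url) pairs under their category, both sorted."""
    index = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blank lines and comments
        category, title, url = (f.strip() for f in line.split("|", 2))
        index.setdefault(category, []).append((title, url))
    return {cat: sorted(entries) for cat, entries in sorted(index.items())}

def render_html(index):
    """Emit a heading plus link list per category for the index page."""
    out = []
    for cat, entries in index.items():
        out.append("<h3>%s</h3>\n<ul>" % cat)
        for title, url in entries:
            out.append('<li><a href="%s">%s</a></li>' % (url, title))
        out.append("</ul>")
    return "\n".join(out)

if __name__ == "__main__":
    sample = [
        "Network/PPP|Dial-up server setup|issue53/tag1.html",
        "Hardware/Video Cards|XFree86 and the i810|issue53/tag2.html",
    ]
    print(render_html(merge_index(sample)))
```

A volunteer could run this over the per-issue files and paste the output into the index page, so nobody has to hand-edit HTML for every back issue.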
We need to get more people working on this before we all get burned out.
Heather, can you and Jim propose some logistics on how we could coordinate keeping the Gang together and getting each question to the Hoard and moderating the answers? We first need to know what needs to be done, then we can figure out who will do what.
This will have to wait until the Answer Gang is ready to take it on.
This will take care of itself as potential authors propose things. We can list in the Mailbag what series are missing and desired.
This could easily become a longterm column of its own, the transformation of one unfamiliar with programming into a script wizard and junior programmer. Good idea.

We have two articles this issue on shell scripting. If the authors would like to put their heads together, perhaps they can come up with some ideas and manpower for a series or two. Programming of course covers several areas: shell, scripting languages (Python, Perl, etc.), C-like languages, assembly/low-level stuff, and others. One series would probably be able to cover only one of those.
Didn't we used to have a "weekend mechanic" section?

We did, but the author, John Fisk, is no longer available. If somebody wishes to revive it, that would be great.
Contents:
The May issue of Linux Journal is on the newsstands now. This issue focuses on Programming and includes a Python supplement.
Linux Journal has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue73/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/index.html.
For Subscribers Only: Linux Journal archives are available on-line at http://interactive.linuxjournal.com/
SAN JOSE, Calif. - March 6, 2000 - Lynx Real-Time Systems, Inc., today announced delivery of Hewlett-Packard Company's (HP) ChaiVM, embedded virtual-machine technology on Lynx' BlueCat Linux operating system (OS). BlueCat users can now field soft real-time Java applications in a wide range of products using the Java-compliant embedded virtual machine from HP.
BlueCat offers fast, interpretive byte-code execution and a Java native interface to bind Java threads to BlueCat pthreads for deterministic scheduling. BlueCat features a reduced run-time footprint, as small as 600K, and concurrent, incremental garbage collection, contributing to predictable soft real-time performance.
Following the launch of Corel LINUX OS in November 1999, Corel's U.S. retail market share for Linux increased more than eight times to 19.3 per cent as of February 2000. Prior to the release of Corel LINUX OS, the company held 2.3 per cent of the retail market share based on sales of Corel WordPerfect 8 for Linux. Corel also released WordPerfect Office 2000 for Linux in March.
The free Corel LINUX OS Download is NOW available! Check it out at http://linux.corel.com/
Debian has recently added another machine to its computing resources. The system is an UltraSPARC 60 with dual 360MHz CPUs and 512MB of RAM. It was donated by Sun Microsystems.

The system is currently running Debian's frozen SPARC GNU/Linux distribution, potato (a.k.a. Debian 2.2), utilizing a 2.2.15-pre11 SMP kernel. The machine is being hosted at VisiNet.
Debian wishes to thank all of the contributors of the system and hosting site as well as the developers who invested time and effort into testing and configuring the system for Debian's network. Thank you!
INDIANAPOLIS - March 27, 2000 - Macmillan USA, the Place for Linux (www.placeforlinux.com), announced Secure Server 7.0 for professional server administrators. Macmillan's new product is a secure Linux web server built within the new Linux®-Mandrake(tm) 7.0 operating system. With Secure Server 7.0, managers of mid-level traffic web sites will have a secure server solution.
Secure Server 7.0 provides graphical tools for easy Linux installation and disk partitioning. The Apache-based web server utilizes RSA's BSAFE® SSL-C technology - the best technology available for encryption and security. Secure Server 7.0 is designed for the Linux professional responsible for managing an e-commerce, intranet or any web site requiring security. Additional tools, utilities, and documentation round out the product, providing more value and functionality.
Secure Server 7.0 is available now at an MSRP of U.S. $149.95.
NUREMBERG, Germany -- March 13, 2000 -- SuSE Linux today announced that it has designated VA Linux Systems' SourceForge, the world's largest Open Source development center, as a primary mirror for ftp.suse.com, to help improve the availability of SuSE Linux via download.
SuSE Linux is available for download at ftp://download.sourceforge.net/pub/suse/, which carries the full FTP version of SuSE Linux, as well as ISO images of the evaluation version, updates and fixes.
Further, SuSE Linux is now offered as a platform on the SourceForge CompileFarm, a unique service that gives Open Source developers a convenient way to build and test applications on multiple versions of the Linux and BSD operating systems over the Internet. The SourceForge CompileFarm enables Open Source developers to automatically create packages that can be installed on SuSE Linux using SuSE's YaST installation tool, without having to compile the programs manually.
SuSE announced a deal with Fujitsu Siemens Computers, Siemens Business Services and Siemens IT Service to deliver SuSE Linux-based systems with complete customer and sales support. With a global reach, the Fujitsu/Siemens and SuSE agreement allows the above partners to deliver an encompassing enterprise Linux solution to thousands of potential customers.
SuSE will distribute a free CD with the June 2000 issue of MacTech Magazine. This CD will be a fully working distribution of Linux on the 2.2.14 kernel, with SuSE's powerful installation tool as well as other open-source software. The CD does not expire and can be distributed freely.
SAN MATEO, CA and HANNOVER, GERMANY - March 14, 2000 - SuSE will now bundle Enlighten Software Solutions, Inc.'s Linux System Monitoring and Reporting technology with the SuSE Linux 6.4 for Intel distribution. When enabled, the Enlighten Linux Monitoring Agent will be able to monitor and report on critical Linux system and operating conditions, including processor and memory utilization, changes in hardware and software configuration, and increases in network errors.
Forum of Free Software
May 4-5, 2000, Porto Alegre, RS, Brazil
English: http://www.softwarelivre.rs.gov.br/welc_ing.html
Portuguese: http://www.softwarelivre.rs.gov.br/welc_port.html

HPC Linux 2000: Workshop on High-Performance Computing with Linux Platforms
May 14-17, 2000, Beijing, China
www.csis.hku.hk/~clwang/HPCLinux2000.html
(In conjunction with HPC-ASIA 2000: The Fourth International Conference/Exhibition on High Performance Computing in Asia-Pacific Region)

Linux Canada
May 15-18, 2000, Toronto, Canada
www.linuxcanadaexpo.com

Converge 2000
May 17-18, 2000, Alberta, Canada
www.converge2000.com

SANE 2000: 2nd International SANE (System Administration and Networking) Conference
May 22-25, 2000, MECC, Maastricht, The Netherlands
www.nluug.nl/events/sane2000/index.html

ISPCON
May 23-25, 2000, Orlando, FL
www.ispcon.internet.com

Strictly Business Expo
June 7-9, 2000, Minneapolis, MN
www.strictly-business.com

USENIX
June 19-23, 2000, San Diego, CA
www.usenix.org/events/usenix2000/

LinuxFest
June 20-24, 2000, Kansas City, KS
www.linuxfest.com

PC Expo
June 27-29, 2000, New York, NY
www.pcexpo.com

LinuxConference
June 27-28, 2000, Zürich, Switzerland
www.linux-conference.ch

"Libre" Software Meeting #1 (Rencontres mondiales du logiciel libre), sponsored by ABUL (Linux Users Bordeaux Association)
July 5-9, 2000, Bordeaux, France
French: www.abul.org/rmll1-fr.html
English: www.abul.org/rmll1-uk.html

Summer COMDEX
July 12-14, 2000, Toronto, Canada
www.zdevents.com/comdex

O'Reilly/2000 Open Source Software Convention
July 17-20, 2000, Monterey, CA
conferences.oreilly.com/convention2000.html

Atlanta Linux Showcase
October 10-14, 2000, Atlanta, GA
www.linuxshowcase.org

Web 2000
November 1-3, 2000 (location and URL unknown at present)

Fall COMDEX
November 13-17, 2000, Las Vegas, NV
www.zdevents.com/comdex

USENIX Winter - LISA 2000
December 3-8, 2000, New Orleans, LA
www.usenix.org

Linux Expo
San Jose, CA (dates and URL unknown at present)

ISPCON
San Jose, CA (dates unknown at present)
www.ispcon.internet.com
[Magic Software sent in these beautiful penguin photos from Antarctica.
The pictures were taken when Mike McMillin won the Magic for Linux Really Cool Contest and embarked on his prize--an 18-day cruise around Antarctica and its surrounding islands.
Thanks, Magic! -Ed.]
Magic introduced eService, the Company's new Web-based, enterprise-level customer service management system that allows companies to manage their service departments as profit centers. The new product, which marks the debut of Magic's new customer relationship management (CRM) suite, streamlines service workflow and provides companies a comprehensive picture of their service departments in real time.
In addition, Magic eService reduces costs by making it possible for companies to employ cost-effective "virtual support centers," where service agents can work from their own homes around the world. Virtual support centers also allow the organization to easily provide 24-hour, seven day a week support through "follow-the-sun" service that utilizes the availability of the Internet.
Magic has also signed a deal with MiTAC Europe Ltd. Powered by Magic's eMerchant, MiTAC's web site will provide consumers easy access to a wide range of mall-type stores through a three-dimensional interface that helps simulate a true store-to-store mall shopping experience for the visitor. It is a site that MiTAC expects will revolutionize the design and convenience of online shopping sites. eMerchant is available for Linux.
STOCKHOLM, SWEDEN - April 17, 2000 - Pileofpcs.Org has been created in an effort to address some of the barriers that are limiting the spread of Beowulf-class cluster-based supercomputers. It is dedicated to the proliferation of cluster-based supercomputing (i.e., Beowulf-class computers) by creating, promoting and sponsoring the Open Source development of distributions, applications and tools comparable to those sold by traditional supercomputer vendors. All three of our initial projects are being hosted at Sourceforge.net.
"PileofPcs.Org hopes to hasten the day that clusters of cheap computers become more useful. Our intention is to apply all the technology, resources and volunteers we can to help cluster-based high-performance computing grow as fast as it possibly can. Furthermore, PileofPcs.Org is embedded in, and wedded to, the Open Source philosophy and community," said Dr. Terrence E. Brown, Founder/Executive Director.
We are doing this in a number of ways. First, we are creating a new Linux distribution (and tools) that will allow anyone to easily create a general-purpose supercomputer - a vanilla Beowulf - without being a Linux programming expert. Second, since a supercomputer is worthless unless it does something, we are also - perhaps more importantly - sponsoring the development of a wide range of useful applications, both parallel and parametric. Additionally, one of the biggest problems with deploying scalable, production-class superclusters is the lack of mature, tested management tools comparable to what the traditional supercomputer vendors provide. PileofPcs.Org aims to change this as well.
Current projects:

1. A new Linux distribution that will allow those with limited technical knowledge to create a vanilla Beowulf-class supercomputer. SuperClustor Linux is hosted here: http://sourceforge.net/project/?group_id=4302
2. An open-source base application that delivers the high level of performance required for parametric executions by distributing the jobs over a computer cluster and/or network. This application will allow users of cluster systems to use already-existing programs with little or no rewriting. Commercial products that do similar tasks include Clustor and EnFuzion; this would be an open-source alternative to them. OpenClustor is hosted here: http://sourceforge.net/project/?group_id=4303
3. An open-source base application that would enable users with no particular knowledge of Linux to set up, configure and manage a Linux cluster, with ease of use such that a simple click is enough to add or remove nodes and to monitor processor loads and temperatures. Users will no longer be required to allocate specific resources to build and maintain their cluster supercomputer. A commercial product of this nature is Alinka's Raisin. SuperCluster Manager is hosted here: http://sourceforge.net/project/?group_id=4304
PileofPcs.Org is looking to support and encourage the development of many other application and/or tool projects, including two special types: those requiring parallelization and those supporting parametric execution. We want to spur the development of applications for all types of situations including, but not limited to, science, finance, multimedia, bioinformatics, statistics, weather, data mining, design, neural networks, modelling, etc.
1. Parallelized applications - Although efficient parallelization is a property of the specific Beowulf computer, we believe that we can promote the creation of pre-parallelized applications. While this will be a challenge, creating a repository for pre-parallelized and parallelized code will help speed the development of usable applications for all.
2. Applications that support parametric execution - Parametric executions require that the same application be executed numerous times: a single application is run under a wide range of input conditions, and the results of these different runs are collected together. Parametric executions are ideally suited to run on large computing clusters, since they produce a lot of jobs, often exceeding thousands. These applications will allow users to tap the power of distributed computing.
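For readers unfamiliar with the idea, parametric execution can be sketched in a few lines of Python. The command name and parameter flags below are invented for the example, and a real tool like Clustor would farm each job out to a cluster node rather than run them serially on one machine:

```python
#!/usr/bin/env python
# Illustrative sketch of parametric execution: run the same program
# once per combination of input parameters and collect the results.
import itertools
import subprocess

def parametric_run(command, param_grid, runner=None):
    """Run `command` once per combination in `param_grid`.

    param_grid maps a parameter name to a list of values to sweep.
    Returns a list of (params, output) pairs. `runner` lets a caller
    substitute a job-dispatch mechanism (e.g., a cluster scheduler)
    for the default local subprocess call.
    """
    runner = runner or (lambda cmd: subprocess.run(
        cmd, capture_output=True, text=True).stdout)
    names = sorted(param_grid)
    results = []
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        cmd = command + ["--%s=%s" % (n, params[n]) for n in names]
        results.append((params, runner(cmd)))
    return results
```

A sweep of two values of one parameter and three of another would generate six independent jobs, which is exactly the shape of workload a cluster absorbs well.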
PileofPcs.Org is looking for Project leaders, developers, html designers, and all others interested in the PileofPcs.Org and Open Source movement. PileofPcs.Org is also looking for software and hardware sponsors to facilitate the rapid development and deployment of our efforts.
Contact: Dr. Terrence E. Brown
drbrown@pileofpcs.org
www.PileofPcs.Org
+46 8 790 6174
SCOTTS VALLEY, CA and MOUNTAIN VIEW, CA - April 3, 2000 - Seagate Technology, Inc., announced the first of a suite of Linux-based server appliances targeted at Internet and Application Service Providers (ISPs and ASPs), through a strategic partnership with Cobalt Networks, Inc. The first Seagate Server Appliance Solution, scheduled to be available this summer, is an easy-to-use, cost-effective solution that provides scalable storage and reliable data protection for ISPs and ASPs. It enables ISPs and ASPs to generate incremental revenue by "vending" storage and data-protection applications to their small and medium-sized customers.
http://www.seagate.com
http://www.cobalt.com
FREMONT, Calif., March 30 - Agate Technologies, Inc. today announced the release of its popular HotData Shuttle(TM) hot-swap IDE Plug & Play solution, designed to support the Linux operating system on Intel-based workstations and Internet server appliances.

IDE hot swap is an industry term for technology that allows a component of a computer, such as a hard drive, to be attached or detached, physically and electrically, without impeding the performance or state of the computer. Agate provides this hot-swap data-transfer capability by integrating its proprietary HotChip(TM) ASIC with an interface printed circuit board (PCB) mounted in a generic device bay. The end result is the only true hot-swap solution available today.

Priced at $49.95 (U.S.), HotData Shuttle for Linux comes with shuttle bay/tray, software driver, and mounting kit. HotData Shuttle(TM) for Linux will be marketed and distributed through Agate's wholly owned subsidiary, ei Corporation.
NORTH QUINCY, MASS - Wednesday, April 5, 2000 - Jeff Morris, President of the Xensei Corporation (http://www.xensei.com), announced today that the company has reached a service-level milestone that will allow it to begin rolling out high-availability web hosting, an emerging service level that has become business-critical for eCommerce. High-availability hosting is required by successful eCommerce sites to ensure that the site will be "up", or open for business, for a pre-determined percentage of time, often guaranteed in a service-level agreement (SLA).
He commented that, "Our customers have certainly appreciated the increased reliability of our hosting service as network availability has increased to 99.982% over the past year, and 100.000% over the last 90 days."
There are multiple grades of high-availability hosting; they are named according to the number of "nines" in the uptime percentage. 90% is one nine, or Class I; 99% is two nines, or Class II; 99.9% is three nines, or Class III; and so on. Class V is 99.999% and means that the site is down only 5 minutes per year, as opposed to Class I, which means that your site is down for 876 hours per year (or 73 hours per month). Most hosting providers are currently providing Class I or lower hosting. Few offer the guarantee of an SLA.
Class I hosting is most appropriate for companies interested in maintaining an Internet presence that is informative in nature. Class II hosting is recommended for companies who are outsourcing their e-mail system or doing P.O.S. retailing over the Internet. Class IV hosting is required by companies involved in manufacturing, utilities, telecommunications customer service, or whose business is strictly eCommerce. Class V hosting is business-critical to health systems, satellite navigation, reservation systems, banking (EFT and ATM transactions) and financial securities trading. Class VI (99.9999%) hosting is used by defense systems in launch readiness.
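The "nines" arithmetic above is easy to verify yourself; a few lines of Python (the class labels simply follow the convention described in the press release):

```python
# Verify the downtime figures quoted for each availability class:
# allowed downtime = (1 - uptime fraction) * hours in a year.

HOURS_PER_YEAR = 24 * 365  # 8760 hours

def downtime(availability_pct):
    """Return (hours/year, minutes/year) of allowed downtime."""
    down_fraction = 1 - availability_pct / 100.0
    hours = HOURS_PER_YEAR * down_fraction
    return hours, hours * 60

for cls, pct in [("I", 90.0), ("II", 99.0), ("III", 99.9),
                 ("V", 99.999), ("VI", 99.9999)]:
    h, m = downtime(pct)
    print("Class %-3s %9.4f%%  %10.4f h/yr  %10.2f min/yr" % (cls, pct, h, m))
```

This confirms the figures in the text: Class I (90%) allows 876 hours of downtime per year, and Class V (99.999%) allows roughly 5.3 minutes per year.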
DENVER - The first Linux product orders from LinuxMall.com's web site are being filled through Frank Kasper & Associates' Minneapolis, Minn. warehouse. The fulfillment operation is a key component of the recently announced merger between LinuxMall.com and Frank Kasper & Associates.
LinuxMall sponsored all four Linux Business Expo Community Hubs. Linux Community Hubs provide free booth space to non-profit organizations that make significant contributions to the Linux and Open Source movement.
DENVER - Tux the penguin and Dust Puppy, the hottest celebrities in the IT technical community, have announced their partnership agreement. Geeks from around the Internet are gathering to support the cuddly mascots in the joint venture. The partnership between Tux, the Linux mascot and User Friendly's Dust Puppy clears the way for a cross-marketing agreement between LinuxMall.com and Userfriendly.org.
"I'm thrilled about the Little Guy and my new friends at User Friendly," said Tux in response to the agreement. "My favorite thing to do after eating a few gallons of raw herring, is to curl up with Dust Puppy and the cast of User Friendly. Now I'll get some laughs first-hand."
"Tux and the whole Linux community are way-cool," the Canadian Dust Puppy remarked. "I'm looking forward to some major appearances together in the future."
Montreal, April 18, 2000 - The first ever Linux Expo North America was a marked success. Close to 4,500 visitors made their way through the snow storm to attend the Expo at Palais des Congrès. The slight shortfall in the number of visitors due to Mother Nature's whimsy was more than made up for in the discernible quality of visitors. The success of this first edition set the tone for future shows in Toronto in October and in Montreal in April 2001.
"The comments we've received from the majority of our 102 exhibitors confirm that our choice of quality versus quantity of visitors was indeed the best strategy for positioning our event as one of the great shows on the Linux North America circuit," declared Stéphane Labrouche, V.P. and Director General of Sky Events, the show's organizers.
Linux Expo North America is already scheduled to take place in Montreal from April 10 to 12, 2001. In the meantime, the event will be held in Sao Paulo June 20 and 21, 2000 and at Toronto's Metro Convention Center October 30, 31 and November 1, 2000.
FRESNO, CA - February 14 - Yosemite Technologies, Inc. today announced an agreement with Hewlett-Packard Company to bundle Yosemite's storage management software, TapeWare, with the HP SureStore family of DAT and DLT tape drives.
HP gains a powerful, comprehensive, yet intuitive backup technology that has often been rated superior to other industry-leading backup applications. The bundled solution will also provide additional support for the innovative new HP One-Button Disaster Recovery (OBDR) feature for integrated full-system restoration. HP's OBDR offers a fast, simple solution to return a server or desktop system to its normal operational state following a crash.

"Introducing TapeWare into HP SureStore products offers a complete backup solution and One-Button Disaster Recovery for Windows NT and NetWare," says Peter Doughty, marketing manager for HP Computer Peripherals Bristol. "It also provides HP SureStore customers with the ability to back up Linux servers and workstations."
OTTAWA, Canada - April 5, 2000 - Rebel.com Inc. announced the availability of an upgrade to its OfficeServer software - NetWinder OfficeServer 1.5.
NetWinder OfficeServer 1.5 is an all-in-one Internet gateway server appliance which provides small and medium-sized businesses with full Internet and local area network support. Based on the Linux operating system, NetWinder OfficeServer 1.5 is configured with a broad range of network services such as firewall/VPN, Web site hosting, Web access, file and printer sharing and e-mail.
Features and enhancements to NetWinder OfficeServer 1.5 include:
* PPPoE (PPP over Ethernet) support - PPPoE is a relatively new protocol that specifies how a computer interacts with a broadband modem (i.e., xDSL, cable, wireless, etc.) to gain access to the growing number of high-speed data networks;
* Third-party plug-in to allow developers to add applications independently to the OfficeServer;
* Dynamic Host Configuration Protocol (DHCP) client has been upgraded to support the @home cable modem service.
The security features of the OfficeServer have been significantly enhanced in order to keep pace with industry-standard security criteria. This includes more secure default firewall rules and increased privacy protection in the mail server. In addition, NetWinder OfficeServer 1.5 now allows users to securely connect to their OfficeServer-protected LAN anywhere in the world using the Virtual Private Network (VPN) software. The VPN software is bundled with the OfficeServer at no additional charge and includes 3 free client licenses. http://shop.rebel.com/netwinder/officeserver.cfm
COLUMBUS, OH, (April 11, 2000) Progressive Systems, Inc., has implemented its Phoenix Adaptive Firewall onto Cobalt Networks' platform. Designed for the many small and medium sized businesses (SMBs) that are implementing full-time Internet access but have growing security concerns, the new firewall appliance can be quickly installed and configured to provide very effective protection that is transparent to the network. This new implementation joins Progressive's existing firewall appliance based on the Cobalt Qube and Cobalt RaQ server appliances.
The fully featured product will sell for $4495 for unlimited users.
http://www.progressive-systems.com
Mountain View, Calif., April 24, 2000- Cobalt Networks, Inc. announced today that it was named as the worldwide leader in unit market share for server appliances in the 1999 Server Appliance report published by Dataquest on April 3, 2000. The Dataquest report names Cobalt as the unit share leader for the total market, as well as for the entry-level and midrange market segments.
Server appliances are application-specific devices. Cobalt's products are affordable, easy to use, and designed to support one or a few applications well. Cobalt's product line includes server appliances focused on Web and e-commerce hosting, e-mail, firewall, caching, and many other Internet-based applications. In addition, the Cobalt Developer's Network, launched in the first quarter of 2000, has attracted over 600 application developers to the Cobalt server appliance platform.
SPRING INTERNET WORLD -- Apr. 5, 2000 -- DataDirect Networks, Inc., a leader in SAN network infrastructure solutions, today is releasing the SAN DataDirector™, the world's first storage area network (SAN) appliance. The SAN DataDirector is an intelligent network infrastructure device incorporating the functionality of next generation SANs into a single integrated, reliable, plug and play appliance that makes it easier to build and manage a SAN.
The SAN DataDirector allows UNIX, Linux, clustered Linux, Sun, SGI, AIX, Mac and Windows NT servers and workstations to access shared storage resources, permitting workgroups and clusters within a heterogeneous computing environment with incompatible operating systems to simultaneously share storage resources. The SAN DataDirector easily plugs into existing servers and storage, increasing their capabilities while providing investment protection. This capability enables data access between SAN and client-server users bridging the SAN and NAS environments, while also upgrading legacy storage into intelligent storage resources.
netjammer.com is looking for a Linux-capable webmaster who can handle the administrative / webmaster / maintenance side of the site.
It's a very uncorporate "musician" and internet environment.
The Technical Manager's desired skills include all aspects of server administration and webmastering, HTML, Javascript, Perl, rich media (audio/video) development and delivery, writing ad copy, tech support, etc.
Pay is negotiable. Company is located in Hollywood.
PLEASE CONTACT: Chapin Hemphill at jobs@netjammer.com
LinuxDevices.com has published articles on embedded systems.
TeamLinux is a professional services organization that provides customers completely integrated solutions enabled by open source / Linux technology. TeamLinux offers consulting, design, integration, migration, training and support services for many business applications, including: e-commerce, Internet-enabled business- to-business, Computer Aided Design (CAD), Electronic Design Automation (EDA), embedded systems and Internet appliance applications.
http://userlocal.com is a site with information for new Linux users. (from comp.os.linux.announce)
Linux NetworX offers a reliable and cost-effective clustering alternative to the "super computer" for organizations demanding high performance and extremely low failure rates.
San Jose, CA. -- March 8, 2000 -- Loki Entertainment Software, the leading publisher of commercial games for the Linux operating system, announced a multi-company project to create and distribute OpenAL, an open-source, cross-platform 3D-Audio library.
3D-Audio greatly improves the immersive quality of a game. It allows games and other applications to take advantage of powerful spatialized sound effects, including distance and direction attenuation, panning and reverb. With these features, gamers can, for example, determine by sound the distance and direction of an explosion in a 3D-gaming environment.
"OpenAL represents a milestone for Linux and for the game industry in general," said Scott Draeker, president, Loki Entertainment Software. "Until now, games running on Linux have not had access to the advanced 3D-Audio features available on other platforms. OpenAL provides those advanced features with an open-source, nonproprietary implementation which is available not just for Linux, but for Windows and MacOS games as well. What SGI's OpenGL has done for 3D-Video, OpenAL will do for 3D-Audio."
Creative Technology plans to release Linux drivers that will work with OpenAL, and which natively support the advanced 3D-Audio effects which OpenAL enables. In addition, Creative is evaluating MacOS and Windows implementations of the OpenAL standard.
Loki is already incorporating OpenAL into its growing product line of AAA Linux games. In March, Loki will release the Linux version of Activision's Heavy Gear II, the first Linux game to support 3D-Audio using OpenAL.
The source code for OpenAL for Mac, Windows and Linux is freely available for download and is offered under the GNU Library General Public License (LGPL). Visit www.openal.org for more information.
Seattle, Washington--March 3, 2000-- Virtualtek/Joydesk.com announces the release of the wireless version of the popular Joydesk software for March 15, 2000. The software enables users to access all the functionality of their web-based collaboration applications from the minibrowser of their Internet ready cellular phone.
Joydesk is a fully featured information management suite of applications with its own built-in mail server. Users can send or receive e-mail, check their schedule, access contact information, manage tasks, share information or receive urgent e-mail notification from the web at anytime, from any location and through any browser, web-enabled PDA, and now, from any Internet ready phone.
A free 30 day trial version is available for download.
BOULDER, CO - April 10, 2000 - Rogue Wave Software announced the release of its comprehensive collection of C++ components for Linux. The C++ Toolkit for Linux enables developers to easily create applications on the Linux platform.
This special edition Linux-Only CD includes ports of Rogue Wave's most popular cross-platform C++ products to the Linux platform, including Standard C++ Library, Tools.h++, Threads.h++, Tools.h++ Professional, DBTools.h++ and Money.h++. This new offering provides Linux developers with basic data structures, threading classes, classes for accessing relational databases and classes for business analysis and currency conversion.
Pricing starts at $545 and includes a license and one year of Silver Technical Support and product updates.
IBM has recently developed a WYSIWYG HTML editor for Linux (beta). TopPage is an award-winning WYSIWYG HTML editor which allows you to create dazzling Web pages in minutes without any HTML knowledge or programming skills. It is suitable for beginners and experts. It includes all the tools necessary to create Web pages, including a WebArt Designer which lets you create logos and buttons, and a Web Animator which lets you create animated GIF files in just a few simple steps. The program includes up-to-date technology -- such as Cascading Style Sheets, Java applets and Dynamic HTML. TopPage gives you the capability to build lively pages with state-of-the-art Web technology. TopPage brings together everything you need to build pages and publish your site in one package. Now the Linux version (beta) is available. You can download it free from http://www.ibm.com/jp/toppage/ and use it until December 31, 2000.
HancomLinux, Inc., a subsidiary of Haansoft which holds 80% of Korea's word processor market, has completed development of a Linux version of its word processor targeted at overseas markets and will begin sales this month.
The Chinese version, "Wenjie," is still undergoing tests with major Linux PC vendors, and distributors have yet to be selected. It is divided into two editions: one for users in mainland China, and one for users in Hong Kong and Taiwan. The general users' edition will also include a Windows version.
"HancomWord," which targets English-speaking regions, will begin sales immediately after beta testing is completed. HancomWord is reported to be particularly convenient in that it allows selection of various languages -- not only English or French, but also German, Greek, Russian, Spanish, Japanese and many more -- without the nuisance of extra procedures. The company is currently running a free beta-test download (http://www.hancom.com/english) prior to the official release. This new word processor is expected to emerge as a new contender in the Linux Office market with its compatibility with HTML, text, and MS Word documents.
The Japanese version will be under the name "Are-A Hangul 2000" and will be exhibited at the Tokyo Linux Convention to be held next month.
The successful porting of LinuxHangul from Windows Hangul, which has undergone continuous improvement for the past 10 years, was made possible by the successful application of 'Wine' technology. Wine is a development tool which facilitates the use of Windows application programs in a Linux environment. The project has been developed openly with Linux programmers from all over the world.
HancomLinux plans to seek overseas markets through cooperation with local distributors, and is reported to be first finishing development of programs including a spreadsheet, graphics and presentation software, then pursuing new markets full scale with a competitive Office suite.
A new release of Artstream, version 2.0b11-3, has now been posted. This version has several enhancements and fixes.
All the patches needed for the Mesa library are now included in the Mesa developer distribution of version 3.2 and later at http://mesa3d.sourceforge.net/devel.html#branches. This version will soon become the official release. If desired, an RPM of this release is still available from the Mediascape site.
Both the new Artstream and the Mesa rpms are available at: http://www.mediascape.com/linuxrpm.html
Documentation remains at: http://www.mediascape.com/mediaEscape/guide.html
In this release some text functionality is still omitted until we complete our licensing of certain spelling and hyphenation dictionaries for Linux. However all illustration tools should be intact.
Ottawa, Canada - April 11, 2000 - CorelDRAW 9 for Linux will be available in July, two months earlier than scheduled. In addition, Corel VENTURA Publisher 8.5 for Linux and Windows will be available by the end of this year and the free download of Corel PHOTO-PAINT 9 for Linux will be available in June.
The first beta of CorelDRAW 9 for Linux was sent to beta sites April 7.
Ottawa, Ontario - March 10, 2000 - Corel Corporation today announced it will offer a free download of Corel PHOTO-PAINT® 9 for Linux. The download version of Corel PHOTO-PAINT 9 for Linux, Corel's photo-editing, image composition and painting application, will be available in early summer.
A retail version of CorelDRAW 9 Graphics Suite for Linux which includes Corel PHOTO-PAINT 9 will ship late summer 2000 with comparable pricing to the Windows version of this product.
Cambridge, MA - March 6, 2000 - Helix Code, Inc. today unveiled a preview version of its Helix GNOME desktop -- a collection of more than 80 industry-leading software applications designed to meet every need of the Linux user.
"As open source software, GNOME is a significant step forward technologically; we have effectively leap-frogged the legacy problems which are keeping software development in a state of little progress. Innovation will finally be brought back to software. And the Helix GNOME desktop means that all of this is available to end-users with very little effort," said Miguel de Icaza, co-founder of Helix Code, who recently was named as one of the "50 Leaders of the New Millennium" by Time Magazine and CNN.
For more information about Helix GNOME, and to download the preview release version of the Helix GNOME desktop, visit the Helix Code, Inc. website at www.helixcode.com.
Helix Code Inc. is an open source software company that produces high-quality productivity applications under the terms of the GNU GPL. Helix Code, Inc., is devoted to improving GNOME, the leading desktop environment for Linux.
[An interview with Miguel de Icaza is in this issue of Linux Gazette. -Ed.]
Spiderweb Software and Boutell.com proudly present Exile III: Ruined World! Exile III: Ruined World is an epic fantasy role-playing game for Linux, featuring a fascinating plot, detailed and enormous world, and an elegant game system and interface.
What makes Exile III exceptional? Well, it features ...
http://www.spiderwebsoftware.com/exile3/linuxexile3.html
Chicago, IL (April 13, 2000)-Tripp Lite, a manufacturer of power protection products, has established another milestone in its support for the Linux development community. Tripp Lite's PowerAlert UPS Management Software has been tested and approved by Red Hat®, Inc. to install and run flawlessly using the Red Hat Linux operating system.
"Receiving Red Hat Ready certification capped a thorough testing process which proved that both Red Hat and Tripp Lite are dedicated to providing Linux users with the most comprehensive power protection solutions available," said Mike DelGrosso, Tripp Lite's Director of Software Development. "Although other UPS software has been tested by Red Hat, Tripp Lite is the only UPS manufacturer that provides Red Hat's customers with a complete UPS software source code. This allows developers to customize not only basic UPS shutdown functions, but the full range of intelligent UPS control as well."
Greenwich, CT -- Web Development with JavaServer Pages, by Duane K. Fields and Mark A. Kolb, is the first book to systematically cover everything a developer needs to know to create effective web pages and web-based applications with JSP.
Unlike older technologies such as ASP and CGI scripts, JSP provides full access to all the Java APIs, enabling a web developer to tap the power of one of the largest and most refined libraries of reusable code in existence--and all within the simple, familiar HTML format. But until now, web developers had few places to turn to learn how to take full advantage of this new technology.
Web Development with JavaServer Pages covers the entire JSP development process from start to end, from an enterprise perspective.
Contents
Chapters 1 and 6
(PDF format)
The book is also available electronically (14 MB PDF file), in color and searchable, for $13.50 (70% less than the hard copy price). The cost can be applied toward a later on-line purchase of the printed book.
First-round entries in the Software Carpentry design competition
ChangingPages 3.0 is a web authoring tool released under the GNU Public License. It requires Perl 5 and MySQL. (Psand)
Leafwa is a web-based administration program for Leafnode, a small nntp news server.
Easy Software Products' ESP Print Pro v4.0.4 is a complete printing solution for UNIX. It prints PostScript, PDF, GIF, TIFF, PNG, JPEG, SGI RGB, etc., to over 1600 printers via serial, parallel and network connections.
Zend Optimizer for PHP 4.0 betas 1 and 2 are available for download.
iHTML Merchant e-commerce transaction service "now offers more payment processors than any of their competitors" according to their press release. (Inline Internet Systems, Inc.)
[Note to Inline: Linux does not have a stock symbol! "LNUX" refers to only one Linux-using company among many. -Ed.]
LinkScan Enterprise & Enhancements 7.0 is a scalable, industrial-strength tool for doing link checking, HTML validation, web site management and creating site maps. Three other mutually-compatible LinkScan products are available for workstations and servers. (Electronic Software Publishing Corporation (Elsop))
Axis Communications has open-sourced its Linux drivers for Bluetooth, a technology for wireless communications between mobile phones and other portable devices. Axis also produces a Journaled Flash File System (JFFS) for Flash-ROM's. Axis has also released a Linux-based Axis 2100 Network Camera. It does not require a PC; it connects directly to the network and is controlled from a web browser. http://developer.axis.com
iServer is a platform-independent application/web server written entirely in Java. A 90-day free preview is at http://www.servertec.com. (Servertec)
Aladdin Expander beta uncompresses/decodes files from a variety of Unix, Windows and Macintosh formats. Linux/Intel version is available at http://www.aladdinsys.com/expander/expander_linux_login.html. Linux/Macintosh version is expected soon.
Voodoo3 3D graphics-card drivers are available from http://www.xig.com for US$29.
Aestiva 1.8 is a web-based operating system now with improved scalability, including the ability for dynamic sites to operate across multiple servers (called "server-jumping").
Chili!Soft Active Server Pages for Linux. Also available preinstalled on Cobalt's RaQ 3 server appliance.
NetLedger is a web-based accounting solution for small businesses. Its new Data Center represents the largest deployment of Linux on Oracle worldwide.
Active Perl 5.6 is a binary distribution of Perl for Linux, Windows and Solaris that is faster to install, includes a Perl Package Manager (PPM) for installing modules, and runs up to 48x quicker than the standard Perl. (ActiveState)
Parasoft has three programs that run on Linux. SiteRuler checks HTML files for bad links, spelling errors, orphaned files, and non-standard HTML. CodeWizard checks C/C++ programs for coding-standard violations. Insure++ is an automatic runtime error detection tool for C/C++.
SecureNet PRO 3.0 is an enterprise-scalable network monitoring and intrusion detection platform. (MimeStar, Inc.; MicroNetics, Inc.)
Omnis Studio 2.4 makes learning 4GL and OO easier. A demonstration copy can be downloaded from the web site. (Omnis Software)
Enhydra, an open-source Java/XML application server has been selected to power AnywhereYouGo.com, a community site for wireless Internet developers and IT managers. (Lutris Technologies, Inc.)
Vividata, Inc. has reduced the prices of its OCR Shop, ScanShop and PostShop scanning and printing software for personal and non-profit users.
From J. David Peet on Thu, 30 Mar 2000
Hi,
I just ran across your article
www.linuxdoc.org/LDP/LG/issue50/tag/26.html that talks a (tiny) bit about Win4Lin.
FYI, Win4Lin is now available. And if you are interested, the full documentation is on-line on the TreLOS web site. www.trelos.com. You can also order it via this web site.
In case you did not know, the Win4Lin technology has a long history as "Merge" for SCO Unix. SCO has been an OEM of our Merge technology for years. Win4Lin is the Linux version of the existing current technology.
I didn't know that. I thought DOS/MERGE was from a company called "Locus" or something like that.
One minor point -- Win4Lin is not a "clone" of VMWare as such. They both provide a virtual machine to run Windows in on Linux, but there are significant differences. Refer to the new "white-paper" document: http://www.trelos.com/trelos/Trelos/Products/Win4Lin_Whitepaper.htm Near the end are two paragraphs that compare and contrast Win4Lin, WINE and VMWare.
-Thanks -David Peet david.peet@trelos.com
I probably shouldn't have used the word "clone" --- since it isn't all that precise. Obviously, in light of Win4Lin's heritage it might be more appropriate to say that VMWare is a "clone" of Win4Lin's predecessor. MERGE is the grandaddy of MS-DOS emulators for UNIX.
Anyway, I'll let people make up their own mind based on their own reading and experience.
I haven't actually used any DOS or MS Windows software in years (only the occasional, blessedly brief trifle to help someone out here or there). So even if you were to send a copy to me for my evaluation I can't promise that I'd ever get around to trying it. (I think I have a VMWare CD around here somewhere -- an eval copy or some such). Heather, my editor and wife, still uses MS-Windows occasionally. I know she's installed DOSEMU, and WINE and used them a bit (DOSemu extensively). I've installed and played with DOSemu (helped someone with it at an installfest a couple weeks ago, too). However, I've never even tried WINE!
Anyway, good luck on your new release.
Answered By J. David Peet on Thu, 30 Mar 2000
Jim Dennis wrote:
In case you did not know, the Win4Lin technology has a long history as "Merge" for SCO Unix. SCO has been an OEM of our Merge technology for years. Win4Lin is the Linux version of the existing current technology.
I didn't know that. I thought DOS/MERGE was from a company called "Locus" or something like that.
Yes, I was there at Locus at the very start of Merge. It's been a long path since then with some odd twists. First Locus merged with Platinum, and Merge continued to be developed, including the current SCO Merge 4 version with win95 support. Then right before CA digested Platinum, a company in Santa Cruz, DASCOM, bought (rescued!) the Merge technology out from Platinum and hired some of us old-time Merge developers to form a company named "TreLOS" to take the technology forward, including porting it to Linux. (Insert danger music here.) Then before TreLOS could be spun off as its own company, IBM bought DASCOM, for reasons having nothing at all to do with Merge/TreLOS. Then in February IBM finished spinning TreLOS off as its own company. We are currently a (very small) privately held company with NO affiliation with IBM and NO IBM technology. (IBM for some reason wanted that to be clear.) Once we escaped from IBM it took a bit more than a month to set up the infrastructure to be able to release the product. It was getting caught up in the IBM acquisition of DASCOM that prevented us from releasing the product last fall as we had originally planned. The Win4Lin 1.0 product has actually been ready for months now. All that time was not completely wasted because IBM let us have an extended semi-secret beta program, so it's actually been in real use for quite a while for a "1.0" version product.
So that's the history to this point. Perhaps more than you wanted to know.
... Anyway, good luck on your new release.
-Thanks -David
P.S. Now that we are launching Win4Lin 1.0, having reviews done is a Good Thing. So if you or Heather would like to do a review of it that is extremely easy to arrange.
From Tim Moss on Thu, 30 Mar 2000
I'm trying to extract a block of text from a file using just bash and standard shell utilities (no perl, awk, sed, etc). I have a definitive pattern that can denote the start and end or I can easily get the line numbers that denote the start and end of the block of text I'm interested in (which, by the way, I don't know ahead of time. I only know where it is in the file). I can't find a utility or command that will extract everything that falls between those points. Does such a thing exist?
Thanks
awk and sed are considered to be "standard shell utilities." (They are part of the POSIX specification).
The sed expression is simply:
sed -n "$begin,${end}p" ...
... if begin and end are line numbers.
For patterns it's easier to use awk:
awk "/$begin/,/$end/" ...
... Note: begin and end are regexes and should be chosen carefully!
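For instance, a quick demonstration of both forms (the sample file, markers, and line numbers here are made up for illustration):

```shell
# Build a throwaway sample file, then extract ranges from it both ways.
printf '%s\n' alpha START bravo charlie STOP delta > /tmp/demo.$$

# by line number: lines 2 through 5
sed -n '2,5p' /tmp/demo.$$

# by pattern: from the first /START/ line through the next /STOP/ line
awk '/START/,/STOP/' /tmp/demo.$$

rm -f /tmp/demo.$$
```

Both commands print the same four lines here: START, bravo, charlie, STOP.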
However, since you don't want to do it the easy way, here are some alternatives:
------------------ WARNING: very long -------------------------
If it is a text file and you just want some lines out of it try something like:
#!/bin/sh
# shextract.sh
# extract part of a file between a
# pair of globbing patterns

[ "$#" -eq "2" ] || {
    echo "Must supply begin and end patterns" >&2
    exit 1
}

begin=$1
end=$2
of=""   ## output flag

while read a; do
    case "$a" in
        "$begin") of="true";;
        "$end")   of="";;
    esac
    [ -n "$of" ] && echo $a
done
exit 0
... this uses no external utilities except for the test command ('[') and possibly the 'echo' command from VERY old versions of Bourne sh. It should be supported under any Bourne shell derivative. Under bash these are builtin commands.
It takes two parameters. These are "globbing" patterns NOT regular expressions. They should be quoted, especially if they contain shell wildcards (?, *, and [...] expressions).
Read any good shell programming reference (or even the rather weak 'case...esac' section of the bash man page) for details on the acceptable pattern syntax. Note because of the way I'm using this you could invoke this program (let's call it shextract, for "shell extraction") like so:
shextract "[bB]egin|[Ss]tart" "[Ee]nd|[Ss]top"
... to extract the lines between any occurrence of the term "begin" or "Begin" or "start" or "Start" and any subsequent occurrence of "end" or "End" or "stop" or "Stop."
Notice that I can use the (quoted) pipe symbol in this context to show "alternation" (similar to the egrep use of the same token).
This script could be easily modified to use regex's instead of glob patterns (though we'd either have to use 'grep' for that or rely on a much newer shell such as ksh '93 or bash v. 2.x to do so).
This particular version will extract all regions of the file that lie between our begin and end tokens.
To stop after the first we have to insert a "break" statement into our "$end") ...;; case. To support an "nth" occurrence of the pattern we'd have to use an additional argument. To cope with degenerate input (cases where the begin and end tokens might be out of order, nested or overlapped) we'd have to do considerably more work.
As written this example requires exactly two arguments. It will only process input from stdin and only write to stdout. We could easily add code to handle more arguments (first two are patterns, 'shift'ed out rest are input file names) and some options switches (for output file, only one extraction per file, emit errors if end pattern is found before start pattern, emit warnings if no begin or subsequent end pattern is found on any input file, stop processing on any error/warning, etc).
Note: my exit 0 may seem superfluous here. However, it does prevent the shell from noting that the program "exited with non-zero return value" or warnings to that effect. That's due to my use of test ('[') on my output flag in my loop. In the normal case that will have left a non-zero return value since my of flag will be zero length for the part of the file AFTER the end pattern was found.
Note: this program is SLOW. (That's what you get for asking for it in sh). Running it on my 38,000 line /usr/share/games/hangman-words (this laptop doesn't have /usr/dict/words) it takes about 30 seconds or roughly only 1000 lines per second on a P166 with 16Mb of RAM. A binary can do better than that under MS-DOS on a 4Mhz XT!
BUG: If any lines begin with - (dashes) then your version of echo might try to treat the beginnings of your lines as arguments. This might cause the echo command to parse the rest of the line for escape sequences. If you have printf(1) available (as a built-in to your shell or as an external command) then you might want to use that instead of echo.
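A minimal sketch of the printf substitution (the sample line is made up):

```shell
# echo may swallow a leading "-n" or "-e" as an option;
# printf '%s\n' treats its second argument strictly as data.
line="-n this line starts with a dash"
printf '%s\n' "$line"
```

This prints the line verbatim, leading dash and all.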
To do this based on line numbers rather than patterns we could use something more like:
#!/bin/sh
# lnextract.sh
# extract part of a file between
# line numbers $1 and $2

function isnum () {
    case "$1" in
        *[^0-9]*) return 1;;
    esac
}

[ "$#" -gt "2" ] || {
    echo "Must supply begin and end line numbers" >&2
    exit 1
}
isnum "$1" || {
    echo "first argument (first line) must be a whole number" >&2
    exit 1
}
isnum "$2" || {
    echo "second argument (last line) must be a whole number" >&2
    exit 1
}

begin=$1
end=$2
[ "$begin" -le "$end" ] || {
    echo "begin must be less than or equal to end" >&2
    exit 1
}
shift 2

for i; do
    [ -r "$i" -a -f "$i" ] || {
        echo "$i should be an existing regular file" >&2
        continue
    }
    ln=0
    while read a ; do
        let ln+=1
        [ "$ln" -ge "$begin" ] && echo $a
        [ "$ln" -lt "$end" ] || break
    done < "$i"
done
exit 0
This rather ugly little example does do quite a bit more checking than my previous one.
It checks that its first two arguments are numbers (your shell must support negated character class globs for this, ksh '88 and later, bash 1.x and 2.x, and zsh all qualify), and that the first is less than or equal to the latter. Then it shifts those out of the way so it can iterate over the rest of the arguments, extracting our interval of line from each. It checks that each file is "regular" (not a directory, socket, or device node) and readable before it tries to extract a portion of it. It will follow symlinks.
It has some of the same limitations we saw before.
In addition it won't accept its input from stdin (although we could add that by putting the main loop into a shell function and invoking it one way if our arg count is exactly two, and differently, within our for loop, if $# is greater than two). I don't feel like doing that here --- as this message is already way too long and that example is complicated enough.
It's also possible to use a combination of 'head' and 'tail' to do this. (That's a common exercise in shell programming classes). You just use something like:
head -$end $file | tail -$(( $end - $begin + 1 ))
... note that the 'tail' command on many versions of UNIX can't handle arbitrary offsets. It can only handle the lines that fit into a fixed block size. GNU tail is somewhat more robust (and correspondingly larger and more complicated). A classic way to work around limitations on tail was to use tac (cat a file backwards, from last line to first) and head (and tac again). This might use prodigious amounts of memory or disk space (might use temporary files).
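As a sketch, both the plain head/tail pipeline and the tac workaround look like this (assuming GNU versions of head, tail and tac; the line numbers are examples):

```shell
begin=3 end=5
seq 1 10 > /tmp/nums.$$

# straightforward: keep the first $end lines, then the last few of those
head -n "$end" /tmp/nums.$$ | tail -n "$(( end - begin + 1 ))"

# the tac workaround: take the "tail" by reversing, heading, and reversing back
head -n "$end" /tmp/nums.$$ | tac | head -n "$(( end - begin + 1 ))" | tac

rm -f /tmp/nums.$$
```

Both pipelines print lines 3 through 5.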
If you don't want line oriented output --- and your patterns are regular expressions, and you're willing to use grep and dd then here's a different approach:
start=$(grep -b "$begin" ... )
stop=$(( $( grep -b "$end" ... ) - $start ))
dd if="$file" skip=$start count=$stop bs=1
This is not a shell script, just an example. Obviously you'd have to initialize $begin, $end, and $file or use $1, $2, and $3 for them to make this into a script. Also you have to modify those grep -b commands a little bit (note my ellipses). This is because grep will be giving us too much information. It will be giving a byte offset to the beginning of each pattern match, and it will be printing the matching line, too.
We can fix this with a little work. Let's assume that we want the first occurrence of "$begin" and the last occurrence of "$end". Here are the commands that will just give us the raw numbers:
grep -b "$begin" "$file" | head -1 | {
    IFS=: read b x
    echo $b
}
grep -b "$end" "$file" | tail -1 | {
    IFS=: read e x
    echo $e
}
... notice I just grep through head or tail to get the first or last matching line, and I use IFS to change my field separator to a ":" (which grep uses to separate the offset value from the rest of the line). I read the line into two variables (separated by the IFS character(s)), and throw away the extraneous data by simply echoing the part I wanted (the byte offset) back out of my subshell.
Note: whenever you use or see a pipe operator in a shell command or script --- you should realize that you've created an implicit subshell to handle that.
Incidentally, if your patterns might have a leading - (dash) then you'll have problems passing them to grep. You can massage the pattern a little bit by wrapping the first character with square brackets. Thus "foo" becomes "[f]oo" and "-bar" becomes "[-]bar". (grep won't consider an argument starting with [ to be a command line switch, but it will try to parse -bar as one).
This is easily done with printf and sed:
printf "%s" "$pattern" | sed -e 's/./[&]/'
... note my previous warning about 'echo' --- it's pretty permissive about arguments that start with dashes that it doesn't recognize, it'll just echo those without error. But if your pattern starts with "-e" or "-n" it can affect how the rest of the string is represented.
Note that GNU grep and echo DON'T seem to take the -- option that is included with some GNU utilities. This would avoid the whole issue of leading dashes since this conventionally marks the end of all switch/option parsing for them.
Of course you said you didn't want to use sed, so you've made the job harder. Not impossible, but harder. With newer shells like ksh '93 and bash 2.x we can use something like:
[${pattern:0:1}]${pattern:1}
(read any recent good book on shell programming to learn about parameter expansion).
You can use the old 'cut' utility, or 'dd' to get these substrings. Of course those are just as external to the shell as perl, awk, sed, test, expr and printf.
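For example, the bracket-wrapping trick can be done with cut instead of sed (the pattern value here is illustrative):

```shell
pattern="-bar"
first=$(printf '%s' "$pattern" | cut -c1)    # first character only
rest=$(printf '%s' "$pattern" | cut -c2-)    # everything after it
printf '%s\n' "[$first]$rest"                # prints: [-]bar
```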
If you really wanted to do this last sort of thing (getting a specific size substring from a variable's value, starting from an offset in the string, using only the bash 1.x parameter expansion primitives) it could be done with a whole lot of fussing. I'd use ${#varname} to get the size, a loop to build temporary strings of ? (question mark) characters of the right length, and the ${foo#} and ${foo%} operators (stripping patterns from the left and right of a variable's value respectively) to isolate my substring.
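Here's a rough sketch of that fussing (the function name and arguments are mine; it assumes the string contains no glob metacharacters):

```shell
# substr STRING OFFSET LENGTH -- emulate ${str:off:len} using only
# ${#var}, ${var#pattern} and ${var%pattern} (bash 1.x primitives).
substr () {
    s=$1
    # build a glob of $2 question marks, strip that many chars from the left
    pad=""; i=0
    while [ "$i" -lt "$2" ]; do pad="$pad?"; i=$(( i + 1 )); done
    s=${s#$pad}
    # build a glob long enough to strip everything past $3 chars from the right
    pad=""; i=$(( ${#s} - $3 ))
    while [ "$i" -gt "0" ]; do pad="$pad?"; i=$(( i - 1 )); done
    echo "${s%$pad}"
}
substr "hello world" 6 5    # prints: world
```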
Yuck! That really is as ugly as it sounds.
Anyway. I think I've said enough on the subject for now.
I'm sure you can do what you need to. A lot of it depends on which shell you're using (not just csh vs. Bourne, but ksh '88 vs. '93 and bash v1.14 vs. 2.x, etc.) and just how rigid you are about that constraint about "standard utilities".
All of the examples here (except for the ${foo:} parameter expansion) are compatible with bash 1.14.
(BTW: now that I'm really learning C --- y'all can either rest easy that I'll be laying off the sh syntax for awhile, or lay awake in fear of what I'll be writing about next month).
Here's a short GNU C program to print a set of lines between one number and another:
/* extract a portion of a file from some beginning line, to
 * some ending line
 * this functions as a filter --- it doesn't take a list
 * of file name arguments.
 */
#define _GNU_SOURCE     /* for the GNU getline() function */
#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

int main (int argc, char * argv[] )
{
    char * linestr = NULL;
    size_t linelen = 0;
    long begin, end, current = 0;

    if ( argc < 3 ) {
        fprintf(stderr, "Usage: %s begin end\n", argv[0]);
        exit(1);
    }
    begin = atol(argv[1]);
    if ( begin < 1 ) {
        fprintf(stderr, "Argument error: %s should be a number "
            "greater than zero\n", argv[1]);
        exit(1);
    }
    end = atol(argv[2]);
    if ( end < begin ) {
        fprintf(stderr, "Argument error: %s should be a number "
            "no less than %s\n", argv[2], argv[1]);
        exit(1);
    }
    /* read until we've printed line "end" (inclusive) or hit EOF */
    while ( current < end && getline(&linestr, &linelen, stdin) > -1 ) {
        ++current;
        if (current >= begin) {
            printf("%s", linestr);
        }
    }
    exit(0);
}
This is about the same length as my shell version. It uses atol() rather than strtol() for the argument-to-number conversion. atol() (ASCII to long) is simpler, but can't convey errors back to us. However, I require values greater than zero, and GNU glibc's atol() returns 0 for strings that can't be converted to longs. I also use the GNU getline() function --- which is non-standard, but much more convenient and robust than fussing with scanf(), fgets(), sscanf(), and getc().
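For comparison (my aside, not part of the original answer), the same line-range extraction can be done as a one-line filter with standard sed:

```shell
# print lines 2 through 4 of the input, inclusive; in a script you'd
# parameterize it as: sed -n "${begin},${end}p"
printf '%s\n' one two three four five | sed -n '2,4p'
# prints:
#   two
#   three
#   four
```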
Tim, I've copied this to my Linux Gazette editor, since it's a pretty general question and a fairly detailed answer. Unless you have any objection it will go into my column in the next issue. The sender's e-mail address and organizational affiliation are always removed from answer guy articles unless they request otherwise.
From jashby on Sun, 02 Apr 2000
Hello ,
My name is Jason Ashby i work for a computer company and am really new to Linux i have been given the task to make a zip drive visible accross a network, it is loaded on a linux machine and i can get the AIX machine to mount it but we can not copy files to or from the zip drive on AIX could you see it within your power to tell me why .
Thanks Jason Ashby
Unfortunately your question is unclear. You don't tell me which system is supposed to be the server, what sorts of systems are intended to be the clients, nor what type of filesystems will be contained on the Zip media.
"make a zip drive visible accross [sic] a network"
... presumably you mean via NFS or Samba. If the client systems are UNIX or Linux you'd use NFS, if they are MS-Windows or OS/2 you'd use Samba. (If they were Apple Macs running MacOS you'd look at the netatalk or CAP packages, and if they were old MS-DOS machines you might try installing Netware client drivers on those and mars_nwe or a commercial copy of Netware on the Linux box).
Let's assume you mean to mount the Zip disks on your Linux box, and "export" them (NFS terminology) to your AIX systems. Then you'd modify your /etc/fstab to contain an entry appropriate to mount the Zip media into your file hierarchy. Maybe you'd mount it under /mnt/zip or under /zip. (You might have multiple fstab entries to support different filesystems that you might have stored on your Zip media. In most cases you'd use msdos, or one of the other variants of Linux' MS-DOS filesystem: umsdos, vfat, or uvfat).
Then you'd edit your /etc/exports file to export that to your LAN (or to specific hosts or IP address/network patterns).
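To sketch the whole arrangement (the device name, mount point, and client host name here are illustrative assumptions, not values from your site):

```shell
# /etc/fstab -- a hypothetical entry for Zip media carrying a FAT filesystem
# (/dev/hdd4 is a common spot for an IDE Zip; yours may differ)
#   /dev/hdd4   /mnt/zip   vfat   noauto,user   0 0
#
# /etc/exports -- export the mounted Zip tree read/write to one AIX client
#   /mnt/zip    aixhost(rw)

mount /mnt/zip    # mount the media, per the fstab entry above
exportfs -a       # re-export everything in /etc/exports (nfs-utils);
                  # older setups may need rpc.mountd/rpc.nfsd restarted instead
```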
Try reading the man pages for /etc/fstab and /etc/exports and perusing the following HOWTOs:
- Zip Drive Mini-HOWTO
- http://www.linuxdoc.org/HOWTO/mini/ZIP-Drive.html
- NFS HOWTO
- http://www.linuxdoc.org/HOWTO/NFS-HOWTO.html
And the excellent new:
- Filesystems HOWTO
- http://www.linuxdoc.org/HOWTO/Filesystems-HOWTO.html
by Martin Hinner.
If that doesn't do the trick, try clarifying your question. It often helps to draw a little map (ASCII art is good!).
From David Buckley on Wed, 05 Apr 2000
I am new to linux and am wondering if there is an easy way to access my Win98 disk from within linux. i have lots of files (mp3s, etc.) that i would like to use in linux. what is the easiest way to get them?
Thanks, David Buckley
I'm guessing you're talking about accessing files that are on your local system (that you have a dual-boot installation).
In that case use the 'mount' command. For example the first partition on your first IDE drive is /dev/hda1 (under Linux). If that's your C: drive under MS-DOS/Windows then you can use a command like:
mkdir /mnt/c && mount -t vfat /dev/hda1 /mnt/c
... (as the 'root' user) to make the C: directory tree appear under /mnt/c.
Once you've done that you can use normal Linux commands and programs to access those files.
That will only mount the filesystem for the duration of that session (until you reboot or unmount it with the 'umount' command). However, you can make this process automatic by adding an entry to your /etc/fstab (filesystem table).
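Such an /etc/fstab entry might look like the following sketch (the mount point and options are illustrative; 'user' lets ordinary users mount it, and 'noauto' keeps it from being mounted automatically at boot):

```shell
# /etc/fstab -- hypothetical entry for the Win98 C: partition
#   /dev/hda1   /mnt/c   vfat   user,noauto   0 0
#
# with that in place, any user can simply run:
#   mount /mnt/c
```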
For more info on this read the appropriate sections of the Linux Installation & Getting Started Guide (*), the System Administrator's Guide (*) (both part of the LDP at http://www.linuxdoc.org) and the mount(8), and fstab(5) man pages with the following command:
man 8 mount; man 5 fstab
(Note: in the first case you do need to specify the manual chapter/section number, 8, since there is a mount() system call which is used by programmers, particularly for writing programs like the 'mount' command itself. When you see references to keywords in the form foo(1), it's a hint that foo is documented in that chapter of the man pages: 1 is user commands, 2 is system calls, 3 is library functions, 4 is devices, 5 is file formats, etc.)
- (*) LIGS: Chapter 4, System Administration
- http://www.linuxdoc.org/LDP/gs/node6.html#SECTION00640000000000000000
- (*) LSAG: Filesystems
- http://www.linuxdoc.org/LDP/sag/x1038.html
To access your MS-DOS formatted floppies it's often easier to use the mtools commands. Look at the mtools(1) man pages for details on that.
Here are a couple of other HOWTOs to read through:
- From DOS/Windows to Linux HOWTO
- http://www.linuxdoc.org/HOWTO/DOS-Win-to-Linux-HOWTO.html
- Filesystems HOWTO
- http://www.linuxdoc.org/HOWTO/Filesystems-HOWTO.html
In general you want to look through these to find answers to the most common Linux questions. (As you might imagine, you've asked a very common one here.) In fact it's number 4.2 in the FAQ: http://www.linuxdoc.org/FAQ/Linux-FAQ-4.html#ss4.2
You can also search the Linux Gazette at:
- Full search on archive Linux Gazette Search
- http://www.linuxgazette.com/wgindex.html
Although I can see how you might not know what terms to search on until you've covered some of the basics in the LDP guides, or any good book on Linux.
There are also ways to access your Win '9x "shares" (network accessible files, or "exported" directories) from Linux using smbfs.
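A sketch of that approach, assuming Samba's client tools are installed (the host, share, and user names are made up, and smbmount's exact syntax varied between Samba 2.0.x releases --- check smbmount(8) on your system):

```shell
# list the shares the Win98 box is offering
smbclient -L win98box

# mount one share under /mnt/win98 via the smbfs kernel filesystem
mkdir -p /mnt/win98
smbmount //win98box/mp3s /mnt/win98 -o username=david
```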
From Paul Ackersviller on Wed, 05 Apr 2000
Jim,
I believe I forgot to say thanks for having written the original answer as it was. I've programmed shells for ages, but have never had occasion to use co-processes. Seeing examples of how it's done are alway a good thing.
-- Paul Ackersviller
You're welcome. I've never actually used them myself. However, I was jazzed to learn how they actually work when someone I was working with showed me an example.
Sometimes I take advantage of being "The Answer Guy" and grab any pretense to show off some neat trick that I've discovered or been shown. (I usually try to give credit where credit is due --- but sometimes that's pretty ambiguous and doesn't fit into the flow of what I'm typing.)
Anyway, I'm a firm believer in having a full toolbox. You often won't know what tool would do the trick unless you've seen a wide enough variety of tools to recognize a nail vs. a screw and can associate one with a hammer and the other with a screwdriver.
From Ranone7 on Wed, 05 Apr 2000
At this web site http://www.linuxmall.com/product/01462.html I see the title "Red Hat Linux Deluxe for Intel" Is there a Linux for AMD out there? or can I use the above linux version with an AMD-Athlon.
Thank you
The packaging is suffering from a compromise. It's trying not to sound too technical. Red Hat Linux for Intel should work on any x86 and compatible CPUs. Note that Mandrake requires at least a Pentium (it won't work on old 486 and 386 systems).
What Red Hat Inc. was trying to do with this verbiage is distinguish that box from the versions that they have available for SPARC and Alpha based systems. Eventually they'll probably have a PowerPC package available as well.
Many other distributions are similarly available on several platforms.
Answered By Martin Pool on Thu, 06 Apr 2000
On Wed, 5 Apr 2000, Jim Dennis wrote:
> At this web site http://www.linuxmall.com/product/01462.html I see
>the title "Red Hat Linux Deluxe for Intel" Is there a Linux for
>AMD out there? or can I use the above linux version with an
>AMD-Athlon.
>Thank you
> The packaging is suffering from a compromise. It's trying not to sound
> too technical. Red Hat Linux for Intel should work on any x86 and
> compatible CPUs. Note that Mandrake requires at least a Pentium (it
> won't work on old 486 and 386 systems).
Good explanation. IIRC Athlons are only supported in 2.2.something, so they'll also need a recent distribution. I guess any RedHat version on sale these days will be OK, but notably Debian slink/stable will not boot.
Thanks for that note [from one of the guys on the Linuxcare list that now receives answerguy responses].
I remember hearing about Athlon problems, but I didn't ever get the full story. I was spoiled by the fact that most x86 compatible chips really are x86 COMPATIBLE. I still don't know what the whole deal with that Athlon chip is. I'll BCC someone on this to see if he can clue me in.
Answered By David Benfell on Thu, 6 Apr 2000
The story, as I was able to piece it together, is that the problem was found and fixed in the 2.3.19 kernel. The correction had to do with Memory Type Range Register (MTRR) code. This patch was backported to, possibly the 2.2.12 kernel, and, almost certainly, the 2.2.13 kernel.
However, it still seems to have been an issue with the Mandrake 6.5 distribution, which had a 2.2.12 kernel. On the other hand, my neighbor just installed Red Hat 6.2, with, I think, a 2.2.12 kernel (but the site won't tell), on an Athlon. So I'm confused.
David Benfell

[ So, if you know more about the Athlon MTRR mystery, enlighten us please! -- Heather ]
From Le, Dong, ALNTK on Fri, 07 Apr 2000
Hello "The Answer Guy",
My name is Dong Le. I'm quite new to Linux. Since I come from Unix world, I try to use Unix concepts to apply on Linux. Some times it works, most of the time does not.
Anyway, I have Redhat 6.1 installed on my 2 PC intel-based. I tried to use rcp to remote copy files from one PC to another. I got the error: "permission denied" from other PC. I have a file ".rhosts" setup to give permission to other PC. I use "octet format" in all of files/commands so DNS/NIS are not involved at all.
My questions are:
- Why do I have this error?
- Later on I found out that Linux is using PAM to do authentication. For rcp, it is using /etc/pam.d/rsh.conf to authenticate. However, I can not find any information about PAM modules (pam_rhosts_auth.so, for example) regarding how it works. Do you know where I can obtain information about particular PAM module?
Thanks a lot, Dong Le,
Short answer: Use ssh!
There are a few problems here. First, I've seen versions of rshd (the rsh daemon) that would not seem to accept octet addresses. More importantly many Linux distributions are configured not to respect your ~/.rhosts files.
You are correct that you have to co-ordinate your policy through PAM if your system has the "Pluggable Authentication Modules" suite of programs installed. The configuration file would be /etc/pam.d/rsh. Here's the default that would be installed by Debian:
#%PAM-1.0
auth     required  pam_rhosts_auth.so
auth     required  pam_nologin.so
auth     required  pam_env.so
account  required  pam_unix_acct.so
session  required  pam_unix_session.so
Yours would be pretty similar.
In addition you might find that you need to also modify the arguments on the in.rshd line in your /etc/inetd.conf file. For example if there's a -l option it may be causing your copy of in.rshd to ignore user ~/.rhosts files. A -h option will force it to ignore the contents of any /etc/hosts.equiv file.
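For instance, the entry might look like this sketch (the paths are typical locations, not guaranteed to match your distribution; note that no -l or -h options follow in.rshd):

```shell
# /etc/inetd.conf -- hypothetical 'shell' (rsh) service entry
#   shell   stream  tcp  nowait  root  /usr/sbin/tcpd  in.rshd
#
# after editing, tell inetd to re-read its configuration:
killall -HUP inetd
```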
(The new Debian rshd package ignores these additional options and requires that you configure your policy through the /etc/pam.d/ files. I don't know if Red Hat has modified its packages in this way for versions 6.1 or 6.2. In 6.0 I'm pretty sure that I was still able to use the command line arguments on the in.rshd entry in the /etc/inetd.conf file for this.)
Of course you can use ssh as a replacement for rsh, and have much better security as well.
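The ssh suite's scp deliberately mimics rcp's command-line syntax, so the switch is usually painless (the host and file names below are made up for illustration):

```shell
scp notes.txt otherpc:/home/dong/      # push a file to the other PC
scp otherpc:/etc/motd /tmp/            # pull a file from it
scp -r projects otherpc:backup/        # copy a directory tree recursively
```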
From Cleary, James R. on Fri, 07 Apr 2000
Jim,
I just clean installed Redhat 6.0 on my box. I can ping the
box from another machine, but I can't telnet to it, the default configuration should provide for that, shouldn't it? Any help you'd have would be great.
Sincerely, "J.C."
When you say "you can't telnet to it" what do you mean? Does the telnet client seem to just sit there for a long time? Do you get an error message that says something like "connection refused?" Does that come back immediately, or does it take a minute or two? Are you trying to telnet to it by name, or by IP address? (That basically doesn't matter as long as you're using the same form for your ping command).
I disagree with your assertion that the "default configuration should provide for that." Linux appeals to a much broader range of users than traditional, professionally managed UNIX systems. It is not appropriate to assume that all of your users want to be "telnet hosts" (servers or multi-user workstations). In addition, telnet is an old and basically deprecated means of remote access.
(Well, it should be deprecated).
You should probably use ssh, STEL, ssltelnet, or install a Kerberos or the FreeS/WAN IPSec infrastructure to provide you with an encrypted, unspoofable, unsniffable connection between your client and your server.
Please don't respond with "but I'm behind a firewall" or "this is just my home system." Those are "head in the sand" attitudes that make for a brittle infrastructure (one little crack and the whole wall collapses).
Anyway, if you've determined that telnet is really what you need, that it matches your requirements and enforces your policies to your satisfaction, then here are some pointers for troubleshooting common failures. These also apply to ssh, STEL, etc.
You said that 'ping' is working. Assuming that you are using the commands from the same host and using the same form of addressing/naming for your 'ping' and your 'telnet' commands here are the most likely problems:
* Your session might not actually be failing. It might just be taking a very long time. Search the Answer Guy back issues for the phrase "double;reverse;dns" and you'll find a number of my previous explanations about a common cause of this delay (and some pointers on what to do about it). Here are a couple of them:
- Issue 45: More "Can't Telnet Around My LAN" Problems
- http://www.linuxgazette.com/issue45/tag/11.html
- Issue 38: Telnetd and pausing
- http://www.linuxgazette.com/issue38/tag/32.html
- Issue 30: tv cards and dual monitor
- http://www.linuxgazette.com/issue30/tag_tvcard.html
* You might not have the telnet daemon package installed
on your target host. It might be installed but not properly configured in /etc/inetd.conf. That should contain a line that looks something like:
telnet stream tcp nowait telnetd.telnetd /usr/sbin/tcpd /usr/sbin/in.telnetd
* You might not have inetd running. (It's the daemon, service
program, that reads the /etc/inetd.conf, listens for connections on those ports, and dispatches the various service programs that handle those services).
(An obscure possibility is that you might have something broken in your name services handling. Your system would normally match service/protocol names to IP port numbers and transport layer protocols (TCP, UDP, etc.) using the /etc/services file. If that's corrupted, or if your /etc/nsswitch.conf is pointing your NSS libraries to query some really bogus and corrupted backend, it would be possible for inetd to end up listening on the wrong ports for many services. I've never seen anyone mess that up --- but I'm sure it's possible.)
* There may be a firewall or packet filtering system between
your client and your target. That might let ICMP ('ping' traffic) through while blocking your TCP ('telnet' on port 23) traffic.
* It's possible that your telnet client program, or one
of the client libraries, is broken, or that you have some degenerate values in your environment or even in your own .telnetrc file. The 'telnet' client exchanges a number of key environment variables with the daemon to which it connects. This is to configure your terminal type, set your username and DISPLAY values, your timezone, and some other stuff. It's possible (though unlikely) that you could be tripping over something that the 'in.telnetd' on your target really doesn't like.
Hopefully that will help.
When asking about these sorts of problems it's important to be quite specific about the failure mode (the symptoms). It is VERY important to capture and quote any error messages that you get and to explain exactly what command(s) you issued to elicit those symptoms.
Unfortunately crafting a good question is sometimes harder than answering it. (In fact I have managed to come across the answer on many occasions while I was writing up the question I intended to post. The process of rigorously describing the problem has often led me to my own answers. Sometimes I post the message with my solution anyway.)
One tip for troubleshooting this: starting with 'ping' is a good idea. It basically eliminates a number of possible problems at the low-level "is the network card configured and is a cable plugged into it?" end of things. It's also good to do a 'traceroute' to your target. This might show that your packets are being routed through some unexpected device that is filtering some of your traffic.
If you have console access to the target server (including a "carbon proxy" --- a person on the phone in front of it) then you can run (or have your proxy) run the 'tcpdump' command. This can show you the headers of every packet that comes across a given network interface. 'tcpdump' has a small language for describing the exact sorts of traffic that you want to see and filtering out all the other traffic that you don't want. If you search the LG AG archives on 'tcpdump' you should find a number of examples of how to use it. You might go for something like:
tcpdump -i eth0 -n host $YOURCLIENT and port 23
... for example. (TCP port 23 is the standard for telnet traffic).
If that doesn't work, you might consider temporarily replacing your 'in.telnetd' with an 'strace' wrapper script. Basically you just rename the in.telnetd file to in.telnetd.real and create a shell script (see below) to monitor it:
#!/bin/sh
exec strace -o /root/testing/telnet.strace /usr/sbin/in.telnetd.real
I've described this process before as well. Here's a link to one of those:
- Issue 20
- http://www.linuxgazette.com/issue20/lg_answer20.html
- Issue 17
- http://www.linuxgazette.com/issue17/answer.html
(Use your browser's "search in page" --- [Alt][F] in Netscape and the / key in Lynx --- to search on 'strace' to find the messages I'm talking about. Those older issues were from before Heather was doing my HTML for me, and splitting each message/thread into separate HTML pages like I should have been doing all along.)
That 'strace' trick is surprisingly handy. At Linuxcare we use it all the time, and it often helps us find missing config files, directories where files should be, files where directories should be, mangled permissions, and all sorts of things. There's another tool called 'ltrace' which gives similar, though slightly higher-level, information.
Using 'tcpdump' and 'strace' you can troubleshoot almost any problem in Linux. They are like the "X-Ray" machines and CAT/PET scanners for Linux tech support people. However, I don't recommend them lightly. Go through the list of common ailments that I listed first, consider using ssh instead, and then see if you need "surgical diagnostics."
From Patricia Lonergan on Fri, 07 Apr 2000
How would I find the following on the version of Unix I am using: OS type and release, node name, IP address, CPU type, CPU speed, amount of RAM, disk storage space, number of users who have ids, number of hosts known. Thanks Answer Guy
The command:
uname -a
Should give you the UNIX name (Linux, SunOS, HP-UX, etc) and the kernel version/release, architecture, and some other info. (Might also include the kernel compilation date and host)
The command:
ifconfig -a
... should give you the IP address, netmask and broadcast address of each interface in the system.
The command:
hostname
... should give you the DNS hostname that this system "thinks" it has. Looking that up via reverse DNS using a command like:
dig -x
... might be possible if you have the DNS utils package installed.
From there things start to get pretty complicated depending on which flavor of UNIX you're on, and how it's configured. (In fact there are exceptional cases where the preceding commands won't work):
I'll confine the rest of my answers to Linux.
You can get the CPU type and speed using the command:
cat /proc/cpuinfo
(assuming that your kernel is compiled with the /proc filesystem enabled and that you have /proc mounted. Those are the common case).
Linux provides a 'free' command to report on your RAM and swap availability and usage. Many UNIX systems will have the 'top' command installed. It can also provide that information (though it defaults to interactive mode --- and thus is less useful in scripts).
Any UNIX system should provide the 'mount' and 'df' commands to generate reports about what storage devices are attached and in use (mounted) and about the amount of free space available on each. Note that you should track not only your free space (data blocks) but your free inodes (management data), so use both of the following commands:
df
df -i
The 'mount' command will also report the filesystem types and any options (readonly, synchronous, etc.) that are in effect on these. You might have to use the 'fdisk -l' command to find any unmounted filesystems (that might not be listed in your /etc/fstab file) under Linux. Solaris has a similar command called prtvtoc (print volume table of contents).
Asking about the number of user accounts is straightforward on a system that is just using local /etc/passwd and /etc/group files (the default). You can simply use the following:
wc -l /etc/passwd
... to get the number of local users. Note that many of these accounts are purely system accounts, used to manage the ownership and permissions on files and system directories. If you read through that file a little bit it should be obvious which ones are which. In general Linux distributions start numbering "real" users (the ones added after the system was installed) at 500 or 1000, so all of the names with a UID above that number are "real" (or were added by the system administrator).
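A sketch of that filtering with awk, run here against a fabricated /etc/passwd excerpt (the 500 cutoff is the common convention just mentioned; some distributions start at 1000):

```shell
# count accounts with UID >= 500 -- "real" users under the usual convention
count_real_users () {
  awk -F: '$3 >= 500 { n++ } END { print n+0 }'
}

# demonstrate against a made-up /etc/passwd excerpt
printf '%s\n' \
  'root:x:0:0:root:/root:/bin/bash' \
  'daemon:x:1:1:daemon:/usr/sbin:/bin/sh' \
  'alice:x:500:500:Alice:/home/alice:/bin/bash' \
  'bob:x:501:501:Bob:/home/bob:/bin/bash' \
  | count_real_users    # prints: 2
```

On a live system you'd feed it the real file: count_real_users < /etc/passwd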
However, it's possible (particularly in UNIX system that are installed on corporate networks) that your system(s) are using a networked account system such as NIS or NIS+. You might be able to get some idea of the number of users on such a network using the 'ypcat' command like so:
ypcat passwd | wc -l
The question of "number of hosts known" is actually a bit silly. "Known" in what sense? Most systems use DNS for mapping host names to IP addresses. Thus any Internet-connected system "knows" about millions of hosts. It is possible for a sysadmin to provide the system with a special list of hosts and IP addresses using the /etc/hosts file, but this is pretty rare these days. (It's just too likely that you'll get those files out of sync with your DNS.)
I suppose you should also look for commands with the letters "stat" in their name. Read the man pages for 'vmstat', 'netstat' 'lpstat' etc. Many versions of UNIX also include a 'sar' command though that isn't common on Linux. 'rpcinfo' and 'route' are other useful commands.
This whole set of questions has a "do my homework" tone to it (particularly since it's coming from a .edu domain). Keep in mind that I've just barely scratched the surface of the information that's available to a skilled sysadmin who needs to become familiar with a new machine. There are hundreds of other things to know about such a system.
Most of the information you care about is under /etc. On a Linux system there is also quite a bit under /proc. (Most forms of UNIX that support /proc put only process information thereunder, while the Linux kernel uses it as an abstraction to export all sorts of dynamic kernel status information to user space.)
From Carlos Ferrer on Thu, 13 Apr 2000
Do you know how to connect an NT box with an OS/2 box using null modem?
Thanks, Carlos Ferrer
Yes. You plug one end of the null modem cable into a serial port on one of the boxes, and the other into a serial port on the other box. Then you install some software on each, configure and run it.
Before you ask:
NO! I don't know what NT or OS/2 native software you should use. That's your problem. I answer Linux questions. I'm the Linux Gazette Answer Guy.
So, why don't you ask the technical support people at IBM and/or Microsoft? They sold you the software. They should provide the support. The Linux community gives us software, so I give away a lot of support.
Meanwhile, you might have some luck with plain old MS-DOS Kermit. NT and OS/2 are supposed to support running DOS programs, and they should allow you to configure their DOS "boxes" (virtual machines, whatever) to have access to their respective serial ports. You can also get Kermit '95 which should work on Win '9x, NT, and OS/2. This is a commercial package. It is not free.
The C-Kermit for UNIX and Linux is also not free; though it can be freely downloaded and compiled. You should read its license to determine if you can use it freely or whether you are required to buy the C-Kermit book. (Of course you could support their project by buying the books regardless). There is also a G-Kermit which is GPL'd.
You can learn about Kermit at:
- Columbia University Kermit Project Home page
- http://www.columbia.edu/kermit
From James Knight on Thu, 13 Apr 2000
If I have an interactive program running on a VT, say tty1, can i temporarily "control" that VT from another, say tty2, or better yet, through a telnet connection (pts/n)?
For instance, i have naim running on tty1, I've been logging in via telnet, and killing that process, and start it again so they don't interfere with each other. Can I just pretend I'm at the console somehow, then when I logout, i'll still be connected to naim?
Thanks, Jay Knight
The easiest way to do this is to run 'screen'
Instead of starting interactive programs directly from your VT login shell, run 'screen' and start the program thereunder. Now you can "detach" the whole screen session (with up to 10 interactive programs running under it) and re-attach from any other sort of terminal login.
I do this routinely. I'm doing it now. Currently I'm working in an xterm which is 99 characters wide and 35 lines tall. Earlier I had connected to my system via ssh, and I "yanked" my 'screen' session over to that xterm (80 characters by 50 lines) using the following command:
'screen -r -d -e^]]'
... the -d option tells my new 'screen' command to look for another 'screen' session and detach it from wherever it is, the -r is to re-attach it to my current terminal or pseudo-terminal, and the -e option lets me set alternative "escape" and "quote" characters (more on that in a moment).
I've described 'screen' in previous LG issues. However, it is hard to find. For one thing the desired features are difficult to describe and the keywords that do cover it are far too general. For example, so far the keywords we've used are:
You: temporarily control VT
Me: attach, re-attach, detach, screen, session, yank
... see?
Anyway, here's the VERY short intro to 'screen':
First, 'screen' just starts an extra shell. So, if you just type 'screen' (most distributions include 'screen') that's pretty much all you'll get. (You might get some sort of copyright or other notice.) Now you can run programs as usual. The only big difference is that there is one key ([Ctrl]-[A] by default) which is intercepted by 'screen' rather than passed through to your programs. That one "meta" key is your trigger to fire off all of 'screen's other features. Here are a few of them (listed below as [Meta]+(key)):
[Meta] [a]       -- send a literal [Meta] to the current session
[Meta] [c]       -- create an additional shell session under this 'screen'
[Meta] [w]       -- display/list current sessions (windows)
[Meta] [A]       -- (upper case 'A') set this session's (window's) title
[Meta] [Esc]     -- go into "scrollback" and "copy" mode (keyboard cut & paste)
[Meta] [Space]   -- cycle to the next session
[Meta] [Meta]    -- switch to most recent session
[Meta] []]       -- (right square bracket) paste copy of "cut" buffer
[Meta] [?]       -- quick help page of other keystrokes
[Meta] [d]       -- detach
[Meta] [S]       -- (upper case 'S') split the screen/display (like 'splitvt')
[Meta] [Q]       -- (upper case 'Q') unsplit the screen/display
[Meta] (digit)   -- switch directly to session number (digit)
There are many others; 'screen' has a wealth of features. It is the UNIX/Linux terminal power tool. You also get the ability to share your session(s) with another user (like the old 'kibitz' package). That's very handy for doing online tutorials and tech support. You get a scrollback buffer and keyboard-driven cut and paste (with 'vi'-inspired keybindings; you can even search back through the current text and scrollback buffer).
Most of the URLs you see in the "Answer Guy" are pasted in from a 'lynx' session using 'screen.'
If you forget to detach, you can use the -d option (shown above) to remotely detach a session. You can use other options to select from multiple 'screen' sessions that you have detached. You can also run 'screen' commands to start up programs in their own screen windows.
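Much of this can be made automatic in ~/.screenrc. A minimal starter sketch (the settings and window titles are just examples, not defaults you must use):

```shell
# ~/.screenrc -- hypothetical starter configuration
startup_message off      # skip the copyright splash screen
defscrollback 5000       # lines of scrollback kept per window
# pre-create a couple of titled windows at startup
screen -t mail 0
screen -t lynx 1
```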
Oddly enough I've even found that I occasionally start or re-attach to one 'screen' session on a remote system from within a local 'screen' session. When I do this I use the -e option to give that other (remote) screen session a different meta key. That's what I did in the sample command up there, with '-e^]]' setting it up so that [Ctrl][Right Square Bracket] was the meta key for that session. I did that while I was at work; before I left there I detached it. When I got home I re-attached it to this 'xterm' (where I'm typing right now). At first I just re-attached it with '-r' --- but then I realized that it was still using the other meta key, so I detached again and used '-r -e^aa' to reset it to the default (to which I'm more accustomed).
Since I introduced people at Linuxcare to this meme, I've found that many of them have come to view their "sessions" in a way that's similar to mine. We maintain our state for weeks or months by detaching, logging out, going elsewhere (into X, out of X, from work, from home, etc.), and always re-attaching to our ongoing sessions. It's a whole different way of using your computer.
So, try it. See if it does the trick for you.
From FRM on Fri, 14 Apr 2000
hi,
my sunos 4.1.4 kernel is already configed for the max 256 pty's (pseudo devices), but my users complain about running out of them often. do i need to add files to the /dev directory or recompile the kernel again...or????
any help much appreciated,
Randy A Compaq Computer Corp.
SunOS 4.1.4??? Hmm. Maybe you need an upgrade.
If 256 is the max for SunOS then I don't know what you'd do to get around that. Under Linux the max is about 2048. I suppose you could try making a bunch of additional device nodes and re-writing/re-compiling a bunch of your apps to open the new group of nodes rather than the old ones.
I'd say that SunOS 4.1.4 is showing its age. You might want to consider switching to OpenBSD, NetBSD, or Linux. (Note: SunOS was a BSDish UNIX, so you might be more comfortable with one of the BSDs than you would be with Linux. I don't know about binary compatibility for your existing applications.)
(Obviously I don't know much about SunOS. I'm the LINUX Gazette Answer Guy and my experience with other forms of UNIX is too limited and crufty to help you more than that).
From Alain Toussaint on Sun, 16 Apr 2000
Hello Answerguy,
last week,i installed debian (a really base installation) on a factory
fresh disk and then set out to compile Xfree86 4.0 (i did not have X previously),it did compile and work fine and i've been using it daily with the startx command but wenesday this week,the hard disk on my mother's computer died so i set out to build a linux boot disk containing an X server so she could log in my system and continue to do her work,i then tried xdm tonight (locally on my box first),xdm loaded,took my credential but it did not open a session both as a user (alain) and as root,i looked over in the .xession-errors file but i've came to no conclusion,here's the content of the file:
> /home/alain/.xinitrc: exec: xfwm: not found
> /home/alain/.xinitrc: xscreensaver: command not found
> Xlib: connection to ":0.0" refused by server
> Xlib: Client is not authorized to connect to Server
> xrdb: Can't open display ':0'
> Xlib: connection to ":0.0" refused by server
> Xlib: Client is not authorized to connect to Server
> xrdb: Can't open display ':0'
> Xlib: connection to ":0.0" refused by server
> Xlib: Client is not authorized to connect to Server
> xrdb: Can't open display ':0'
The first 2 errors don't worry me much (I have xfce installed, and as for xscreensaver, I don't want it; since I'll install KDE soon, I'm not pressed to fix the xfce script). But the Xlib errors worry me quite a bit. I then downloaded Debian's xdm package and uncompressed it in a temporary directory to compare the contents of both /etc/X11/xdm directories (mine as well as the Debian one), but I didn't find the root of the problem. Could you please help me?
Thanks a lot Alain Toussaint
Hmmm. It sounds like a problem with your .Xauthority file. You said you were using 'startx' before, and you're now trying to use 'xdm'. What happens if you go back and try 'startx' again?
'xdm' has a different way of handling the 'xauth' files (using the 'GiveConsole' and 'TakeConsole' scripts). Do a 'ps' listing and see if you have an X server running with arguments like:
X :0 -auth /var/xdm/Xauthority
There's supposed to be a "GiveConsole" script that does something like:
xauth -f /var/xdm/Xauthority extract - :0 | xauth -f ~$USER/.Xauthority merge -
(Which extracts an MIT Magic Cookie or other access token from xdm's Xauthority file and merges it into the "cookie jar" of your user. This can, in principle, allow multiple accounts on a host or across a network to access the same display server).
Anyway, there are many other tricks that you can use to troubleshoot similar problems.
I sometimes will start the X server directly (bypassing 'xinit', 'startx', and 'xdm'); then switch back to one of my text mode consoles (usually when I'm doing this I slap the old & on the end of the X server's command line, if I forget then I do the old [Ctrl]+[Z], 'bg' key and command). Next I 'export DISPLAY=:0' (or :1, or whatever), and start an 'xterm &'
At that point I switch back to the X virtual console, and use the resulting 'xterm' to work more magic. I may need to run my own 'xrdb' commands to merge in my own entries into the "X resources database" (think of that as being your X server's "environment" --- a set of name/pattern and value pairs which are used by X client programs to determine their default appearance, behaviour, etc).
I might also run a number of 'xset' commands to add to my font path and play with other settings.
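For concreteness, that step-by-step bring-up looks something like this (the server path and display number are illustrative; run the first command from a text console):

```shell
# Start a bare X server on display :1, in the background
# (bypassing xinit, startx, and xdm entirely)
/usr/X11R6/bin/X :1 &

# Switch back to a text console, then aim clients at that display
export DISPLAY=:1

# Put an xterm on it to work from
xterm &

# From inside that xterm: merge personal resources, inspect settings
xrdb -merge ~/.Xresources
xset q       # show current font path and other server settings
```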
Doing this sort of "worm's eye" inching through the labyrinthine X initialization process will usually isolate any problems that you're having. It's playing with X enough to realize that it's going through all of these steps that's so difficult.
I presume that you already know some of that (since you've already fetched your own XFree 4.0 sources and built them). It's clear that you're not a novice. Anyway, try looking for .Xauthority files. Allegedly if you simply delete them it leaves the X server wide open. I don't know if that's still true in XFree 4.0 but it seemed to work on XFree 3.x the one time I tried it.
Good luck on that new X server. I haven't grabbed it to play with it yet. I may wait until someone has a first cut of a Debian binary posted to "woody" (the current development/experimental branch of the Debian project).
Answered By Carl Davis on Mon, 17 Apr 2000
Thanks Jim, but I have solved the mystery................
The problem was that lilo does not like multiple "append" statements in /etc/lilo.conf. I fixed this by putting all the statements on the one append line, separated by commas, as in append="statement1, statement2, statement3". You may wish to add this snippet to the list of 2-cent tips.
Regards
Carl Davis
-----Original Message----- From: Carl Davis Sent: Thursday, April 13, 2000 9:12 AM To: 'answerguy@ssc.com' Subject: Linux
Hi Jim,
My compliments on a great column. I am running Linux (Mandrake 7) on a Celeron 466 with 128 Mb RAM. My problem is I cannot persuade Linux to recognise more than 64 Mb. I have tried adding the following to lilo.conf: append="mem=128M", to no avail. It still comes up with only 64 Mb. Various flavours of Windoze can see the full 128 Mb. Any ideas on what's going on here ?
Carl Davis
From Scott on Mon, 17 Apr 2000
Hello Answer guy,
The company I work for is going to start developing products for Linux soon. Part of my preparation for this is to find out about Linux filesystems. One thing I haven't been able to find is how to find out what filesystem type each mounted filesystem is using. Is there a command line utility that shows this? How do I accomplish this programmatically?
Here's a simple shell script that will parse the output from the 'mount' command and isolate the device name and type for each mounted filesystem:
mount | { IFS=" (,)"; while read dev x mpoint x type opts; do echo $dev $type; done }
Notice that this is one of my common "data mill loops" --- you pipe the output of some command into a 'while read ...; do' loop and do all your work in the subprocess. (When I'm teaching shell scripting one of the first points I emphasize about pipes is that a subprocess is implicitly created on one side of your pipe operator, or the other).
We also see that I'm using the variable "$x" to eat extra fields (the words "on" and "type" from the output of 'mount'). Finally, I'm using the shell-special IFS (inter-field separator) variable to add the characters "(,)" to the list of field separators. This means that each of the mount options --- read-only vs read/write, nodev, nosuid, etc. --- will be treated as a separate value. I could then, within my 'while' loop, nest a 'for' loop to process each option on each filesystem.
Creative use of IFS and these 'while read ...; do' loops can allow us to do quite a bit directly in shell without resorting to 'awk' and/or 'sed' to do simple parsing. Creative use of the 'case' command (which uses glob patterns to match shell variable values) is also useful and can replace many calls to 'grep'.
To get filesystem information from within a C program you'd use the 'statfs()' or 'fstatfs()' system calls. Read the 'statfs(2)' or 'fstatfs(2)' man pages for details. Fetch the util-linux sources and read the source code to the 'mount' and 'umount' commands for canonical examples of the use of these and related system calls.
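As a self-contained sketch of that field-splitting trick, here it is run against a couple of canned lines in the format 'mount' prints (the device names are made up):

```shell
#!/bin/sh
# Two sample lines in the format the 'mount' command prints:
#   device on mountpoint type fstype (options)
sample='/dev/hda1 on / type ext2 (rw)
/dev/hda2 on /home type ext2 (rw,nosuid)'

parse() {
    # Add "(", ",", and ")" to the field separators so the
    # parenthesized option list splits into separate words too
    IFS=' (,)'
    while read dev x mpoint x type opts; do
        echo "$dev $type"
    done
}

echo "$sample" | parse
# prints:
#   /dev/hda1 ext2
#   /dev/hda2 ext2
```

Running it against live data is just 'mount | parse'.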
Any help is appreciated!
Scott C
From Andrew T. Scott on Mon, 17 Apr 2000
Jim Dennis wrote: .....
and do all your work in the subprocess. (When I'm teaching shell scripting ...
Where can I sit in on this class?
-Andrew
[
Luckily for Linuxcare, its training department has a whole bunch of people in it (wave hi, everybody!) because they've got Jim assigned to Do Cool Stuff so he's not teaching right now. To be fair, they are only one among many training providers for Linux; you can see a decent listing at http://www.lintraining.com which redirects to Linsight's directory by location and news on the subject.
-- Heather. ]
From vg24 on Tue, 18 Apr 2000
Hi Answer Guy,
I had a few small questions about my Slackware Linux Box...
> (1) How do I get applications (like xmms) to startup automatically when I > start FVWM95 with a 'startx' command? I'm hoping to achieve something > similar to the "StartUp" menu in Win98.
Normally the 'startx' command is a shell script which looks for a ~/.Xclients file. That file is normally just another shell script. It consists of a series of commands that are started in the background (using the trailing '&' shell operator), and one command that is 'exec'd (started in the foreground, and used to replace the shell script's interpreter itself).
That foreground command is usually a window manager. In any event it becomes the "session manager" for the X server. When that program exits, the X server takes it as an indication that it should shut down.
So, the answer to your question is to add the appropriate commands to the .Xclients script in your home directory.
If you are logging in via 'xdm' (a graphical login program) then it may be that your system is looking for an ~/.Xsession script instead. I usually just link the two names to one file. However, you certainly could have completely different configurations based on whether you logged in via 'xdm' or used the 'startx' command.
Of course this is just a matter of convention and local policy. As I said, 'startx' itself is often a shell script. At some sites you use 'xinit' instead of 'startx' --- and others there are different ways to launch the X server and completely different ways to start the various clients that run under it and control it.
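For example, a minimal ~/.Xclients along those lines might look like this (the particular programs are just placeholders for whatever you want started):

```shell
#!/bin/sh
# ~/.Xclients -- clients to launch when the X session starts
xmms &                  # "startup" programs go in the background
xclock &
# The foreground, exec'd program becomes the session manager;
# when it exits, the X server shuts down:
exec fvwm95
```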
You mentioned fvwm95. This is one of several variants of the fvwm window manager. That's a traditional window manager. It just gives you a set of menus (click on the "root window" which other windowing systems call a "wallpaper" with each of your mouse buttons to see those), and a set of window decorations (resizing bars, corners, and title bars and buttons).
In recent years the open source community has created somewhat more elaborate and "modern" graphical user environments like: KDE, GNOME, and GNUStep. These are whole suites of programs which can be combined to provide the sort of look, feel and facilities that people have come to expect from MacOS, MSWindows, etc.
If you really want something "Like the StartMenu" in Win'9x then you may want to look at KDE or GNOME. These have "panels" which provide a much closer analogue to the environment that you are used to.
(Note: It is also possible to make either of these environments look completely different than MS Windows. They both support "themes" which are collections of settings, graphics, textures, icons, even sounds, that customize the appearance and operation of a Linux GUI. For more information and some nice screen shots of the possibilities, take a look at http://www.themes.org).
> (2) I recently upgraded my kernel and filesystem binaries from a 2.034 kernel > to a 2.2.13 kernel. I have XFree86 3.3.5 installed. I also upgraded > my motherboard from an Intel P75 to an AMD K6-450. I kept the 32 Megs > of RAM the same (a SIMM). However, now I notice that Netscape (and > others?) grind my hard drive more when I attempt to open new > browsers. I'm pretty sure I'm low on memory, but since I'm low in > cash, I'd rather not invest in a DIMM. I didn't have any swap space > set up, and don't now. I actually upgraded from netscape 4.1 to 4.6. > Could this be the problem?
Hmmm. Certainly it is likely that Netscape 4.6 is taking up more memory than 4.1. However I note an inconsistency here. You say you didn't have any swap space. If that was true then your shortage of memory should have caused failures when trying to launch programs --- rather than the increased disk thrashing. I think it's likely that you actually do have some swap space. You can use the following command to find out what swap partitions and files are active on your system:
cat /proc/swaps
... which should provide a list of any swap space that is in use. Of course the 'free' command will also summarize the available and used swap space. However, the /proc/swaps "pseudo-file" (node) will actually tell you "where" the swap is located.
Get the extra memory. It's not that expensive and it is the best performance upgrade that you can make for your system.
> (3) I was running GNOME/enlightenment, but the > GNOME panel would never come up automatically. How can I get the > GNOME panel to initialize, along with the GNOME file manager, (so > I can have the cool desktop icons)?
Hmmm. I'm not much of a GNOME or KDE person. Do you have the rest of GNOME installed? enlightenment is a window manager. It was the default window manager for GNOME --- but they are separate projects. So, do you have GNOME installed? Are you starting 'gnome-session' (running it from your .Xclients/.Xsession script as described above)?
Try that. I think there are now a couple of window managers that implement the GNOME hints and APIs --- so you don't have to use enlightenment.
> (4) Lastly, I wanted to trim my syslog and wtmp files. Is there any > way I can do this? Can I just tail -30 the last 30 lines into a > new file? I think the wtmp is binary, so any ideas?
You are correct: the wtmp and utmp files are binary. They cannot be trimmed with simple shell scripts and text utilities. The utmp file shouldn't grow (by much), but the wtmp will grow without bound. The usual way of dealing with wtmp is to simply rename the current one, 'touch' a new one, and forget about it.
That's fine for wtmp.
However, DON'T try that with /var/log/messages or the other syslog files. Those files are held open by syslogd. If you rename or delete them, they continue to grow.
Yes! You read that correctly, if you remove a file while some process has it open, then you haven't freed up any disk space! That's because the 'rm' command just does an 'unlink()' system call. When the last link to a file is removed, AND THE FILE IS NOT OPEN, then the filesystem drivers perform some housekeeping to mark the associated inode(s) as available, and to add all the associated data blocks to the filesystem's "free list." If the file is still open then that housekeeping is deferred until it is closed.
So the usual way to trim syslog files (since syslogd stays running all the time, and keeps its files open under normal circumstances) is to use 'cp /dev/null' or 'echo "" > ' to truncate them. Another common practice is to remove the files and use the 'kill -HUP $(cat /var/run/syslog.pid)' command to force syslogd to re-read its configuration file, close all its files, and re-open them.
However, none of that should be necessary. Every other general-purpose distribution has some sort of log rotation script that is run out of 'cron'. I'm pretty sure that Patrick (Volkerding, principal architect of Slackware) didn't neglect that.
(I should point out that I haven't used Slackware in several years. Nothing against it. I just have too few machines and too little time).
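A sketch pulling those recipes together (the syslogd PID file path shown is the common one, but it varies by distribution):

```shell
#!/bin/sh
# wtmp: safe to rotate by renaming, since login programs reopen it:
# mv /var/log/wtmp /var/log/wtmp.old && touch /var/log/wtmp

# syslog files: truncate in place instead of renaming, because
# syslogd holds them open:
# cp /dev/null /var/log/messages
# ...or, if you did move them aside, make syslogd reopen everything:
# kill -HUP $(cat /var/run/syslog.pid)

# Demonstration on a scratch file: truncation zeroes the file
# without replacing the inode any writer has open
log=/tmp/trim-demo.$$
echo "some old entries" > "$log"
cp /dev/null "$log"
wc -c < "$log"        # prints 0
rm -f "$log"
```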
Thanks for any help you can provide! Vikas Gupta
Well, I think this should nudge you in the right directions.
From Deepu Chandy Thomas on Tue, 18 Apr 2000
Sir,
I wanted to use the kermit protocol with minicom. I use rz sz for zmodem. Where do I get the files for kermit?
Regards, Deepu
Look at http://www.columbia.edu/kermit for canonical information about all the official Kermit packages, and at: http://www.columbia.edu/kermit/gkermit.html for information specifically about the GPL kermit package (which implements the file transfer protocol without the scripting, dialing and other features of C-Kermit).
(Note: C-Kermit can also be used as a 'telnet' or 'rsh' client with a scripting language, and many other useful features. It is a full featured communications package. Recent versions have even added Kerberos support!)
From Alex Brak on Fri, 14 Apr 2000
I'm having a problem with my linux box, and can't for the life of me figure out what's wrong. Here are the symptoms:
> server:~/scripts$ whoami
> alex
> server:~/scripts$ ls -al ./script
> -rwxr----- 1 alex home 43747 Apr 10 22:31 ./script*
> server:~/scripts$ ./script
> bash: ./script: No such file or directory
(note: the '*' at the tail end of the file listing is merely a symbol specifying that it's an executable file; it is not part of the filename)
Technically that "file type marker" is the result of using the -F option to 'ls'.
The most likely cause of this problem is the #! (shebang) line that should be at the beginning of your script. If that is incorrect then it is common to get this error, your shell is telling you that it can't find the script's interpreter.
If './script' was a binary executable then I'd also suggest looking for missing shared libraries. In fact, it's possible that your shebang line is pointing to some interpreter (/usr/local/ksh or something) which is present, and executable, but is missing some shared library. It is even possible that a shared library depends on another, and that this is what is missing.
As you can see from the above, I'm the owner of the file in question, and have execute permission on it. The file exists. Yet bash claims the file is not there. I've tried with shells other than bash (every other shell available on my system, including csh, tcsh, ash, and zsh). I've even tried executing the command as root, to no avail.
This exact same problem has arisen before with another script I wrote. I couldn't fix it then, either.
Check your shebang line. It should read something like:
#!/bin/sh
Note: there are NO SPACES in this line. Do NOT put a space between the #! and the interpreter's name.
I'd like to also note that this problem arises intermittently: just after finishing ~/scripts/script I created another script named "test", did chmod u+x on it and it executed just fine. ~scripts/script still refuses to execute, though :( Please note that I've tried renaming the file. I've also tried moving it to another location on the directory tree. None of these have helped.
A text file without any shebang line, which is marked as executable, will be executed through some magic that is dependent on the shell from which it is being invoked.
I'll probably get this wrong in the details but the process works something like this:
You issue a command. The shell tries to simply exec() it (after performing the command line parsing necessary to expand any file "globs" and replace any shell variables, command substitution operators, parameter expansion, etc). If that execution fails the shell may attempt to call it with $SHELL -c (or it might do something a bit different: that seems to be shell dependent).
Notice that the behaviour in the first case is well-defined. Linux has a binfmt_script module (usually compiled/linked statically into your kernel) which handles a properly formatted shebang line.
I have not experienced any other problems with my system that I'm aware of. Does anyone know what could be causing this, or how to fix the problem?
I'm running linux 2.2.14 on my Pentium 120, with a Slackware distribution. The file in question exists on the root partition, in an ext2 filesystem, which the kernel supports. If there's any other relevant information I can provide, please don't hesitate to ask.
If you were getting "operation not permitted" I'd suggest checking your mount options to see if the filesystem was mounted 'noexec' (which would be a very bad idea for your root fs). If you were getting a message like "cannot execute binary" then I'd think that maybe you had an old a.out binary and a kernel without the a.out binfmt support.
But I'm pretty sure that you're having a problem with your shebang line.
Thanks, Alex
From Alex Brak on Sun, 16 Apr 2000
Spot on. Many thanks.
Alex
From Credit Future Commercial MACAU Company on Wed, 5 Apr 2000
Hello sir
I installed Red Hat 6.1 on my system but it does not display more than 256 colours, although my VGA card is a 16 MB Voodoo. Why is that? Can you help me out here? I have tried startx -bpp16 but still the quality of my pics ("jpegs, bmps") isn't fine and they are displayed in dots; the same pic in Windows looks good.
Thanks, Faisal
From Heather on Wed, 5 Apr 2000
Hello sir
Heather isn't a masculine name in the U.S. I'll assume this is intended for the Answer Guy column, and give a first shot at answering it.
I installed Red Hat 6.1 on my system but it does not display more than 256 colours, although my VGA card is a 16 MB Voodoo. Why is that? Can you help me out here? I have tried startx -bpp16 but still the quality of my pics ("jpegs, bmps") isn't fine and they are displayed in dots; the same pic in Windows looks good. Thanks, Faisal
You have not specified what resolution under MSwin had the qualities you seek. Under X, you must run the correct video server to match your card if you want best performance, but you can nearly always get the screen working with a lesser server.
The VGA16 server only provides 16-color service; if you are stuck at 256 colors you are probably running a server (such as the SVGA one) at its default depth of 8 bits per pixel. Or, your /etc/X11/XF86Config file may be telling it to default to this level - the command to change the default is startx -- -bpp 16
with the space. Also, startx is a shell script, and launches a particular server... usually /usr/X11R6/bin/X which itself is a link to the real one... and so, you may be running a server you didn't intend.
Good luck with your JPEGs.
From fai, Answered By Heather Stern on Fri, 7 Apr 2000
Thanks for the help it worked Finally !!!!
Glad to hear it worked for you. Sorry I wasn't able to reply in email in a timely fashion - publishing deadlines, you know.
Can you help me a bit more? I want to start this command by default: startx -- -bpp 32. How can I do this?
thanks Faisal
One way would be to create .xserverrc in your own home directory; you'd have to specify your X server, but then you could pass it parameters too:
/usr/X11R6/bin/XF86_SVGA -bpp 32
Assuming that's the right server for you, of course. If startx is just plain "getting it right" except for that itty bitty detail of color depth, you could instead create a bash alias or put a one-liner shell script in your path. I like to keep such personal scripts in ~/bin (that's bin under my home directory). Name it something much shorter, like myX, and save some typing too.
[
So where was Jim on this one? Well, he liked my answer, and was busy with other questions and stuff to do.
-- Heather. ]
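The one-liner wrapper Heather describes could be as small as this (the name myX and the ~/bin location are just her suggestion):

```shell
#!/bin/sh
# ~/bin/myX -- start X at 32 bits per pixel
exec startx -- -bpp 32
```

Make it executable with chmod u+x ~/bin/myX and make sure ~/bin is in your PATH.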
Answered By Nadeem Hasan on Mon, 03 Apr 2000
Hi,
This is in reference to the above question in "The Answer Guy" and its answer. Using ipchains/ipfwadm is a bit of overkill to achieve this. The better way is to simply run the following as root:
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
This should cause the kernel to ignore all the ping ICMP requests.
Cheers, -- Nadeem
Just when you think you know everything.
From Nadeem Hasan on Tue, 11 Apr 2000
Hi,
The Gazette still has the old description about disabling ping echo responses. Does that mean it's better than what I suggested?
Nadeem
I don't have the power to change what I've published in previous months. Your (better) suggestion on how to disable the Linux kernel's ICMP echo responses (to 'ping' requests) should appear in next month's issue.
Now, what was that magic /proc node again?
Ahh, here it is:
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
... I'd never remember that, but the node is there and I'd recognize the meaning from the name. (So it's in my passive rather than active vocabulary).
There are some other interesting nodes there --- and I think the one about "icmp_echo_ignore_broadcasts" looks useful.
It would be neat if someone wanted to write up a HOWTO on "Useful /proc Tips and Tricks" (hint, hint). I've done some performance tuning by tweaking and playing with some of the entries under /proc/sys/vm (the virtual memory sysctl's), and I know others have even done better than I could (back at Linuxcare, I had to call on our real experts to help me out awhile back for one gig).
I guess the tips would fall into two or three general categories: robustness, security, and performance. For example the /proc/sys/kernel/cap-bound (bounding capabilities set) can be modified to secure some facilities even from a subverted 'root' process (like the BSD securelevel features), and I guess that /proc/sys/vm/overcommit_memory might allow one to prevent the system from overcommitting (providing more robust operation at the expense of reducing our capacity to run multiple concurrent "memory hogs" that ask for more core than they actually need).
A good HOWTO would be organized by objective/situation (Increasing File Server Performance, Limiting Damage by Subverted and Rogue Processes (Crackers), etc.) and would include notes on the tradeoffs that each setting entails. For example one might disable ICMP responses (for security?), but one should be aware that anyone who has a legitimate reason to access ANY other service on your system might want to 'ping' it first to ensure that it is reachable before they (or their programs) attempt to access any other service on it. (In other words it makes no sense to disable ICMP responses on a web, mail, DNS, FTP or other public server).
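A sketch of what a couple of those tweaks look like in practice (the settings shown are examples, not recommendations; the writes need root, and exact node names depend on your kernel version):

```shell
#!/bin/sh
# Security: stop answering ICMP echo ('ping') requests entirely
# (remember the tradeoff noted above for public servers)
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all

# ...or only ignore broadcast pings (defeats "smurf" amplification)
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

# Inspect a current setting (readable without root) before changing it:
cat /proc/sys/net/ipv4/icmp_echo_ignore_all
```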
Unfortunately I don't have the time, nor nearly enough expertise, to write this. There are already some notes in the Linux kernel source trees under /usr/src/linux/Documentation/sysctl/, and I remember that someone is working on a tool to automate some of this; PowerTweak/Linux (http://linux.powertweak.com/news.html) comes to mind.
Anyway, enough on that.
From Apichai T. on Mon, 03 Apr 2000
Dear sir,
May I ask for your advice on the steps to set up a Linux box so that it is possible to remotely execute graphical applications?
Thanks and Best regards, Jing
Here's a couple of HOWTO and mini-HOWTO links:
- Remote X Apps mini-HOWTO
- http://www.linuxdoc.org/HOWTO/mini/Remote-X-Apps.html
(I've copied its author, Vincent Zweije, on this reply).
I don't recommend using his example shell script from section 6.2:
#!/bin/sh
cookie=`mcookie`
xauth add :0 . $cookie
xauth add "$HOST:0" . $cookie
exec /usr/X11R6/bin/X "$@" -auth "$HOME/.Xauthority"
The problem here is that the cookie variable is exposed on these command lines (which is world readable via /proc and the 'ps' command). It may also be exposed if it is exported to the environment. Safe handling must be done through pipes or files (or file descriptors). Note that the window of exposure is small --- but unnecessary. Read the 'xauth' man page for more details.
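One way to keep the cookie off any command line is to feed the commands to 'xauth' on its standard input; here's a sketch (check the xauth man page to confirm this stdin behavior on your system):

```shell
#!/bin/sh
cookie=`mcookie`
# With no command given on its own command line, xauth reads
# commands from stdin, so the cookie never appears in 'ps' output:
xauth -q <<EOF
add :0 . $cookie
add $HOST:0 . $cookie
EOF
```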
Better yet: Use ssh! (Read Vincent's HOWTO for more on that).
I also notice that Vincent doesn't distinguish between the session manager and the window manager. In practice they are almost always the same program. However here's the difference:
The session manager is the one program that is started in the foreground during the startx or xinit process. The X server tracks this one process ID. When it dies, the X server takes that as a signal to shutdown. Any program (an 'xterm', a copy of 'xclock' or whatever) can be the session manager.
The window manager is the program that receives events for the "root window" (the X Window System term for what other windowing systems call the "wall paper" or "desktop" or "backdrop"). There's also quite a bit more to what the window manager does. You can only run one window manager on any X server at any time. Window managers implement a number of APIs that are unique to them --- so you can't just use "any" X program as your window manager.
It's a subtle distinction since almost everybody uses their window manager as their session manager.
Note: If you're troubleshooting X connections keep in mind that the client must be able to connect to the server via the appropriate socket. For example, to connect to the server on :0 (localhost/unix:0) the program must be able to access the UNIX domain socket (usually in sockets that are located in /tmp/.X11-unix/) Obviously chroot() jails could interfere with that (though localhost:0, which is the same as localhost/tcp:0 should still work).
A subtle and rare problem might be if someone were to try running X after building a kernel without support for UNIX domain sockets. It's possible to build a Linux kernel with full support for TCP/IP and yet leave out the support for UNIX domain sockets.
Obviously when looking at Internet domain sockets (TCP/IP) any of the usual routing, addressing, and packet filtering issues can interfere with your clients attempts to connect to port 6000 (or 6001, 6002, etc) on the X server host.
For a little more on remote access to X server look at VNC (Virtual Network Computing from AT&T Laboratories Cambridge: http://www.uk.research.att.com/vnc) (VNC was originally developed at the Olivetti Research Laboratory, which was later acquired by AT&T).
You don't need this to just run X clients on your X server. However, it's useful to learn about VNC in case you need some of the special features that it provides.
Another good site for finding links to lots of information about X is at Kenton Lee's "X Sites" (http://www.rahul.net/kenton/xsites.html) There are about 700 links located there!
Note that while X is currently the dominant windowing system for Linux there are other efforts out there including "Berlin" (http://www.berlin-consortium.org) and the "Y Window System" (http://www.hungry.com/products/Ywindows). I don't know how these projects are going. I see that the Berlin home pages have been updated recently while the Y Window System pages seem to have been stale since March of 1998.
Anyway, good luck.
This configuration may not be immediately obvious to the unseasoned Linux user, so I thought it would make a good two cent tip. Furthermore, the information does not seem readily available (at least I had trouble finding it).
To enable the mouse wheel, place the following line in XF86Config, in the Pointer section:
ZAxisMapping 4 5
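For context, here is what a complete Pointer section might look like with that line in place. This is only a sketch: the Protocol and Device values are examples that will vary with your setup (wheel mice generally need the IMPS/2 protocol):

```
Section "Pointer"
    Protocol     "IMPS/2"
    Device       "/dev/mouse"
    ZAxisMapping 4 5
EndSection
```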
This will allow emacs, xterm, etc. to receive mouse wheel events. For Netscape, also add the following lines to .Xdefaults (be sure there is only a newline after the "\" that ends a line):
Netscape*globalTranslations: #override \
    <Btn5Up>: LineDown() \n\
    <Btn4Up>: LineUp() \n
Hello. For those who change HDDs very often, here is a small, ugly but working utility which I wrote. It detects the filesystem types of all accessible partitions and checks/mounts them in directories named after the device (hda7, hdb1, hdb3, sda1, ...).
So you will never have to type sequences of fdisk, fsck, mount, df, ...
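The utility itself was sent as an attachment rather than printed here, but the core idea can be sketched in a few lines of shell. This is my sketch, not the author's code; the /mnt base directory and the hd/sd device-name filter are assumptions:

```shell
#!/bin/sh
# Sketch: enumerate numbered IDE/SCSI partitions from /proc/partitions
# and mount each one under /mnt, in a directory named after the device.

list_partitions() {
    # field 4 of /proc/partitions is the device name; keep only
    # entries that end in a digit, i.e. partitions like hda7 or sda1
    awk 'NR > 2 && $4 ~ /^[hs]d[a-z][0-9]+$/ { print $4 }' "${1:-/proc/partitions}"
}

mount_all() {
    list_partitions | while read part; do
        mkdir -p "/mnt/$part"
        # -t auto lets mount probe the filesystem type itself
        mount -t auto "/dev/$part" "/mnt/$part"
    done
}
```

Run mount_all as root; add an fsck call before the mount if you want the checking step as well.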
Hello,
You may be interested in checking the site "Traceroute Lists by States. Backbone Maps List": http://cities.lk.net/trlist.html
There you can find many links to traceroute resources, sorted by the following categories:
There is also the List of Backbone Maps, sorted by geographical location, plus some other info about backbones.
Hi
I would really like to see case studies on switching to Linux from other platforms.

Recently there was an article in LG about a big Polish hotel which did just that and is pretty happy about it. Check it out.
We currently use Windows NT Terminal Server Edition. How hard would it be to go to Linux?

You'll have to learn a little. On NT you click several times, use some wizards, reboot from time to time, and everything is fine. If it isn't, reinstall and it will be. Using Linux means you also know what you are doing. You'll click more times, even use the keyboard :), and ask on IRC/mailing lists/news about the things you can't figure out. But you'll have a strong, fast, secure platform.
We have two TSE servers with approximately 30 users each logged in on average. In total, we have about 130 users, but it is a manufacturing plant and many people share terminals.

Sounds to me like a perfect case for Linux :)
We use Citrix Metaframe for load balancing and failover. Is there a product for Linux that offers this option?

Ah! The Windows world, with lots of _great_ products which improve the performance of your system, maintain it, etc. etc.
You don't need these on Linux, because it does the job by itself. With under 200 users, I don't think you'll need more than one server. Sure, that server won't be a Pentium PC. I'm not into hardware, so I can't say too much here, but there are many options.
Dependability. I have to reboot my TSE servers once a week. Last week a new HP printer driver caused about 40 blue screens of death before we figured out what was going on.

I've heard that at the Windows 2000 presentation by Bill Gates, he pointed out that there were machines running for as long as 90 days without a reset. WOW! (Not to mention the _new_ Micro$oft _invention_, the symbolic link.)
Will Linux be better?

You bet. See some of the introductory articles about Linux. Once you start using it, you'll love it. Try and see ;)
Recently we installed (at www.lug.ro) PHP and PostgreSQL and configured the web server (Apache), without rebooting and remotely. And no wizards, just the shell, from an old university text terminal. It took about 15 minutes until I had it running. That's cool, isn't it?
About uptime, here's the output of the uptime command on a server here in Romania:
avva:~$ uptime
10:22am up 435 days, 17:27, 7 users, load average: 1.16, 1.10, 1.02
avva:~$
That ain't much; check www.uptimes.net for better results.
Office productivity software. If we are used to MS Office, what will it be like going to something like StarOffice?

Better :) I use it from time to time to make charts, short papers, etc. If you're really into publishing, LaTeX is the answer. And don't forget the great free database engine, PostgreSQL. (You don't keep data in an Excel sheet, do you?) How about a web/desktop interface for it? No problem!
Anti-virus programs? Is there an antivirus program to scan mail stores (sendmail POP server)?

If you use Linux and no Windows at all, you won't need one. Besides a Linux system being harder to infect because of file permissions and ownership, there's a fundamental problem with anti-virus software: the viruses must appear first, and only _later_ the anti-viruses. I can't afford to wait that delta t, either at home or at work, so I use Tripwire, a utility which scans your filesystem and, based on rules you define, builds a database of checksums. If a file gets modified, you are notified according to the rules. There's a free version of it and a better commercial one (see their website for more info). I use it as an antivirus, but it's really a network security tool.
Security. How good is Linux at keeping users honest? With TSE you can delete or overwrite files in the system directories as a user. Can't delete a system file?

Linux is Unix-like, and Unix was designed from the ground up as a networked, multi-user, multitasking system, so it's extremely good at this. On my PC at home I made accounts for my relatives (they didn't know how to use a computer initially), and I can happily leave them alone. Besides the graphical interface being in my native language (and I'm proud I contribute to that), in the worst case they can delete their own personal files. They don't have the rights for anything else :), not even each other's files. You can also make groups for different departments; each user will have his/her own account, which can be part of one or more departments.
As I said, once you get to know Linux better, you'll delete even your Windows backups :) You can always find help in the Linux community (but make sure you Read Those Fine Manuals first), or even pay for commercial support.
See ya around !
Marius
--
Your mouse has moved. In order for the change to take effect,
Windows must be restarted. Reboot now ? [OK]
http://www.linuxstart.com/~marius
Another place to look for speed improvements is custom math libraries. A lot of FORTRAN code is concerned with linear algebra, and for speeding up things like LAPACK the place to start is probably getting some kind of BLAS. There exist a couple of hand-tuned BLAS libraries for some Intel 386-family processors running Linux, but the best place to get going looks to be something called ATLAS. This is a package which tries to generate an optimal BLAS Level 1, 2 and 3 library for your machine. It comes pretty close to hand-tuned BLAS in assembler, and can be called from C or FORTRAN.
Gord
I used the "free" command on both of my computers, and found that they were using only 64 MB of memory in Linux. I had to use the "append" addition in LILO to get Linux to see all of it. It appears to be a bios problem, and I like this simple solution. As an aside, I never noticed a shortage of available memory problem before the fix. Linux worked quite well with 64 MB and a 64 MB swap partition. The swap partition was never used, to my knowledge, but the manuals threatened death, doom, and destruction if I omitted it.
A quick note: The memory used by the on-board video with the AMD K6-2 processor cannot be used by Linux. I have 160 MB of memory on that computer. Attempting to use all of it caused a kernel panic at boot time. I use 8 MB for graphics. Changing the append statement to use 152 MB worked. I am sure happy about saving that boot floppy.
I used to have to use the 'append "mem=8M"' statement, but now I don't have to. Which kernel are you using? It may have been something that changed in the 2.2 kernels.
Two computers in use: (1) 300 MHz K6-2, 160 MB memory, S.u.S.E. 6.1, upgraded to 6.3 with a giveaway CD-ROM, 1999 BIOS that switches to an Adaptec 1520 BIOS during boot. (2) 100 MHz Pentium, 96 MB memory, Sam's version of Red Hat 6.0, 1996 BIOS that switches to an Adaptec 2940 BIOS during boot.
I like my computers, and am not concerned about needing an extra tweak to reach all the memory. Both machines are Lilo dual boot to Linux and M$. They are (ab)used frequently, inside and out. Additional operating systems that are now on a bookshelf include OS/2, MS-DOS since 1.0, and Coherent 2 through 4. I will probably upgrade Someday Soon, and look forward to having "free" see all of my memory without an "append" line in Lilo.
Which kernel is it though? "uname -a" will tell you.
1. K62, 2.2.13 #1 kernel ( remember, this updates S.u.S.E. 6.1, with a lower 2.2 kernel)
2. 586, 2.2.5-13 kernel
Sir,
After chastising Linux Gazette about not needing to put an append "mem=xxM" statement in lilo.conf, it had to happen! I have been using socket seven motherboards for several years, and all currently have 128 MB. They all run various versions of Linux: Red Hat 5.2, Mandrake 6.1 and SuSE 6.3. I have never had to put an append statement in lilo.conf, or use mem= at the LILO prompt, to get all 128 MB recognized.
That is, until now! I just bought a FIC SD-11 motherboard, an Athlon 700 and a new case. I took all the components -- memory, drives, NIC, video card -- from one of my socket seven systems and installed them in my new Athlon system. (Please note: same drive with SuSE 6.3 already installed.) Everything seemed fine until I ran free and discovered I only had 64 MB of memory! So now I had to put an append statement in lilo.conf to get all 128 MB recognized. It's definitely a "BIOS thing".
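For anyone hunting for the exact syntax: the append line goes inside the image stanza of /etc/lilo.conf, something like the fragment below. The kernel path, label and root device are examples only -- adjust them to your system, and remember to rerun lilo afterwards:

```
image=/boot/vmlinuz
    label=linux
    root=/dev/hda1
    append="mem=128M"
```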
BTW: When I upgrade a motherboard, I NEVER need to reinstall Linux but almost always need to reinstall Win 9x.
This isn't exactly a burning question but I'd be interested in knowing if anyone in the Gazette readership knows of a free ISP that supports Linux. All of the ones I've checked out so far require Windows and/or Internet Explorer.
Check out FreeWWWeb (http://www.freewwweb.com). They are the only free ISP I'm aware of which uses a stock PPP dialup and has no specific browser requirement.
Although they have a link for downloading software, you want the link for signing up if you already have a browser.
Try the following link: http://www.linux.ie/misc/oceanfree.html
Thus spoke Nick Bailey <N.J.Bailey@leeds.ac.uk>:
I have a question about connectivity with/via Linux. I need to pull a load of stuff off the Psion, and this is done by getting the files converted to tab-separated values by some utilities I've got. I have read and (I think) understood how to access a Palm database from a program running on it, but there's not a lot of material on how to get a file full of TSVs onto the Palm. I've read the connectivity HOWTO, and I also understand how to upload a Palm database, but it's the format conversion between plain text and Palm database which looks hard to me. There's no obvious tool to do it, and I can't see how in the docs or this book I bought. Maybe it will become obvious when I unpack the dev tools?
Nah, this is a traditional DB/spreadsheet conversion issue. You need to go from TSVs to CSVs (comma-separated values). The pilot-link software can upload some items as long as they are in CSVs. Getting from TSVs to CSVs is the hard part. If you're moderately good at regular expression handling, you could probably whip up a perl script to do it. CSVs have commas between fields, and any fields that have embedded commas are enclosed in double quotes. I think you can actually get away with using double quotes around all fields, but I'm not sure about that. I only did enough work with them to get xnotes working properly (and I use the term "properly" very loosely here).
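A perl script would certainly work; for what it's worth, the conversion can also be done with a small sed filter, taking the quote-everything approach mentioned above. This is my sketch, not part of pilot-link:

```shell
#!/bin/sh
# Sketch: convert tab-separated values on stdin to CSV on stdout by
# quoting every field; embedded double quotes are doubled, per the
# usual CSV convention.
tab=$(printf '\t')
tsv2csv() {
    sed -e 's/"/""/g' \
        -e "s/$tab/\",\"/g" \
        -e 's/^/"/' \
        -e 's/$/"/'
}
```

For example, tsv2csv < psion-export.tsv > palm-import.csv (the filenames are hypothetical).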
I'm intending to use gcc to port over a Psion app, Vorg: http://www.polonius.demon.co.uk/Nick/Psion/software.html (this page disappears soon when I change ISP). I'm also wanting to put a CD database on the Palm Pilot which has a strange and complex structure, so I'll be writing an application to support that. I've already bagged the gcc port and XCoPilot; I was wondering what else you would recommend?
To be honest, I don't write apps for the Pilot. It was something I wanted to do, but there just wasn't enough time in the day to get seriously involved in it. One has to pick certain specialties in today's world -- mine turned out to be graphics. Ah well. Maybe someday.
Thanks a lot for your help, however short. Even "look at the xnotesplus source here" would be a help.
Ick - the xnotes source probably wouldn't help much. It's just an ugly wrapper around pilot-link tools (yep, a bunch of execv's and the like). But I think if you can write your parser to go from TSV's to CSV's you'll be on your way.
Thus spoke Jon D. Slater <JSlater@qualcomm.com>
I've successfully downloaded and installed pilot-link-0.9.0-8, but I didn't find much documentation, and I'm having trouble communicating with my Palm. I have both a serial cradle and a USB cradle, and I can't get either one to work. I'm using Red Hat 6.1 on a P-II/333 machine. Do you know of a good resource for connecting to my PC?
There is a fair amount online. I know I wrote an article for Linux Journal a year or two back about how to use the pilot-link software with your Pilot's serial cradle. Search the Linux Journal site (www.linuxjournal.com) -- I know it's online there somewhere. I think they had another article on the same subject earlier this year or maybe late last year.
I can't speak for the USB cradle, since I've never tried one of those, but the serial cable is pretty straight forward. First make sure you know which serial port you're connecting to. Then set a couple of environment variables from the command line:
% export PILOTPORT=/dev/ttyS0
% export PILOTRATE=9600
The first one tells the pilot-link software what serial port to use. The second tells it what speed to transfer data at. I think you can set this to a higher baud rate. I just happen to use this one because I know it works for me.
Now you can just run the pilot link commands:
% pilot-xfer -b /tmp/pilot-backup
That will back up your Pilot to a directory called "/tmp/pilot-backup". I can't remember if it will create the directory for you, so you're better off making the directory yourself beforehand.
I have just installed version 6.1 and set up my modem to dial out to my ISP. However, when I log on as a user and press KDE>Internet>kppp, a pop-up box opens and wants me to enter the root password! This does not seem right. Is there a way to avoid having to enter the root password when logged on as a non-root user?

The way I've solved this one was to modify the /etc/pam.d/kppp file to include the following line at the beginning of the file:
auth sufficient /lib/security/pam_console.so
[This survey of 3D graphics programs is taken from a series by Paulo Baptista of OLinux. You will see that some of the raytraced images are breathtaking. To preserve the integrity of the article, I have left it in its original format. Please disregard the "Part II" and the citations of other articles: we will have more articles by Mr Baptista, but not necessarily in the same order as the original series. More articles can be found on the OLinux site, http://www.olinux.com.br (Portuguese), which also has many other resources for Linux users. Work is underway on an English-language site; the Interviews section (http://www.olinux.com.br/interviews) is already operational. -Ed.]
This document is intended as a tutorial, showing how to write a simple assembly program for several UNIX operating systems on the IA32 (i386) platform. The included material may or may not be applicable to other hardware and/or software platforms. The document explains program layout, system call conventions, and the build process. It accompanies the Linux Assembly HOWTO, which may interest you as well, though it is more Linux specific.
v0.3, April 09, 2000
Copyright © 1999-2000 Konstantin Boldyshev. Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.1 or any later version published by the Free Software Foundation.
The latest version of this document is available from http://linuxassembly.org/intro.html. If you are reading a copy that is a few months old, please check the URL above for a new version.
You will need several tools to play with the programs included in this tutorial.

First of all you need an assembler (compiler). As a rule, a modern UNIX distribution includes gas (the GNU Assembler), but all examples specified here use another assembler -- nasm (the Netwide Assembler). You can download it from the nasm page; it comes with full source code. Compile it, or try to find a precompiled binary for your OS; note that several distributions (at least the Linux ones) already include nasm, so check first.

Second, you need a linker -- ld -- since nasm produces only object code. Any distribution should include ld.

If you're going to dig in, you should also install the include files for your OS and, if possible, the kernel source.

Now you should be ready to start, welcome.
Now we will write our program, the classical "Hello, world" (hello.asm). You can download its sources and binaries here. But first, let me explain a few basics.
Unless a program just implements some math algorithm in assembly, it will deal with such things as getting input, producing output, and exiting. For that it needs to call an OS service. In fact, programming in assembly language is much the same in different OSes, except where OS services are touched.
There are two common ways of performing a system call in a UNIX OS: through the C library (libc) wrapper, or directly.
Using or not using libc in assembly programming is more a question of taste/belief than anything practical. Libc wrappers exist to protect programs from possible system call convention changes, and to provide a POSIX-compatible interface if the kernel lacks one for some call. However, a UNIX kernel is usually more or less POSIX compliant, meaning that the syntax of most libc "system calls" exactly matches the syntax of the real kernel system calls (and vice versa). The main drawback of throwing libc away is that you lose several functions that are not just syscall wrappers, like printf(), malloc() and similar.
This tutorial will show how to use direct kernel calls, since this is the fastest way to call a kernel service; our code is not linked to any library, and communicates with the kernel directly.
The things that differ between UNIX kernels are the set of system calls and the system call convention (though as they strive for POSIX compliance, there's a lot in common between them).
Note for (former) DOS programmers: so, what is a system call? It is best explained this way: if you ever wrote a DOS assembly program (and most IA32 assembly programmers did), you remember the DOS services int 0x21, int 0x25, int 0x26, etc. This is what can be designated a system call. However, the actual implementation is absolutely different, and this doesn't mean that system calls are necessarily done via some interrupt. Also, DOS programmers quite often mix up OS services with BIOS services like int 0x10 or int 0x16, and are very surprised when they fail to perform them in UNIX, since these are not OS services.
As a rule, modern IA32 UNIXes are 32-bit (*grin*), run in protected mode, have a flat memory model, and use the ELF format for binaries.

A program can be divided into sections (or segments): .text for your code (read-only), .data for your data (read-write), and .bss for uninitialized data (read-write). There can actually be a few others, as well as user-defined sections, but there's rarely a need to use them and they are outside our interest here. A program must have at least a .text section.

Ok, now we'll dive into OS-specific details.
Ok, now we'll dive into OS specific details.
System calls in Linux are done through int 0x80. (Actually there's a kernel patch allowing system calls to be done via the syscall (sysenter) instruction on newer CPUs, but this is still experimental.)
Linux differs from the usual UNIX calling convention and features a "fastcall" convention for system calls (it resembles DOS). The system function number is passed in eax, and the arguments are passed through registers, not the stack. There can be up to five arguments, in ebx, ecx, edx, esi and edi respectively. If there are more than five arguments, they are simply passed through a structure as the first argument. The result is returned in eax; the stack is not touched at all.

System call function numbers are in sys/syscall.h (actually in asm/unistd.h), and some documentation is in the 2nd section of the manual (e.g. to find info on the write system call, issue man 2 write).
There are several attempts to maintain up-to-date documentation of the Linux system calls; examine the URLs in the references.
So, our Linux program will look like:
section .text
global _start ;must be declared for linker (ld)
msg db 'Hello, world!',0xa ;our dear string
len equ $ - msg ;length of our dear string
_start: ;we tell linker where is entry point
mov edx,len ;message length
mov ecx,msg ;message to write
mov ebx,1 ;file descriptor (stdout)
mov eax,4 ;system call number (sys_write)
int 0x80 ;call kernel
mov ebx,0 ;exit code
mov eax,1 ;system call number (sys_exit)
int 0x80 ;call kernel
As you will see further on, the Linux syscall convention is the most compact one.
Kernel source references:
FreeBSD has the "usual" calling convention, where the syscall number is in eax and the parameters are on the stack (the first argument is pushed last). The system call is performed through a call to a function containing int 0x80 and ret, not just int 0x80 itself (the return address MUST be on the stack before int 0x80 is issued!). The caller must clean up the stack after the call. The result is returned in eax, as usual.

There's also an alternative way of using a call 7:0 gate instead of int 0x80. The end result is the same, apart from an increase in program size, since you will also need to push eax first, and these two instructions occupy more bytes.
System call function numbers are in sys/syscall.h, and documentation is in the 2nd section of the manual.
Ok, I think the source will explain this better:
Note: the included code may run on other *BSDs as well, I think.
section .text
global _start ;must be declared for linker (ld)
msg db "Hello, world!",0xa ;our dear string
len equ $ - msg ;length of our dear string
_syscall:
int 0x80 ;system call
ret
_start: ;tell linker entry point
push dword len ;message length
push dword msg ;message to write
push dword 1 ;file descriptor (stdout)
mov eax,0x4 ;system call number (sys_write)
call _syscall ;call kernel
;actually there's an alternate
;way to call kernel:
;push eax
;call 7:0
add esp,12 ;clean stack (3 arguments * 4)
push dword 0 ;exit code
mov eax,0x1 ;system call number (sys_exit)
call _syscall ;call kernel
;we do not return from sys_exit,
;there's no need to clean stack
Kernel source references:
The BeOS kernel uses the "usual" UNIX calling convention too. The difference from the FreeBSD example is that you call int 0x25. For information on where to find the system call function numbers and other interesting details, examine asmutils, especially the os_beos.inc file.

Note: to make nasm compile correctly on BeOS you need to insert #include "nasm.h" into float.h, and #include <stdio.h> into nasm.h.
section .text
global _start ;must be declared for linker (ld)
msg db "Hello, world!",0xa ;our dear string
len equ $ - msg ;length of our dear string
_syscall: ;system call
int 0x25
ret
_start: ;tell linker entry point
push dword len ;message length
push dword msg ;message to write
push dword 1 ;file descriptor (stdout)
mov eax,0x3 ;system call number (sys_write)
call _syscall ;call kernel
add esp,12 ;clean stack (3 * 4)
push dword 0 ;exit code
mov eax,0x3f ;system call number (sys_exit)
call _syscall ;call kernel
;no need to clean stack
Building the binary is the usual two-step process of compiling and linking. To make a binary from our hello.asm we must do the following:
$ nasm -f elf hello.asm   # this will produce the hello.o object file
$ ld -s -o hello hello.o  # this will produce the hello executable
That's it. Simple.
Now you can launch the hello program by entering ./hello; it should work. Look at the binary size -- surprised?
I hope you enjoyed the journey. If you have become interested in assembly programming for UNIX, I strongly encourage you to visit Linux Assembly for more information, and to download the asmutils package; it contains a lot of sample code. For a comprehensive overview of Linux/UNIX assembly programming, refer to the Linux Assembly HOWTO.
Thank you for your interest!
Carol's new colleague Violet--Vi for short:
More HelpDex cartoons are on Shane's web site, http://mrbanana.hypermart.net/Linux.htm.
A few years ago, Miguel de Icaza started GNOME, inspired by GNU and its General Public License. GNOME has since grown into a project with hundreds of volunteers around the world. Recently, he started Helix Code, a company dedicated to providing the latest "features, improvements, and enhancements" of the Helix GNOME distribution.
OLinux: Where were you born? How old are you? Where did you study and graduate from college?
Miguel de Icaza: I was born in Mexico City, and I am 27 years old this year.
I studied at the National Autonomous University of Mexico (UNAM); I enrolled in the math major, but I dropped out of college half-way through the degree.
OLinux: How did you come up with the idea of the GNOME Project? Was it a sort of insight?
Miguel de Icaza: There were various things that played a role on this.
A few weeks before the GNOME project was launched, I had visited Microsoft, and got a chance to learn about their component architecture (COM and ActiveX) and how it worked.
The idea of a component system fascinated me, and Federico and I started working on speccing out this project. We got the name (GNOME) and did some work on it, but it was not finished, as I was working on the Linux/SGI port with Ralf, and Federico was the GIMP maintainer back then.
Then KDE appeared on the scene, and we were all very excited about the project, and although the license was known to have a few problems, we did not pay attention initially. Later, when talking to Richard Stallman and Erik Troan, we realized how bad the license for Qt (the underlying library for KDE) was.
Also, Gtk+ was the GUI toolkit for the GIMP, one of the most successful Free Software/Open Source projects ever done, and many contributors were joining the Gtk+ effort.
OLinux: What is the main purpose of Gnome and how far is it to achieve its goals?
Miguel de Icaza: GNOME is trying to bring new and missing technologies to free systems, in particular GNU/Linux, but GNOME works on pretty much every Unix out there.
We have gone a long distance. GNOME currently provides:
1. A user friendly desktop, with the common abstractions that people expect from a desktop.
2. A number of tools to get work done on a computer by regular users (non hacker, non-sysadmin kind of users).
3. Productivity applications (The drawing program Gimp, the Diagram program Dia, the Gnumeric spreadsheet, the Vector drawing program SodiPodi, the Gnome Calendar)
4. Development tools: GNOME ships with various development tools for rapid application development (Glade, a GUI designer), memprof (for profiling, detecting memory leaks and improving memory allocation patterns in applications)
5. Development libraries: we provide libraries for various tasks: GUI application creation (the Gtk+ library); an application framework (the GNOME libraries); XML parsing; uniform access to resources; asynchronous IO; a unified printing architecture; a CORBA implementation; unified image loading and manipulation; and more.
6. The Bonobo component architecture: a system for creating reusable components. The component architecture is built on top of CORBA, and it enables people to create compound documents.
This is probably one of the most exciting projects in GNOME right now, as it will help us create more complex applications that are easier to use, easier to maintain, easier to grow and will enable more people to join the project.
Bonobo is what GNOME was originally thought to be.
7. A team of contributors devoted to making better software.
OLinux: GNOME has grown to be a big project with hundreds of people involved and many programs being developed. How do you manage to control all of that? How many coordinators are directly involved with GNOME's daily activities? How many people are involved, counting developers and volunteers? Give those numbers for the past few years. Are there companies or organizations that sponsor and support GNOME?
Miguel de Icaza: The GNOME project consists of various "subprojects". Each subproject is managed by a different person, and the structure is pretty much the same as the one used in the Linux kernel: people submit patches to the upstream maintainer, where the changes are reviewed and installed into the system if they are considered good.
There are about 400 people with access to the CVS repository these days. Contributors range from documenters, to translators to programmers, and system administrators.
There are a few companies shipping GNOME with their operating system distributions (Turbo Linux, Red Hat, SuSE) and they do fund some of the work that goes into GNOME.
On the other hand, there is now a growing GNOME industry. First of all, my company Helix Code has been working on providing support services for GNOME, as well as developing high-end, high-quality productivity applications. We are currently developing and improving the Evolution groupware suite and the Gnumeric spreadsheet.
Evolution is a pretty ambitious project for providing a uniform, and powerful interface to the information a user has to handle. The idea is to provide ways for users to find, and keep track of all their information sources: mail, contacts, chats, instant messaging, paging services and more. With a pluggable architecture based on Bonobo, the system can be extended to handle all sorts of information that needs to be managed.
Evolution is also intended to be a client for Lotus Notes and Microsoft Exchange servers to enable easy corporate deployment of free-software desktop systems.
Gnumeric is the other project we are developing: a spreadsheet that is intended to be a replacement for the proprietary offerings, providing all the features people expect from this sort of tool.
Eazel is another company working on GNOME, they are working on the new file manager for GNOME.
And there are a few other GNOME-based startups that are filling the various needs of the free software community, but I will let them announce themselves.
OLinux: How is GNOME integrated with the rest of the Linux community -- other development projects, alliances, partnerships? Give us some names and the activities exchanged between those groups.
Miguel de Icaza: GNOME is part of the GNU project. We try to work with any other free software projects, as in the end our objective is for GNU to be a full solution to the user needs.
OLinux: You have started Helix Code, right? What is the focus of your business? How many CDs of Helix Code have been sold or downloaded from the site? Are you planning an IPO?
Miguel de Icaza: Helix Code is a company focused on making sure free software is viable in today's world. So we are developing free software productivity applications under the GPL, and we are also providing consulting services and support for other companies.
The Helix GNOME distribution is just a service to the GNOME community: we know that it is sometimes hard to keep up with the latest advances in the rapidly evolving GNOME project.
Helix GNOME is managed by Jacob Berkman and Joe Shaw. They are the magicians behind providing a real-time GNOME environment for people to use. Now users have a chance to run the latest version of GNOME without having to know any system administration or be experts.
The latest GNOME with the latest features, improvements, and enhancements is only a few clicks away from your desktop.
It is hard to tell how many people have installed Helix GNOME, as there are many mirrors of the master site.
We are planning on growing to meet the needs of the free software market in terms of technology and usability.
Best wishes, Miguel.
OLinux: All OLinux users thank you too.
Mosfet is a key developer in the KDE Project and carries a great responsibility as the world waits for the KDE2 and KOffice suite releases. Mosfet told OLinux the details of KDE2, its current development stage, and how "KDE2 intends to compete with Windows head-on in all features".
OLinux: Tell us about your career: college, jobs, personal life (age, birthplace).
Mosfet: Well, I'm 25 years old and was born in Chicago, IL, USA. When I was a child we moved to Austin, Texas, and I'm currently living in Indiana. As far as a personal life, that is mostly just something that exists in theory for me ;-)
I went to school at Purdue University in Indiana and started doing Unix admin professionally when I was 19. I started with Unix when I was around 15 years old, with a bootlegged copy of Xenix, because I wanted to do 32-bit graphics programming. Previously I was making DOS calls in assembler to the extended memory manager, throwing myself into protected mode to do calculations, then going back to real mode. Unix was pure joy compared to that :)
Once I got older my career has been pretty much swinging back and forth between work and education. For most of the time work involving Unix has focused on both administration of Oracle and either Digital Unix (now Tru64) or HP/UX database clusters and custom application development for database interface applications. Recently, with the advent of Linux and user interfaces such as KDE, I have been involved with that and am now paid by Linux Mandrake to work on KDE2 full time.
OLinux: What are your responsibilities at KDE? Do you have any other jobs?
Mosfet: KDE2 is my sole focus at this point. I am responsible for widget and window manager style engines (dynamic look and feels written in C++), widget and window manager theme code, the plugin mechanism for the window manager, a lot of the new panel's code is mine (although recently Matthias Elter is working a lot on this), some of the KDE graphical effects engine, and a new extensive image management system called "Pixie".
Of course, being free software people can pretty much choose what they want to work on and when. The best place to see what myself and others are currently working on is my KDE2 development webpage at http://www.mosfet.org.
OLinux: How is KDE organized? Try to give us an idea of how KDE works. How is the work coordinated and managed (servers, directories, contributions, staff payment)? How many people are involved? What are the main problems?
Mosfet: KDE core development is based on contributions from a large group of free software developers. KDE's core system gets several thousand commits (developers doing stuff and making improvements) a month - you can get exact numbers for a given month at http://lists.kde.org. This excludes the hundreds of applications that are not maintained in the KDE CVS but are part of the KDE project itself. As far as the exact number of individual authors, I don't know offhand, but there are a few hundred developers registered with our source management system (CVS).
If you write code and it rocks, it gets into KDE. Anyone can contribute to KDE, although each project has its own maintainer, and if you want to do extensive work it's best to contact that person first. Once that is done you can work either in our CVS or via patches.
There is a difference between software included in the KDE core packages (such as kdebase, kdegraphics, etc.) and software maintained by individual authors. The core packages are largely a collaborative effort and get the attention of a large group of developers. Individual packages are usually the efforts of individuals or small groups of people. As an application developer, the approach you take is largely a matter of style. Do you want a potentially huge group of people fixing things and adding features to your code, or do you want to maintain strict control over the development of your app? This largely determines whether you're going to be a core developer or maintain a separate app outside the core of KDE.
OLinux: Do any private companies support KDE? Is everyone a volunteer?
Mosfet: Many companies support KDE development. Most notably Linux Mandrake, Suse, Caldera, Corel, Red Hat, and Troll Tech all have developers dedicated to KDE - and that's just what comes to mind. There are also several non-Linux distribution companies I know of that are acquiring KDE developers for free software development.
The difference between KDE and competing projects is KDE developer funding seems to be spread over a wider group of Linux companies. You don't have one or two interests controlling an important group of KDE developers. You have a couple people working on KDE in many different companies and collaborating with each other. A vast number of different interests both by volunteer developers and those working on distributing free software are represented.
KDE also seems to be the choice being made by commercial application developers coming from Windows, such as Inprise/Corel. Many of these people can't imagine doing application development in a primarily C API as a step above what they had in Windows, even if there are bindings, etc... The KDE/Qt API is the only one which makes sense to these people. Combining the power of Linux/Unix systems and the powerful C++ API of KDE is a dream compared to what they had on other platforms and what they could get with other toolkits and bindings. This is extremely important considering Linux's growth. If Linux continues growing at its current rate, most of the developers will be coming from non-Unix platforms, where C++ application development has been the standard for compiled GUI applications for almost a decade.
Of course, despite all of the above, KDE core development is maintained and controlled by volunteers. People do it for fun, make applications because they want to, submit patches because they like to fix things, etc... If they are good coders and want a job developing free software, more likely than not they could get it with KDE.
OLinux: What are the main differences between KDE1 and the next release KDE2?
Mosfet: Pretty much everything ;-) The libraries have been rewritten to be more extensible, most of the UI is configurable with XML integrated into the core system, there is a new Internet-transparent I/O subsystem, a new browser, new HTML capabilities with support for things like CSS, bidirectional text, Unicode, and soon Netscape plugins, a new window manager, help system, configuration system, panel, a whole slew of new widgets and classes, widget styles and themes... The list goes on and on.
The main difference is that KDE2 is now heavily component-based, focusing on the browser. All of the KOffice applications (KWord, KPresenter, KIllustrator, KSpread, KImageShop, KChart, and KFormula), as well as many other KDE applications such as the PS/PDF viewer, MPEG and image viewers, and DVI viewers, are all components now - Internet-transparent and embeddable in the browser. You can even embed the terminal application in the browser and change directories using the arrow buttons ;-) Pretty cool. KDE easily boasts the most extensive and complete component model support for Unix desktops.
OLinux: Do you consider Corba technology an advance for KDE in terms of better functionality? Do you see a lot of programmers using it? Give us some advantages.
Mosfet: Well, actually we found it wasn't an advance for us ;-) The problem with Corba is the API is not ideal and it's very difficult for new programmers to learn. We rely on components more extensively than any other free desktop project has attempted thus far, and the requirement to learn Corba in order to do even trivial KDE development was a huge restriction. AFAIK Gnome got around this by both using components less and providing easier function specific bindings where non-Corba experts are likely to be doing development (such as control panel applets).
This did not seem reasonable or clean to us. Even though we had been using Corba for a long time (well before KDE2 development started), and had hundreds of thousands of lines of code based on it in both KOffice and KDE2, people started looking at other standard Unix technologies that accomplish the same thing. ORBit (the Corba Object Request Broker used by Gnome) was considered and was faster than the ORB we were using, but it still didn't solve the problems mentioned above - which are inherent in Corba.

We then came up with the current KParts technology we are using for components. It is all based on standard Unix libraries such as libICE, and allows people to learn how to do fully functional components in less than half an hour. Using KDE you get the most component features, such as browser embedding and Internet transparency, that are extremely fast and require the least amount of effort. No need to purchase 1,000+ page Corba tomes at your local bookstore; you can learn it over lunch :) Once this transition was made, the development of KDE2 increased significantly over what was occurring before (usually an increase of over a thousand commits a month now, compared to our Corba days). This shows that we made the right choice for developers.

As far as interoperability, under the hood all the technology we use is in C and accessible through that, XML bindings are available, and Corba middleware is in the works. AFAIK a Java interface is also being looked into. The interoperability argument for Corba is largely misleading anyway; you need to do custom programming to interface legacy code with the desktop's APIs no matter what mechanism you use - it doesn't just happen magically. Both Linux desktops introduce new component APIs you need to port to, but with KDE this is extremely easy to do without any prior experience.
You can learn more about KParts and check out the "Learn KParts in 1/2 hour" tutorial at http://developer.kde.org. You can read a small overview I wrote of why we chose the mechanism we did when the decision was made a few months ago at http://www.kde.org/technology.html.
OLinux: About Qt without X: do you think it will run on all Unix machines and influence some special feature of KDE?
Mosfet: This is an interesting new development. Troll Tech has written a version of Qt (the base toolkit used by KDE), that can run solely off the Linux framebuffer and doesn't require X11. Originally intended for embedded systems, combined with virtual windowed framebuffer windows it can potentially end up as a very low overhead KDE desktop framework. It already offers many advanced features directly influenced by direct framebuffer access such as anti-aliased (smoothed) fonts and alpha channel support.
As far as which variants of Unix it will run on, I currently know it supports Linux, although any Unix system with a framebuffer (such as Solaris) shouldn't be that difficult.
Nonetheless, KDE will continue to support X11. There is no reason not to; one of the reasons for using high-level toolkits like Qt is that you don't have to deal with lower-level details such as whether you're running on a framebuffer or an X server. Also, KDE/Qt is leading the way with this technology, and we are the first people to support it with a toolkit used by a desktop. Some users will want access to legacy and non-KDE apps and games, and X11 is essential as a common platform where you can run applications developed under many different toolkits. As for people who only use KDE applications, they may very well get a quite cool, low-overhead desktop...
OLinux: KDE1 used to be a heavy application. Now that KDE2 is adding all this new technology, do you think new and powerful hardware will be required for users to get good system performance?
Mosfet: Hopefully not ;-) The component model of KDE helps a lot here. When you start KDE1, a lot of things happen: the file manager and browser are loaded, etc... Now KDE is designed to start the absolute minimum until you actually use something - then dynamically load what you need. A lot still happens, but now it is mostly low-level stuff like initializing the client-server I/O system, not things like loading HTML widgets.
KDE2 is still alpha, and there are issues in the way of really giving users a gain here yet, but it's being worked on. KDE2 will certainly not require any more resources than KDE1, and hopefully it will require even less.
OLinux: What are the best features KDE will bring to users that Windows doesn't have? Can KDE already compete with Windows 2000 in terms of GUI?
Mosfet: Absolutely! KDE2 intends to compete with Windows head-on in all features - no excuses made. We've got the components, the transparent access to the web, the modern C++ API, developer support, and the applications needed to seriously contend for users' desktops. As far as the GUI, that is a matter I specifically deal with, and I believe ours is becoming far superior. Although as a fan of Mac and BeOS interfaces, I feel most UIs are superior... What will be the default look and feel of KDE2 is drastically improved from KDE1.
OLinux: When does KDE expect to release KOffice 1.0?
Mosfet: Alongside the release of KDE 2.0, which is now in a library freeze preparing for the second alpha release. I'm not sure if there will be a third, or if after that we will go straight to betas. KDE does have a long beta cycle, though; we will not release an official version until we are sure it works well for the majority of people. We feel that is our responsibility to the users of KDE, who have come to expect a stable system.
OLinux: How do you see the Digital Society and the future of the Internet five years from now? Say something briefly about ecommerce, wireless Internet and handheld appliances.
Mosfet: Eeek, a long stream of buzzwords! Hell, I don't know. We will all probably be out of shape and unable to tolerate sunlight because of too much time on the internet ;-) That's about all I know...
OLinux: Send a message to programmers in Brazil who work on Free Software/Open Source projects, and to OLinux users.
Mosfet: Brazil rocks! I lost my credit rating there last year...
[By the way, I really like the animated image of the turning gears in the bottom left of the KDE web site. -Ed.]
Linux in France: ideas and plans of the MandrakeSoft company and its commercial operation. Mandrake co-founder and Vice-President Gael Duval talks to OLinux about how, in less than two years, he managed to build an international, top-selling company (the world's number-two Linux distribution in boxes sold in 1999), and about Mandrake's plans for an IPO. Will you invest?
OLinux: Tell us something about your professional and personal background. Where did you graduate? Brief us on your Linux career.
Gael Duval: I'm 26 years old, so no need to tell you that I didn't do much before! I studied computer science for 5 years at university in France. In my last year I specialized in Networking & Electronic Documents. After having used UNIX a lot (Solaris/SunOS), I discovered Linux in 1995. My first Linux distribution took 50 diskettes!
OLinux: As founder of Mandrake, how did the idea of a distribution come to your mind? Were you inspired by other distributions' success?
Gael Duval: This is quite easy. In 1997 I needed a Linux distribution that would be easy to install and very easy to use. Red Hat was quite easy to install, but not easy to use. So I put KDE into Red Hat, added a few things to simplify the user's life, and released it. This wouldn't have been possible in the proprietary software world! Since then, Mandrake has evolved in its own way, and it's good that we can invent many new concepts.
OLinux: When was Mandrake officially created as a company?
Gael Duval: MandrakeSoft was founded at the end of 1998.
OLinux: What are Mandrake's mission and strategy?
Gael Duval: We want to provide Linux to the largest number of users, personal users as well as corporations. The strategy is quite clear: we want to improve as much as possible, release the best quality possible, and offer the largest possible variety of products on as many platforms as possible. That's why we will soon have an offer for enterprises, and why we don't limit ourselves to the x86 architecture. The strength of Linux is that it runs on x86, SPARC, Alpha, PPC, MIPS, 68k... That's not the case for any proprietary OS!
OLinux: Is Mandrake an international company? How many affiliates, distributors and resellers does Mandrake work with? At this moment, are international revenues already significant?
Gael Duval: MandrakeSoft has been an international company from the beginning, because before it was founded I already had many contacts all around the world: in Europe, America, Asia, Russia... We have two main offices: one in Europe (Paris), one in the USA (Los Angeles). We have many online resellers (about 50 for South and North America) and two major distributors: MacMillan Software Publishing (we have a 5-year agreement with them) and Kasper. Of course our international revenues are significant, especially from America.
OLinux: What is your position, and what are your responsibilities, at Mandrake today?
Gael Duval: Well... My official title is "Vice-President & Co-Founder - Open Source Development".
OLinux: How much has Mandrake grown since then? How much have revenues grown, and what is the projection for 2000? How many Mandrake boxes were sold last year, and how many will be sold in 2000? Where are Mandrake's headquarters located?
Gael Duval: We had fantastic growth! At the end of 1998 there were 3 people in MandrakeSoft. At the end of 1999 we were 40, and right now we are 70. The first Mandrake officially shipped by MandrakeSoft was 5.3; it sold around 3,000 copies. Then we sold more than 200,000 6.0 + 6.1 packs. Revenues have grown accordingly.
Gael Duval: MandrakeSoft headquarters are located in Paris, France.
OLinux: Can you give us an idea of how hard the work is running a Linux company?
Gael Duval: 16 hours a day, from Monday to Sunday. Several hundred emails land in my mailbox every day!
OLinux: How profitable is the Linux business in France, Germany and the rest of Europe?
Gael Duval: It's OK, but not very reactive. The big Linux hype is just starting here. No Linux company is on a public market in Europe yet.
OLinux: Give us a panorama of Linux's effective and potential growth in these locations.
Gael Duval: It's quite hard to say: we make 70% of our revenues in the USA. It seems that it's just beginning, in fact.
OLinux: Is Mandrake planning an IPO for this year?
Gael Duval: Yes.
OLinux: Do you have examples of companies that have deployed Linux on a large scale?
Gael Duval: Do you mean in Europe? Do you mean Mandrake Linux? If it's Mandrake Linux, we have official contacts with 180 companies that are already using Mandrake.
OLinux: Give us case studies of Mandrake usage inside important companies.
Gael Duval: There are several listed on our website. There is, for example, an aerospace manufacturing company with 400 employees which uses Mandrake for most of its intranet servers. Or this big hotel in Poland with 450 employees which uses Mandrake for its internal database servers...
But the biggest part of the Mandrake user base is individual users, because we don't have a real offer for enterprises yet (that will come in two months).
OLinux: How does Mandrake support Open Source/Free Software organizations around the world?
Gael Duval: Firstly, we give back all our code to the community: everything is published under the General Public License. Secondly, we often sponsor events (such as a big Gnome meeting recently), we give web server space on demand, and we pay some people to work on free software (for example, we pay a KDE developer and a KOffice developer), etc.
OLinux: Nowadays, companies such as Oracle are saying the future of business is ecommerce through the Internet. Do you see it this way? Is Linux Mandrake developing any products, or does it have any plans, for ecommerce? What are the main bets and guidelines?
Gael Duval: Ecommerce is indeed a big thing. We already provide many tools that can be used to build ecommerce services (Postgres, MySQL, Apache, PHP...). Furthermore, a complete solution for ecommerce is being considered.
OLinux: What hasn't Linux achieved yet, in your opinion?
Gael Duval: Wide recognition. This leads to a lack of end-user applications. It will change.
OLinux: What is the relation between Mandrake and Red Hat?
Gael Duval: Do you mean official contacts, or the relation between the two products?

OLinux: In the very beginning, Mandrake was based on Red Hat's work?

Gael Duval: Yes. Mandrake 5.1 was basically RH + updates + KDE + several improvements.

OLinux: Nowadays, does Red Hat use many of Mandrake's new features?

Gael Duval: RH has taken several ideas from Mandrake. The biggest one is certainly the remote update tool (which permits updating packages that have security holes, etc.), but they have rewritten it. However, in 6.2 they directly included one of our packages ("rpmlint", which can make some checks on RPM packages). Maybe one day they'll adopt our installation procedure, which is recognized as the best installation procedure in the Linux world.
OLinux: How do you see this crossing relation?
Gael Duval: Great. It's not competition it's co-petition! Moreover, some of our developers have personal contacts with RH and they work together on improving RPM for example. Greets, Gael
Many tutorials and introductions to bash talk about using aliases. Unfortunately, most of them don't cover functions. This is a real loss, because functions offer many capabilities that aliases don't.
Aliases are simple string substitutions. The shell looks at the first word of a command and compares it against its current list of aliases. Further, if the last character of an alias is a space, it looks at the next word as well. For example:
$ alias 1='echo '
$ alias 2='this is an alias'
$ 1 2
this is an alias
$
Aliases don't allow for control-flow, command line arguments, or additional trickery that makes the command line so useful. Additionally, the rules surrounding alias expansion are a bit tricky, enough so that the bash(1) manpage recommends "[t]o be safe, always put alias definitions on a separate line, and do not use alias in compound commands".
Functions are really scripts run in the current context of the shell. (This bit of techspeak means that a second shell is not forked to run the function; it is run within the current shell.) Functions really are full scripts in and of themselves, and allow all the flexibility and capability that entails.
You can create a function in a couple of different ways. You can enter it into a file and source the file with the '.' command (either from the command line or in your start-up scripts). You can also just enter the function at the command line. A function is only available in a session where it has been made available through one of these methods (or where it has been inherited from a parent shell).
To create a function from the command line you would do something like this:
$ gla() {
> ls -la | grep $1
> }
This is a pretty simple function, and it could be implemented as an alias as well. (There are reasons you might not want to do that; we'll get to those later.) As written, it does a long listing of the local directory and greps for any matches on the first argument. You could make it more interesting by punching it through awk to find any matching files that are larger than 1024 bytes. This would look like:
$ gla() {
> ls -la | grep $1 | awk ' { if ( $5 > 1024 ) print $0 } '
> }
You can't do this as an alias; you're no longer just replacing gla with 'ls -la | grep'. Since it's written as a function, there is no problem using $1 (referring to the first argument to gla) anywhere in the body of your commands.
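To see the difference concretely, here is a minimal runnable sketch (the function name `mid` is my own, not from the article): an alias can only substitute text at the front of a command, but a function can use $1 in the middle of a pipeline.

```shell
#!/bin/bash
# mid() uses $1 in the MIDDLE of a pipeline - something an alias,
# which is pure textual substitution at the start of a command, cannot do.
mid() {
    printf '%s\n' alpha beta gamma | grep "$1" | tr 'a-z' 'A-Z'
}

mid beta    # prints: BETA
```

Only the lines matching the first argument survive the grep stage, and the final stage transforms them, showing the argument landing exactly where the function body puts it.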
For a larger example (well, okay, it's a fair amount larger), suppose you are working on two projects with two different CVS repositories. You might want to write a function that sets appropriate CVSROOT and CVS_RSH variables, or clears any values from these variables if the argument 'unset' is given. It would also be nice if it would run 'cvs update' for you when given the argument 'update'. With aliases you could approximate this, but only by running multiple aliases from the command line. Using functions, you could create a text file containing the following: (text version)
setcvs() {
export done="no"
if [ "$1" = "unset" ]
# we want to clear all of the variables
then
echo -n "Clearing cvs related variables: "
export CVSROOT=""
export CVS_RSH=""
export done="yes"
echo "done"
fi
if ( pwd | grep projects/reporting > /dev/null && \
[ "$done" != "yes" ] )
# if we're in the reporting area, and we're not already done
then
echo -n "Setting up cvs for reporting project: "
export CVSROOT="issdata:/usr/local/cvs/"
export CVS_RSH="ssh"
export done="yes"
echo "done"
fi
if ( pwd | grep projects/nightly > /dev/null && \
[ "$done" != "yes" ] )
# if we're in the nightly area, and we're not already done
then
echo -n "Setting up cvs for nightly project: "
export CVSROOT="/home/cvs/"
export done="yes"
echo "done"
fi
if [ "$1" = "update" ]
# we want to update the current tree from the cvs server after
# setting up the right variables
then
if [ -z "$CVSROOT" ]
# if there is a zero length $CVSROOT (it has already been
# cleared or was never set) throw an error and do nothing
then
echo "no cvs variables set ... check your cwd and try again"
elif [ -n "$CVSROOT" ]
# if there is a $CVSROOT try and do the update
then
echo "updating local tree"
cvs -q update
echo "done"
fi
fi
}
Then you could enable the function and use it like this:
$ . ~/scripts/setcvs
$ cd
$ pwd
/home/a257455
$ setcvs unset
Clearing cvs related variables: done
$ echo $CVSROOT
$ echo $CVS_RSH
$ cd projects/reporting/htdocs/
$ setcvs
Setting up cvs for reporting project: done
$ echo $CVSROOT
issdata:/usr/local/cvs/
$ echo $CVS_RSH
ssh
$ cd ../../nightly/
$ setcvs
Setting up cvs for nightly project: done
$ setcvs update
Setting up cvs for nightly project: done
updating local tree
done
$ cd
$ setcvs unset
Clearing cvs related variables: done
$ setcvs update
no cvs variables set ... check your cwd and try again
$
Functions can do a lot more than aliases; the function above shows a little bit of flow control, some error handling, and the ability to use variables. Certainly it could be improved, but it makes the point. Another big win is that functions can be re-used in scripts, while aliases can't. For example, because the function above is saved in a file called '~/scripts/setcvs', you can write a script like:
#!/bin/bash
# a sample script
# first source the functions
. ~/scripts/setcvs
# now go to the project directories and update them from cvs
cd ~/projects/reporting/htdocs
setcvs update
cd -
cd ~/projects/nightly
setcvs update
# now go back to where you were and unset any cvs variables.
cd -
setcvs unset
Aliases are very useful little things, but I hope that after this introduction you find functions at least as interesting (and probably even more useful). A final caveat that applies to both aliases and functions: you should never replace a standard command with an alias or a function. It is too easy to really hurt yourself by trying to execute your alias somewhere it doesn't exist. Imagine the difference between:
$ alias rm='rm -i'
$ cd ~/scratch
$ rm * # here the rm alias catches you and interactively
# deletes the contents of your current directory
$ su -
# cd /tmp
# rm * # here the rm alias no longer exists, and you whack
# a bunch of stuff out of /tmp
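If you do want an interactive delete, a safer pattern is sketched below (the wrapper name `del` is my own, not from the article): either give the wrapper a distinct name, or use the `command` builtin, which bypasses functions and aliases to reach the real binary.

```shell
#!/bin/bash
# Option 1: a distinctly named wrapper. Typing 'del' out of habit on a
# machine where it doesn't exist is a harmless error, not a disaster.
del() {
    rm -i "$@"
}

# Option 2: if you must shadow the name, 'command rm' always invokes the
# real rm (never this function), so the wrapper cannot recurse on itself.
rm() {
    command rm -i "$@"
}
```

Either way, you retain the confirmation prompt without training your fingers to depend on an alias that root's shell may not have.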
Happy hacking!
-pate
This article is the current installment in an ongoing series of site reviews for the Linux community. Each month, I will highlight a Linux-related site and tell you all about it. The intent of these articles is to let you know about sites that you might not have been to before, but they will all have to do with some aspect of Linux. Now, on with the story...
To get your project out, and to get the feedback that is so essential for finding bugs and collecting feature suggestions, you first need to create a project website. It would be nice to build this site with PHP and MySQL so you can add news items through a web form rather than rewriting the page every time you want to post a little tidbit. You'd also like your project site to have a short and simple URL, rather than one stretching to Timbuktu and back, peppered with tildes. Your project site needs to link to a download location where users can get a copy of your code, and you'll need a place to put the app for users to download. Next, you need to create mailing lists for your users and, if you're getting some coding help, your co-developers. Some of those web forums would be nice too. Then you remember the old adage "release early and often", and you wonder how you can get your code out to the masses more quickly after you've made changes.
Whew! That's a pretty tall order for some of the smaller projects. How do all these developers find the time, servers and money to do all this? One answer that is becoming more popular is SourceForge.
The great unwashed masses need only a web browser and an internet connection to get to your project site at SourceForge. The portions that you make public in your project hosting are available to everyone. The people you designate as developers connect to your project account with ssh1 (note, ssh2 is not yet supported at the time this was written), so security is less of a problem.
"As open source developers ourselves, we have run into the kinds of obstacles that still plague many would-be developers. It was our intent to remove many of those obstacles and let developers focus on software development. (An odd concept, but easier to get used to than you'd think.) A suite of tools isn't enough, though. In the end, you need the hardware power for the whole setup."

The list of projects already hosted at SourceForge is impressive. The current categories include:
Last month, we took a look at some basics of creating a shell script, as well as a few of the underlying mechanisms that make it all work. This time around, we'll see how loops and conditional execution let us direct program flow in scripts, as well as looking at a few good shell-writing practices.
CONVENTIONS
The only thing to note in this article is the ellipses (...) - I use them to indicate that the code shown is only a fragment, and not an entire script all by itself. If it helps, think of each ellipsis as one or more lines of code that is not actually written out.
LOOPS AND CONDITIONAL EXECUTION
"FOR;DO;DONE"
Often, scripts are written to automate some repetitive task; as a random example, if you have to repeatedly edit a series of files in a specific directory, you might have a script that looks like this:
for n in ~/weekly/*.txt
do
ae $n
done
echo "Done."
or like this:
for n in ~/weekly/*.txt; do ae $n;
done; echo "Done."
The code in both does exactly the same thing - but the first version is much more readable, especially if you're building large scripts with several levels of nesting. As a good general practice in writing code, you should indent each level (the commands inside the loops); it makes troubleshooting and following your code much easier.
The above control structure is called a 'for' loop - it tests for items remaining in a list (i.e., 'are there any more files, beyond the ones we have already read, that fit the "~/weekly/*.txt" template?'). If the test result is true, it assigns the name of the current item in the list to the loop variable ("n" in this case) and executes the loop body (the part between "do" and "done"), then checks again. Whenever the list runs out, 'for' stops looping and passes control to the line following the 'done' keyword - in our example, the "echo" statement.
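The list doesn't have to come from a filename glob; any whitespace-separated word list works the same way. A minimal sketch (the word list here is made up for illustration):

```shell
#!/bin/bash
# 'for' assigns each word of the list to the loop variable in turn,
# runs the body, and falls through to the next line when the list is empty.
for color in red green blue
do
    echo "current item: $color"
done
echo "Done."
```

This prints one "current item:" line per word, then "Done." - exactly the flow described above, just with literal words instead of filenames.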
A little trick I'd like to mention here. If you want to make the "for" loop 'spin' a certain number of times, the shell syntax can be somewhat tiresome:
for i in 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15
do
echo $i
done
What a pain! If you wanted it to iterate, say, 250 times, you'd have to type all of that out! Fortunately, there's a 'shortcut' - the "seq" command, which prints a sequence of numbers from 1 to the given maximum, e.g.,
for i in $(seq 15)
do
echo $i
done
This is functionally the same as the previous script. "seq" is part
of the GNU "shellutils" package and is probably already installed on your
system. There's also the option of doing this sort of iteration by using
a "while" loop, but it's a bit more tricky.
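For the curious, here is one way that "more tricky" while-loop version might look - a minimal sketch, not from the original article, relying on bash's $((...)) arithmetic expansion:

```shell
#!/bin/bash
# Counting to 15 with a "while" loop: we have to create, test,
# and increment the counter ourselves instead of letting "seq" do it.
i=1
while [ $i -le 15 ]
do
    echo $i
    i=$((i + 1))    # arithmetic expansion handles the increment
done
```

The extra bookkeeping is exactly why "seq" is the more comfortable choice for simple counted loops.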
"WHILE;DO;DONE"
Often, we need a control mechanism that acts based on a specified condition rather than iterating through a list. The 'while' loop fills this requirement:
pppd call provider &
while [ -n "$(ping -c 1 192.168.0.1|grep 100%)" ]
do
echo "Connecting..."
done
echo "Connection established."
The general flow of this script is: we invoke "pppd", the PPP paenguin... I mean, daemon :), then keep looping until an actual connection is established (if you want to use this script, replace 192.168.0.1 with your ISP's IP address). Here are the details:
1) The "ping -c 1 xxx.xxx.xxx.xxx" command sends a single ping to the
supplied IP address; note that it has to be an IP address and not a URL
- "ping" will fail immediately due to lack of DNS otherwise. If there's
no response within 10 seconds, it will print something like
PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx): 56 data bytes
ping: sendto: Network is unreachable
ping: wrote xxx.xxx.xxx.xxx 64 chars, ret=-1

--- xxx.xxx.xxx.xxx ping statistics ---
1 packets transmitted, 0 packets received, 100% packet loss
2) The only line we're interested in is the one that gives us the packet loss percentage; with a single packet, it can only be 0% (i.e., a successful ping) or 100%. By piping the output of "ping" through the "grep 100%" command, we narrow it down to that line, if the loss is indeed 100%; a 0% loss will not produce any output. Note that the "100%" string isn't anything special: we could have used "ret=-1", "unreachable", or anything else that's unique to a failure response.
3) The square brackets that contain the statement are a synonym for the 'test' command, which returns '0' or '1' (true or false) based on the evaluation of whatever's inside the brackets. The '-n' operator returns 'true' if the length of a given string is greater than 0. Since the string is assumed to be contiguous (no spaces), and the line we're checking for is not, we need to surround the output in double quotes - this is a technique that you will use again and again in script writing. Do note that the square brackets require spaces around them - i.e., [-n $STRING] won't work; [ -n $STRING ] is correct. For more info on the operators used with 'test', type "help test"; a number of very useful ones are available.
4) As long as the above test returns "true" (i.e., as long as the "ping"
fails), the 'while' loop will continue to execute - by printing the
"Connecting..." string every ten seconds. As soon as a single ping is successful
(i.e., the test returns "false"), the 'while' loop will break and pass
control to the statement after "done".
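To experiment with 'test' yourself, a throwaway sketch like the following runs a few of the operators mentioned above side by side (the sample string is arbitrary):

```shell
#!/bin/bash
# Each bracketed expression is really the "test" command, returning
# 0 (true) or 1 (false); "&&" runs the echo only on success.
STRING="100% packet loss"
[ -n "$STRING" ] && echo "string is non-empty"
[ -z "" ] && echo "empty string detected"
[ 5 -gt 3 ] && echo "5 is greater than 3"
```

All three messages print; change the values and watch which assertions go quiet.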
"UNTIL;DO;DONE"
The 'until' loop is the reverse of the 'while' - it continues to loop
as long as the test is false, and fails when it becomes true. I've never
had the occasion to use it; the 'while' loop and the flexibility of the
available tests have sufficed for everything I've needed so far.
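Purely for illustration (this example is mine, not from the article above), here is the counting sketch rewritten with 'until' - the same shape as a 'while' loop, with the sense of the test flipped:

```shell
#!/bin/bash
# Loop *until* the counter exceeds 5 - i.e., while the test is false.
i=1
until [ $i -gt 5 ]
do
    echo $i
    i=$((i + 1))
done
```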
"IF;THEN;[ELSE];FI"
There are many times when we just need to check for the existence of a condition and branch the execution based on the result. For those times, we have the 'if' statement:
if [ "$BOSS" = "jerk" ]
then
    echo 'Take this job and shove it!'
else
    echo 'Stick around; the money is good.'
fi
...
<grin> I guess it's not quite that easy... but the logic makes sense. Anyway, if a variable called BOSS has been defined as "jerk" (C programmers take note: '=' and '==' are equivalent in a test statement - no assignment occurs), then the first 'echo' statement will be executed. In all other cases, the second 'echo' statement will run (if $BOSS="idiot", you'll still be working there. Sorry about that. :). Note that the 'else' statement is optional, as in this script fragment:
if [ -n "$ERROR" ]
then
    echo 'Detected an error; exiting.'
    exit
fi
...
This routine will obviously exit if the ERROR variable is anything other
than empty - but it will not affect the program flow otherwise.
"CASE;IN;;ESAC"
The remaining tool that we can use for conditional branching is basically a multiple 'if' statement, based on the evaluation of a test. If, for example, we know that the only possible outputs from an imaginary program called 'intel_cpu_test' are 4, 8, 16, 32, or 64, then we can write the following:
case $(intel_cpu_test) in
    4)  echo "You're running Linux on a calculator??";;
    8)  echo "That 8088 is past retirement age...";;
    16) echo "A 286 kinda guy, are you?";;
    32) echo "One of them new-fangled gadgets!";;
    64) echo "Oooh... serious CPU envy!";;
    *)  echo "What the heck are you running, anyway?";;
esac
(Before all you folks flood me with mail about running Linux on a 286 or an 8088... you can't run it on a calculator either. :)
Obviously, the "*" at the end is a catch-all: if someone at
the Intel Secret Lab runs this on their new CPU (code name "UltraSuperHyperWhizBang"),
we want the script to come back with a controlled response rather than
a failure. Note the double semicolons - they 'close' each of the "pattern/command"
sets and are (for some reason) a common error in "case/esac" constructs.
Pay extra attention to yours!
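If you'd like a version you can actually run (no imaginary 'intel_cpu_test' required), here's a minimal sketch of my own with the CPU width hardwired into a variable - note, again, the double semicolons:

```shell
#!/bin/bash
# A bare-bones, runnable "case" construct.
CPU_BITS=32
case $CPU_BITS in
    16) MSG="A 286 kinda guy, are you?";;
    32) MSG="One of them new-fangled gadgets!";;
    64) MSG="Oooh... serious CPU envy!";;
    *)  MSG="What the heck are you running, anyway?";;
esac
echo "$MSG"
```

Change CPU_BITS to something unlisted and the "*" catch-all fires, just as described above.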
BREAK and CONTINUE
These statements interrupt the program flow in specific ways. The "break", once executed, immediately exits the enclosing loop; the "continue" statement skips the current loop iteration. This is useful in a number of situations, particularly in long loops where the existence of a given condition makes all further tests unnecessary. Here's a long (but hopefully understandable) pseudo-example:
while [ hosting_party ]
do
    case $FOOD_STATUS in
        potato_chips_gone) replace_potato_chips;;
        peanuts_finished)  refill_peanut_bowl;;
        pretzels_gone)     open_new_pretzel_bag;;
        ...
        ...
    esac
    if [ police_on_scene ]
    then
        talk_to_nice_officers
        continue
    fi
    case $LIQUOR_STATUS in
        vodka_gone) open_new_vodka_bottle;;
        rum_gone)   open_new_rum_bottle;;
        ...
        ...
    esac
    case $ANALYZE_GUEST_BEHAVIOR in
        lampshade_on_head)   echo "He's been drinking";;
        talking_to_plants)   echo "She's been smoking";;
        talking_to_martians) echo "They're doing LSD";;
        levitating_objects)  echo "Who spiked my lemonade??";;
        ...
        ...
        ...
    esac
done
echo "Dude... what day is it?"
A couple of key points: note that in checking the status of various party supplies, you might be better off writing multiple "if" statements - both potato chips and pretzels may run out at the same time (i.e., they are not mutually exclusive). The way it is now, the chips have top priority; if two items do run out simultaneously, it will take two passes through the loop to replace them both.
We can keep checking the food status while trying to convince the cops that we're actually holding a stamp-collectors' meeting (in fact, maintaining the doughnut supply is a crucial factor at this point), but we'll skip right past the liquor status - as it was, we got Joe down off the chandelier just in time...
The "continue" statement skips the last part of the "while" loop as
long as the "police_on_scene" function returns 'true'; essentially, the
loop body is truncated at that point. Note that even though it is actually
inside the "if" construct, it affects the loop that surrounds it:
both "continue" and "break" apply only to loops, i.e., "for", "while",
and "until" constructs.
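Here's a compact, runnable sketch of both statements (my own, no party required): 3 gets skipped, and the loop ends for good at 5.

```shell
#!/bin/bash
# "continue" skips one iteration; "break" leaves the loop entirely.
for i in $(seq 10)
do
    if [ $i -eq 3 ]; then continue; fi   # skip just this pass
    if [ $i -eq 5 ]; then break; fi      # stop looping altogether
    echo $i
done
```

Only 1, 2, and 4 are printed; once "break" fires at 5, the numbers 6 through 10 are never reached.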
BACK TO THE FUTURE
Here is the script we created last month:
a=$(date +%T-%d_%m_%Y)
cp -i $1 ~/Backup/$1.$a
Interestingly enough, shortly after finishing last month's article,
I was cranking out a bit of C code on a machine that didn't have 'rcs'
(the GNU Revision Control System) installed - and this script came in very
handy as a 'micro-rcs'; I used it to take "snapshots" of the project status.
Simple, generalized scripts of this sort become very useful at odd times...
ERROR CHECKING
The above is a workable script - for you, or anyone who cares to read and understand it. Let's face it, though: what we want from a program or a script is to type the name and have it work, right? That, or tell us exactly why it didn't work. In this case, though, what we get is a somewhat cryptic message:
cp: missing destination file
Try `cp --help' for more information.
For everyone else, and for ourselves down the road when we forget exactly how to use this tremendously complex script with innumerable options :), we need to put in error checking - specifically, syntax/usage information. Let's see how what we've just learned might apply:
if [ -z "$1" ]
then
    clear
    echo "'bkup' - copies the specified file to the user's"
    echo "~/Backup directory after checking for name conflicts."
    echo
    echo "Usage: bkup filename"
    echo
    exit
fi

a=$(date +%T-%d_%m_%Y)
cp -i $1 ~/Backup/$1.$a
The '-z' operator of 'test' returns '0' (true) for a zero-length string; what we're testing for is 'bkup' being run without a filename. The very beginning is, in my opinion, the best place to put help/usage information in a script - if you forget what the options are, just run the script without any, and you'll get an instant 'refresher course' in using it. You don't even have to put in the original comments, now - note that we've basically incorporated our earlier comments into the usage info. It's still a good idea to put in comments at any non-obvious or tricky places in the script - that brilliant trick you've managed to pull off may cause you to cuss and scratch your head next year, if you don't...
Before we wrap up playing with this script, let's give it a few more capabilities. What if you wanted to be able to send different types of files into different directories? Let's give that a shot, using what we've learned:
if [ -z "$1" ]
then
    clear
    echo "'bkup' - copies the specified file to the user's ~/Backup"
    echo "directory tree after checking for name conflicts."
    echo
    echo "Usage: bkup filename [bkup_dir]"
    echo
    echo "bkup_dir  Optional subdirectory in '~/Backup' where the file"
    echo "          will be stored."
    echo
    exit
fi

if [ -n "$2" ]
then
    if [ -d ~/Backup/$2 ]
    then
        subdir=$2/
    else
        mkdir -p ~/Backup/$2
        subdir=$2/
    fi
fi

a=$(date +%T-%d_%m_%Y)
cp -i $1 ~/Backup/$subdir$1.$a
Here is the summary of changes:
1) The comment section of the help now reads "...directory tree" rather than just "directory", indicating the change we've made.
2) The "Usage:" line has been expanded to show the optional (as shown by the square brackets) argument; we've also added an explanation of how to use that argument, since it might not be obvious to someone else.
3) An added "if" construct that checks to see if $2 (a second argument to 'bkup') exists; if so, it checks for a directory with the given name under "~/Backup", and creates one if it does not exist (the "-d" tests if the file exists and is a directory).
4) The 'cp' command now has a 'subdir' variable tucked in between "Backup/" and "$1".
Now, you can type things like
bkup my_new_program.c c
bkup filter.awk awk
bkup filter.awk filters
bkup Letter_to_Mom.txt docs
etc., and sort everything into whatever categories you like. Plus, the old behavior of "bkup" is still available -
bkup file.xyz
will send a backup of "file.xyz" to the "~/Backup" directory itself;
useful for files that fall outside of your sorting criteria.
By the way: why are we appending a "/" to $2 in the "if" statement instead of right in the "cp" line? Well, if $2 doesn't exist, then we want 'bkup' to act as it did originally, i.e., send the file to the "Backup" directory. If we write something like
cp -i $1 ~/Backup/$subdir/$1.$a
(note the extra "/" between $subdir and $1), and $2 isn't specified, then $subdir becomes blank, and the line above becomes
cp -i $1 ~/Backup//$1.$a
- not a particularly desirable result, since we want to stick with standard shell syntactic practice wherever possible.
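As an aside - this is my own addition, not something the original script needs - bash's ${2:+...} parameter expansion can take care of that conditional slash in a single line (the directory-creation logic would still be needed separately):

```shell
#!/bin/bash
# ${2:+$2/} expands to "$2/" only when $2 is set and non-empty,
# and to nothing at all otherwise - no "if" required.
set -- file.xyz docs     # simulate running as: bkup file.xyz docs
subdir=${2:+$2/}         # "docs/" here; empty string if $2 were missing
echo "cp -i $1 ~/Backup/$subdir$1"
```

Drop the second argument from the "set --" line and $subdir quietly becomes empty, reproducing the original single-directory behavior.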
In fact, it's a really good idea to consider all the possibilities whenever you're building variables into a string; a classic mistake of that sort can be seen in the following script -
#!/bin/bash
# Written by Larry, Moe, and Shemp - the Deleshun PoWeR TeaM!!!
# Checked by Curly: "Why, soitainly it woiks! Nyuk-nyuk-nyuk!"
# All you've gotta do is enter the name of this file followed by
# whatever you want to delete - directories, dot files, multiple
# files, anything is OK!

rm -rf $1*
DO NOT USE THIS SCRIPT!
<Sigh> At least they commented it. :)
What happens if somebody does run "three_stooges", and doesn't enter a parameter? The active line in the script becomes
rm -rf *
Assuming that you're Joe User in your home directory, the result is pretty horrible - it'll wipe out all of your personal files. It becomes a catastrophe if you're the root user in the root directory - the entire system goes away!!
Viruses seem like such friendly, harmless things about now... <grin>
Be careful with your script writing. As you have just seen, you have the power to destroy your entire system in a blink.
Unix gives you just enough rope to
hang yourself -- and then a
couple more feet, just to be sure.
-- Eric Allman
The philosophy makes sense: unlimited power in the tools, restriction
by permissions - but it imposes a responsibility: you must take appropriate
care. As a corollary, whenever you're logged in as root, do not run any
shell scripts that are not provably harmless (note the Very Large assumptions
hanging off that phrase - "provably harmless"...)
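In that spirit, here's one defensive sketch - my own suggestion, not a fix the Stooges shipped: wrap the deletion in a function that refuses to run without an argument, and leave an "echo" in front of the dangerous command as a dry run until you trust it.

```shell
#!/bin/bash
# Refuse to delete anything when no argument is given, so that
# "$1*" can never collapse into a bare "*".
safer_delete () {
    if [ -z "$1" ]
    then
        echo "Usage: safer_delete name_prefix"
        return 1
    fi
    echo rm -rf "$1"*    # "echo" makes this a dry run; drop it only when sure
}

safer_delete old_drafts    # prints the rm command it *would* have run
```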
WRAPPING IT UP
Loops and conditional execution are a very important part of most scripts. As we analyze other shell scripts in future articles, you'll see some of the myriad ways in which they can be used - a script of even average complexity cannot exist without them.
Next month, we'll take a look at some tools that are commonly used in
shell scripts - tools that may be very familiar to you as command-line
utilities - and explore how they may be connected together to produce desired
results. We'll also dissect a couple of scripts - mine, if no one else
is brave enough to send in the results of their keyboard concoctions. (Be
Afraid. Be Very Afraid.) :)
I welcome all comments and corrections in regard to this series of articles, as well as any interesting scripts that you may send in. All flames will be sent to /dev/null (Oh no, it's full...)
Until next month -
Happy Linuxing!
``What's this script do?
'unzip; touch; finger; mount; gasp; yes; umount; sleep'
Hint for the answer: not everything
is computer-oriented. Sometimes you're in a sleeping bag, camping out with
your girlfriend.''
-- Frans van der Zande
REFERENCES
The "man" pages for 'bash', 'seq', 'ping', 'grep'
The "help" command for 'for', 'while', 'until', 'if', 'case', 'test', 'break', 'continue'
"Introduction to Shell Scripting - The Basics" by Ben Okopnik, LG #53
These xmodmap and kimap solutions will work for setting up any international keyboard under Linux (Debian, Red Hat, Mandrake, Corel Linux), FreeBSD, OpenBSD, NetBSD and possibly every Unix that uses XFree86 and KDE. The advantage of this approach is that it is not architecture-specific and will work on SPARC, MIPS and all other systems. I don't want to say that other packages aren't architecture-independent, but I don't like writing garbage in bash_profile and XF86Config, or possibly somewhere else. This was written by Juraj Sipos (c) on 4/22/2000, xvudpapc@savba.sk
INTRODUCTION
Imagine you use Linux or a BSD OS and want to write a business letter to a person who has a foreign name with a slash or a diaeresis. Danish uses signs like ø, and Portuguese uses signs like ñ. With this information you can make your own international keyboard layout without installing any additional packages. The following information will help you set up German, Spanish, Italian, Slovak, Czech, Polish, Slovenian, Croatian, Danish, Dutch, French, Finnish, Norwegian, Estonian, Latvian, Swedish and other keyboards without additionally installing national packages and without writing garbage to the bash_profile and XF86Config files. You can also look at my home page at http://www.home.sk/www/man/bsd1.htm to see pictures of various keyboards. In case you want to install Greek, Hebrew or Russian, follow my instructions and apply the changes pertinent to those languages (e.g., to install Greek fonts, etc.).
The biggest problem with kikbd, the international keyboard tool in KDE under X, is that it doesn't work in XFree86 that easily (you have to install some national packages and write garbage with complicated syntax into the above-mentioned files). A user expects a simple way to configure his or her keyboard for international settings: start KDE, change the international keyboard settings, and immediately write in the chosen language (this works for German and other languages, but with Eastern European keyboards some letters don't function). According to the KDE documentation it should work, but it doesn't. After exploring many KDE resources on the net, I didn't find a solution (except for the one that forces you to install some national package). I know that some locale settings should be included in my bash_profile or csh login scripts, but after applying those settings I still couldn't change (and install) keyboards in FreeBSD, and it felt like wandering through an even darker forest than the information I already had regarding localization of KDE and X under XFree86.
Here are some solutions for installing international keyboard layouts. The following information will help you set up any European keyboard layout. The solution works for XFree86 in FreeBSD 3.1 RELEASE (the .Xmodmap solution), and in Corel Linux, Debian Linux, Red Hat, and FreeBSD 3.3 RELEASE and 4.0 RELEASE (the KDE *.kimap solution); I tested it on those systems. Note: the .Xmodmap solution works well with other window managers. Some Unixes override the .Xmodmap settings when it is used with KDE. If .Xmodmap doesn't work, the change must be made in the KDE kimap files in the .../kikbd directory.
If the .Xmodmap solution doesn't work in KDE, copy skz.kimap (at the end of this article) to /usr/local/share/apps/kikbd, which is your KDE keyboard directory. The problem with KDE is that after installing another keyboard, you have no chance to use it, as the KDE documentation doesn't clearly state how to define your locale settings in bash_profile. After I installed the Slovak keyboard in KDE, I couldn't write in Slovak or Czech, so I made a few changes to the skz.kimap file, which are explained later in this file. After applying these changes, no other changes are necessary - you don't need to write any special commands in your bash_profile or XF86Config. BUT WHEN YOU INSTALL ANOTHER KEYBOARD in START/SETTINGS/INPUT DEVICES/INTERNATIONAL KEYBOARDS from your KDE menu, CHECK AUTOSTART. Then everything will work fine. The difference between skz.kimap and sky.kimap (and csz.kimap and csy.kimap) is that y,Y and z,Z are swapped: with skz.kimap or csz.kimap you will have z,Z instead of y,Y, but with sky.kimap or csy.kimap, y,Y doesn't change its position from the IBM English keyboard layout.
How it all works:
a) Copy the "Compose" file from /usr/X11R6/lib/X11/locale/iso8859-2 to the
   /usr/X11R6/lib/X11/locale/iso8859-1 directory (yes, iso8859-1, not
   iso8859-2). Back up the original "Compose" file if you want
   (alternatively, copy another iso885*** Compose file to the iso8859-1
   directory).
b) Put the included .Xmodmap file in your root directory (Slovak language,
   or make your own .Xmodmap file) (or possibly put your own *.kimap file
   in the kikbd directory if Xmodmap will not work).
c) Install ISO8859-2 fonts (or other pertinent fonts).
d) Disable every uncommented "Scroll lock" line in your XF86Config, because
   our .Xmodmap uses Scroll Lock to switch between keyboards.
e) Put the appropriate fontpath for your newly installed fonts in your
   XF86Config file, if necessary.
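For concreteness, steps (a) and (b) might look like this at the shell. This is a sketch only: the paths are the stock XFree86 locations named above, the Compose copies need root privileges, and I'm reading "your root directory" as your home directory; the download path for .Xmodmap is a placeholder.

```shell
# Step (a): back up, then replace, the iso8859-1 Compose file
cd /usr/X11R6/lib/X11/locale
cp iso8859-1/Compose iso8859-1/Compose.orig    # optional backup
cp iso8859-2/Compose iso8859-1/Compose

# Step (b): the .Xmodmap from this article goes in your home directory
cp /path/to/downloaded/.Xmodmap ~/.Xmodmap
```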
The .Xmodmap solution may be applied to all X keyboards except Hebrew, I suppose (I'm joking). The .Xmodmap file overrides all the keyboard layout settings defined in /usr/X11R6/lib/X11/xkb/symbols/, which contains the symbols for many international keyboards. The .Xmodmap solution will give you an enhanced Slovak typewriter keyboard layout.
First, I must say that in my solution, a different mapping is used in the .Xmodmap file (and kimap file) for some ISO definitions. This means that the ISO definitions will either give you what they say they are (aacute [á], eacute [é], etc.), or they will not (putting "threequarters" in your .Xmodmap file will not give you "3/4" but "z" with a caron above it). For example, "mu" will give lcaron, "oslash" rcaron, etc. (Obviously, in the other case you'd need to install some national packages in order to use the "lcaron" definition instead of "mu".) Normally, you cannot put "lcaron" in the .Xmodmap file, because it will not give you lcaron; you must write "mu" instead, or "guillemotright" for tcaron. I also tried hexadecimal numbers, and they work. However, other key definitions - for example, adieresis (a with two dots above it), uacute (u with an acute accent above it), as well as dead_diaeresis - do not require a substitution of other definitions and work pretty well, as they're defined everywhere (a dead key is a key you press and hold with nothing happening, but after pressing another key you get a special letter). The original "Compose" file in the .../iso8859-1 directory can be fully utilized for Slovak or Czech keyboard layouts (also Polish, Hungarian, Slovenian, Croatian), but there is only one problem with the Slovak or Czech keyboard layout (and other languages too) - dead_caron doesn't work. That's why you have to copy the "Compose" file from the iso8859-2 directory to the iso8859-1 directory, or alternatively, you can edit the "Compose" file in the iso8859-1 directory and copy all the references to "dead_caron" from iso8859-2/Compose into iso8859-1/Compose.
You can leave the Keyboard section in your XF86Config without much change. Put (if it's not already there) the following in the "Keyboard" section:
Section "Keyboard"
    Protocol    "Standard"
    XkbRules    "xfree86"
    XkbModel    "pc101"
    XkbLayout   "us"
Some X window managers and/or environments override .Xmodmap settings, so if you use KDE and .Xmodmap doesn't work, use a kikbd keymap instead of .Xmodmap (a sample kikbd kimap for the Slovak language is included at the bottom of this file). With .Xmodmap, you switch between the Slovak/Czech/English keyboard layouts with Scroll Lock. You may use these languages only with applications that have access to your ISO-8859-2 fonts (or other fonts); this may not work with StarOffice or with other applications that have their own built-in fonts. StarOffice has its own fonts directories - afm fonts in ../xp3/fontmetrics/afm, and ps fonts in ../xp3/pssoftfonts - so you must add the ISO8859-2 fonts directory to these directories (to tell StarOffice to use those fonts too), then edit the fonts.dir file and add the symlinked fonts there. I can easily use any language in StarOffice.
Important note: If you want to exchange documents between StarOffice or WordPerfect and MS Word, you must include the information about windows 1250 encoding to the file you write (win1250 is similar to iso8859-2, but it's a little bit different). There's a solution: use a converter from iso8859-2 to win1250 (you can find one at my home page at http://www.home.sk/www/man/bsd1.htm).
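If you don't want to download a converter, 'iconv' (shipped with glibc and most modern Unixes) can also perform the ISO-8859-2 to windows-1250 conversion - a hedged sketch of my own, not the author's converter, with made-up filenames:

```shell
#!/bin/sh
# Create a sample (ASCII-only) letter, then convert its encoding
# from ISO-8859-2 to the windows-1250 code page (CP1250).
printf 'Pozdrav z Bratislavy\n' > letter.txt
iconv -f ISO-8859-2 -t CP1250 letter.txt > letter-win1250.txt
cat letter-win1250.txt
```

ASCII characters are identical in both encodings; the accented Slovak letters are where the two byte layouts actually differ.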
If you want to edit and make your own .Xmodmap keyboard layout definitions, I'll explain one line of the .Xmodmap file to make clear what you should do. This explanation can be used for all keycodes.
For example, the line:
keycode 0x11 = 8 asterisk aacute 8
(note: keycode 0x11 is derived from the xkeycaps utility)
says that the first pair, the default one (number "8" and "asterisk"), will display the number "8" when you press keycode 0x11 ("8"), and will display an asterisk when a "shift" key is pressed. After pressing Scroll Lock, another definition takes over: ISO_NEXT_GROUP, which means that when you press the default "8" key, no "8" will be displayed, but aacute ("á"); when you press the "shift" key, the number "8" will be displayed. So if you change "aacute" and "8", anything you put in their place will be displayed instead, for example:
keycode 0x11 = 8 asterisk semicolon colon
will give you "semicolon" and "colon" in your 0x11 keycode after pressing the scroll lock. If you delete the ISO_NEXT_GROUP (the next pair of definitions on the right), you will have only one group of keyboard definitions ("8" and "asterisk"). Be careful when editing the .Xmodmap file. You mustn't delete definitions that enable utilization of the scroll lock unless you know what you are doing. These are the lines such as:
keycode 0x4e = ISO_Next_Group
add mod5 = ISO_Next_Group
etc. You must also keep in mind that Unixes are case sensitive.
If you want to find out more about keycodes, install a package "xkeycaps".
________________cut_here__________________________________________________
! This is an `xmodmap' input file for PC 101 key #2 (FreeBSD/XFree86; US)
! keyboards created by XKeyCaps, modified by Juraj Sipos on 8/17/1999.
! XKeyCaps 2.38 is Copyright (c) 1997 Jamie Zawinski <jwz@netscape.com>.
! http://people.netscape.com/jwz/xkeycaps/
! This is an .Xmodmap solution for
! Slovak keyboard. You must have ISO-8859-2 fonts installed with a
! pointer in /etc/XF86Config
! NOTE: "!" is a comment. Some information follows but I deleted
! it as it is commented and not important.
! If you want to know what I deleted, start the xkeycaps utility and
! generate your own Xmodmap file.
! The "0 Ins" key generates KP_Insert and KP_0
! The ". Del" key generates KP_Delete and KP_Decimal
!#define XK_dead_semivoiced_sound 0xFE5F
!dead_iota, dead_voiced_sound, dead_belowdot, dead_tilde, dead_macron
keycode 0x09 = Escape
keycode 0x43 = F1 F11 F1 Multi_key
keycode 0x44 = F2 F12 F2 F12
keycode 0x45 = F3 F13 F3 F13 idiaeresis
keycode 0x46 = F4 F14 F4 F14 mu yen
keycode 0x47 = F5 F15 F5 F15 guillemotright guillemotleft
keycode 0x48 = F6 F16 F6 F16 ograve
keycode 0x49 = F7 F17 F7 dead_abovedot oacute
keycode 0x4A = F8 F18 F8 dead_breve acute
keycode 0x4B = F9 F19 F9 dead_cedilla ugrave
keycode 0x4C = F10 F20 F10 dead_ogonek
keycode 0x5F = F11 F21 dead_acute dead_caron
keycode 0x60 = F12 F22 dead_abovering dead_diaeresis
!keycode 0x6F = Print Execute dead_doubleacute dead_circumflex
keycode 0x6F = Print Execute dead_iota
keycode 0x4E = ISO_Next_Group
keycode 0x6E = Pause
keycode 0x31 = grave asciitilde semicolon dead_diaeresis
keycode 0x0A = 1 exclam plus 1
keycode 0x0B = 2 at mu 2
keycode 0x0C = 3 numbersign onesuperior 3
keycode 0x0D = 4 dollar egrave 4
keycode 0x0E = 5 percent 0x0bb 5
keycode 0x0F = 6 asciicircum threequarters 6
keycode 0x10 = 7 ampersand yacute 7
keycode 0x11 = 8 asterisk aacute 8
keycode 0x12 = 9 parenleft iacute 9
keycode 0x13 = 0 parenright eacute 0
keycode 0x14 = minus underscore equal percent
keycode 0x15 = equal plus dead_acute dead_caron
keycode 0x33 = backslash bar ograve parenright
keycode 0x16 = BackSpace
keycode 0x6A = Insert
keycode 0x61 = Home
keycode 0x63 = Prior
keycode 0x4D = Num_Lock Pointer_EnableKeys
keycode 0x70 = KP_Divide slash
keycode 0x3F = KP_Multiply asterisk
keycode 0x52 = KP_Subtract minus
keycode 0x17 = Tab ISO_Left_Tab
keycode 0x18 = q Q
keycode 0x19 = w W
keycode 0x1A = e E
keycode 0x1B = r R
keycode 0x1C = t T
keycode 0x1D = y Y z Z
keycode 0x1E = u U
keycode 0x1F = i I
keycode 0x20 = o O
keycode 0x21 = p P
keycode 0x22 = bracketleft braceleft acute slash
keycode 0x23 = bracketright braceright diaeresis parenleft
keycode 0x24 = Return
keycode 0x6B = Delete
keycode 0x67 = End
keycode 0x69 = Next
keycode 0x4F = KP_Home 7 KP_Home
keycode 0x50 = KP_Up 8
keycode 0x51 = KP_Prior 9
keycode 0x56 = KP_Add plus
keycode 0x42 = Caps_Lock
keycode 0x26 = a A
keycode 0x27 = s S
keycode 0x28 = d D
keycode 0x29 = f F
keycode 0x2A = g G
keycode 0x2B = h H
keycode 0x2C = j J
keycode 0x2D = k K
keycode 0x2E = l L
keycode 0x2F = semicolon colon ocircumflex quotedbl
keycode 0x30 = apostrophe quotedbl section exclam
keycode 0x53 = KP_Left 4
keycode 0x54 = KP_Begin 5
keycode 0x55 = KP_Right 6
keycode 0x32 = Shift_L ISO_Next_Group
keycode 0x34 = z Z y Y
keycode 0x35 = x X
keycode 0x36 = c C
keycode 0x37 = v V
keycode 0x38 = b B
keycode 0x39 = n N
keycode 0x3A = m M
keycode 0x3B = comma less comma question
keycode 0x3C = period greater period colon
keycode 0x3D = slash question minus underscore
keycode 0x3E = Shift_R
keycode 0x62 = Up
keycode 0x57 = KP_End 1
keycode 0x58 = KP_Down 2
keycode 0x59 = KP_Next 3
keycode 0x6C = KP_Enter Return
keycode 0x25 = Control_L ISO_Next_Group
!keycode 0x40 = Alt_L Meta_L
keycode 0x40 = Meta_L Alt_L
keycode 0x41 = space
keycode 0x71 = Alt_R Meta_R
keycode 0x6D = Control_R
keycode 0x64 = Left
keycode 0x68 = Down
keycode 0x66 = Right
keycode 0x5A = KP_Insert 0
keycode 0x5B = KP_Delete period
!keysym Alt_L = Meta_L
!keysym F12 = Multi_key
clear Shift
!clear Lock
clear Control
clear Mod1
clear Mod2
clear Mod3
clear Mod4
clear Mod5
add Shift = Shift_L Shift_R
add Control = Control_L Control_R
!add Mod1 = Alt_L Alt_R
add Mod1 = Meta_L Alt_R
add Mod2 = Num_Lock
add Mod5 = ISO_Next_Group
!add Mod1 =
!add Mod2 = Alt_R Alt_L Mode_switch
keycode 0x73 = ISO_Next_Group
keycode 0x74 = dead_acute dead_diaeresis
keycode 0x75 = dead_caron dead_abovering
_____________cut_here__________________________________________________________
# KDE skz.kimap Config File, modified by Juraj Sipos
# name this file skz.kimap and copy it to the KDE .../kikbd directory
[International Keyboard]
Label=Skz
Locale=sk
# *** here was some info I deleted.
[KeyboardMap]
CapsSymbols=q,w,e,r,t,y,u,i,o,p,a,s,d,f,g,h,j,k,l,z,x,c,v,b,n,m
keysym0=1,plus,1,exclam,,
keysym1=2,mu,2,at,,
keysym2=3,onesuperior,3,numbersign,,
keysym3=4,egrave,4,dollar,,
keysym4=5,0x0bb,5,percent,,
keysym5=6,threequarters,6,asciicircum,,
keysym6=7,yacute,7,ampersand,,
keysym7=8,aacute,8,asterisk,,
keysym8=9,iacute,9,parenleft,,
keysym9=0,eacute,0,parenright,,
keysym10=minus,equal,percent,minus,underscore,
keysym11=grave,dead_diaeresis,dead_circumflex,grave,asciitilde,
keysym12=equal,dead_acute,dead_caron,equal,plus,
keysym13=bracketleft,uacute,slash,bracketleft,braceleft,
keysym14=bracketright,adiaeresis,parenleft,bracketright,braceright,
keysym15=semicolon,ocircumflex,quotedbl,semicolon,colon,
keysym21=y,z,Z,,,
keysym22=z,y,Y,,,
# I changed some keysyms above (as "mu" instead of "lcaron") and added the following lines
keycode43=51,ograve,parenright,backslash,bar,
keycode40=48,section,exclam,apostrophe,quotedbl,
keycode51=59,comma,question,less,comma,
keycode52=60,period,colon,period,greater,
keycode53=61,minus,underscore,slash,question,
____cut_here___________________________________________________________________________
(The numbers of keycodes are derived from the "xkeycaps" utility)
The purpose of the following info is to help you build any .Xmodmap keyboard layout with iso8859-2 or other character sets. If you're going to use languages other than the Central European ones, find the pertinent table for your ISO*** character set on the Internet. The gdkkeysyms.h file is in (Red Hat) /usr/include/gdk/gdkkeysyms.h, and it contains all the special names we're using here (it also contains the names of Greek characters).
UNIX ISO-8859-2 (ISO Latin2) character set
octal  hex  (you can use this for other languages too)
----------------------------------------------------------------------
First, try to see if the definitions will give you (after installing the pertinent fonts and keyboard in X) what they say they are. If they do not (some keycodes will be non-functional), then you must make a substitution. Definitions which will not give you what they say they are can be traced by their visual shape in Western Latin 1 encoding. For example, if you load a Slovak website, do not view it with the ISO8859-2 character set encoding, but with the Western ISO8859-1 encoding. You will then see bad fonts with letters like ¾ and so on, and you will see what you must substitute. If you don't know what "¾" is called in ISO terminology, find and download an appropriate character set table for ISO-8859-1; it must be somewhere on the net. The symbols on the right (for example, mu [micro], which is µ, will give you lcaron instead of µ) will give you what's on their left. NOTE: vowel *acute signs require no substitution, therefore I omitted iacute (í), aacute (á), etc.
(Explanation: the name on the right, when written into an Xmodmap or kimap file, prints the character given on its left. For example, writing the name of £ ("pound" in our case) will give you Lslash. Lslash can also be obtained with a dead_caron: you press dead_caron and then L.)

0243   0xa3  Lslash         £      pound
0245   0xa5  Lcaron         ¥      yen
0251   0xa9  Scaron         ©      copyright
0253   0xab  Tcaron         «      guillemotleft
0256   0xae  Zcaron         ®      registered
0265   0xb5  lcaron         µ      mu
0271   0xb9  scaron         ¹      onesuperior
0273   0xbb  tcaron         »      guillemotright
0276   0xbe  zcaron         ¾      threequarters
0306   0xc6  Cacute         Æ      find out yourself
0312   0xca  Eogonek        Ê      find out yourself
0313   0xcb  Edieresis      Ë      Edieresis (no substitution needed)
0314   0xcc  Ecaron         Ì      find out yourself
0317   0xcf  Dcaron         Ï      find out yourself
0321   0xd1  Nacute         Ñ      Ntilde
0322   0xd2  Ncaron         Ò      Ograve
0324   0xd4  Ocircumflex    Ô      Ocircumflex (no substitution needed)
0325   0xd5  Ohungarumlaut  Õ      find out yourself
0330   0xd8  Rcaron         Ø      find out yourself
0331   0xd9  Uring          Ù      find out yourself
0333   0xdb  Uhungarumlaut  Û
0336   0xde  Tcedilla       Þ
0343   0xe3  abreve         ã
0345   0xe5  lacute         å
0346   0xe6  cacute         æ
0350   0xe8  ccaron         è      egrave
0352   0xea  eogonek        ê
0354   0xec  ecaron         ì
0357   0xef  dcaron         ï
0361   0xf1  nacute         ñ      ntilde
0362   0xf2  ncaron         ò
0365   0xf5  ohungarumlaut  õ
0370   0xf8  rcaron         ø
0371   0xf9  uring          ù
0373   0xfb  uhungarumlaut  û
0376   0xfe  tcedilla       þ
0377   0xff  dotaccent      ÿ
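The byte-reinterpretation trick behind this table can be seen from the command line with iconv, assuming a UTF-8 terminal: the same byte 0xBE (octal 0276) is zcaron in ISO-8859-2 but threequarters in ISO-8859-1.

```shell
# One byte, 0xBE, decoded under the two encodings:
printf '\276' | iconv -f ISO-8859-2 -t UTF-8   # prints: ž  (zcaron)
printf '\276' | iconv -f ISO-8859-1 -t UTF-8   # prints: ¾  (threequarters)
```

This is exactly why a Latin-2 page viewed with the Latin-1 encoding shows you which name to substitute.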
You may experiment to find out which definitions give you which characters; it shouldn't be difficult. This is just a hint on how to start. I didn't go on to investigate further definitions, because I have my Slovak and Czech keyboards and am not, for now, interested in using other keyboards. Have a look at my home page and build your own keyboard.
Enjoy.
Juraj Sips