Table of Contents:
TWDT 1 (gzipped text file) and TWDT 2 (HTML file) contain the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
This page maintained by the Editor of Linux Gazette, gazette@ssc.com
Copyright © 1996-2000 Specialized Systems Consultants, Inc.
The Mailbag! Write the Gazette at gazette@ssc.com
Contents:
Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue in the Tips column.
This section was edited by Michael Williams <iamalsogod@hotmail.com>, a.k.a. "Alex".
Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.
Wed, 31 May 2000 10:48:35 +0100
From: "Anne Parker" <alp30@admin.cam.ac.uk>
Subject: kppp playing up
I'm running Red Hat 6.0, with KDE as my user desktop and fvwm as my root desktop.

I have set up PPP and can activate it successfully through the Network Configurator when logged in as root. When I go to my user desktop, I can dial up my ISP (ClaraNET) and collect email, surf, etc. - provided I have already started the relevant apps. Once I have logged on, I can't start new apps, because kppp has (I think) started playing with resolv.conf. If I go to a terminal and log on as a different user, it says that my hostname is, e.g., du-208.clara.co.uk, which presumably comes from my dynamic IP address. It doesn't even reset it after I've logged off - I have to su to root and type "hostname localhost" before KDE will talk to me again. I know I've successfully set up kppp in the past, but the various examples I've seen in books tend to assume that your machine has its own name and IP address, so I may have chosen some incorrect settings in kppp this time.

Can anyone help with the correct kppp settings for a plain standalone box (localhost, 127.0.0.1) dialing on a regular phone connection to an ISP that assigns a dynamic IP address?
Thanks
Anne
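One likely cause, offered as a guess: kppp of that era has an "Auto-configure hostname from this IP" option on the account's IP settings tab, and when it is checked, kppp renames the machine after the ISP's reverse-DNS name - which matches these symptoms exactly. With it unchecked, a standalone box needs only the stock loopback setup:

```
# /etc/hosts - all a standalone dial-up box needs
127.0.0.1   localhost localhost.localdomain
```

Name servers for the link are set on kppp's DNS tab and written to /etc/resolv.conf for the duration of the call; the machine's own hostname never needs to change.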
Thu, 1 Jun 2000 09:15:38 +0530
From: "karthik subramanian" <karthik_subramanian@grabmail.com>
Subject: HELP!!! Keyboard Problems
Hi,
I'm running Red Hat 6.2 on a Pentium MMX (233 MHz). I get crazy problems with my keyboard: after I work with my computer for about 20 minutes or so, I start missing keystrokes and sometimes keystrokes are duplicated. Then Linux throws this error at me:

"Keyboard: too many NACKs -- noisy keyboard cable?"

If I'm running X (I use KDE) and I exit, I see this message too:

"QGDict: Look: Attempt to insert null item"

It's so bad that I can hardly get any work done on my PC. The typematic rate setting option in the BIOS is disabled - enabling it and fiddling with the rates doesn't help either. I don't know what's happening; could somebody please help?

Thanks,
Karthik
Fri, 2 Jun 2000 10:01:38 +0530
From: "Karthik" <kartjeevs@yahoo.com>
Subject: Startx Blues
I am a Linux newbie. I have installed it on a Pentium II 233 MHz machine. I have an S3 Trio V2 DX/GX video card and a Samsung Samtron 4Bni (14") monitor. I can boot up and log in as root, but when I issue the command startx to get into X, my screen flashes and I get a blank screen. Can anyone please help me with this problem?
Thanks.
[Have you tried running 'Xconfigurator' (remember, it's case sensitive)? Did you set up your graphics card properly during setup? It sounds to me as though you made an error during the X setup section of your installation. -Alex]
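For anyone stuck at the same point, a rough recovery session might look like this (Xconfigurator is Red Hat-specific, and the log path is just a convenient choice):

```shell
# run as root: rewrite /etc/X11/XF86Config for the S3 Trio card/monitor
Xconfigurator

# if startx still blanks the screen, capture the server's error output
startx 2> /tmp/startx.err
cat /tmp/startx.err      # look for "no screens found" or mode-line errors
```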
Fri, 23 Jun 2000 06:25:22 GMT+08:00
From: "Michael Smith" <mwdsmith@singnet.com.sg>
Subject: RH Upgrade Problems
Dear Sir/Madam,
I am having a problem with my computer...
I did an upgrade installation of Red Hat 6.2 on my machine. Now, after booting and loading Linux, it reaches the text login prompt, then supposedly starts X11, but X11 never appears: my screen is blank with some fuzzy, jagged white lines flashing across it every few seconds. Ctrl+Alt+Backspace and Ctrl+Alt+F1-6 don't seem to work as they normally do. Before I upgraded, I was happily running Mandrake 6.0 (which is a beefed-up version of Red Hat, as I understand it) on my 166 MHz, 64 MB RAM home PC. If you know what the problem is and how to fix it, great, I would like to hear from you! But even information on how to stop X11 (if it is X11) from loading would be appreciated. Any other ideas?
Thanks,
Michael Smith (mwdsmith@singnet.com.sg)
P.S. Please keep in mind that I have only been using Linux for three or so months, so I am not an expert at everything.
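On the chance that X is being respawned by the runlevel rather than by startx, one way to stop it (a sketch, assuming a Red Hat-style /etc/inittab; Mandrake's is laid out the same way) is to change the default runlevel from 5 to 3 so the machine boots to a text login:

```
# /etc/inittab - the initdefault line controls the boot runlevel
# 3 = full multi-user, text login; 5 = multi-user with X (xdm/gdm/kdm)
id:3:initdefault:
```

Typing `linux single` at the LILO boot: prompt gets a root shell from which the edit can be made; X can then be reconfigured before switching back to runlevel 5.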
Fri, 23 Jun 2000 04:43:01 +0530
From: "Dipankar Mitra" <sunres@vsnl.com>
Subject: how to use wav file on diskless linux.
hello,
Hello, I want to set up diskless Linux machines and play WAV files on them. Can anyone give me a step-by-step procedure for doing that?
Tue, 20 Jun 2000 14:24:48 +0100 (BST)
From: Paul Nettleship <paul_nettleship@yahoo.co.uk>
Subject: The Answer Guy - File formats!!!
Hello,
I was just having a quick look through your magazine, and the 'Answer Guy' sections really interested me and prompted this question...

I've always wondered about the file formats for exe and obj files. I guess there is all sorts of interesting data hiding away in there: symbol tables, mark-up info and God knows what else. Is there some standard for these, or is it completely compiler dependent?
Just out of interest, Paul.
[Well, it's not strictly a Linux question, but yes, there are [obviously] standards for executable and object files - on Linux, most binaries these days use ELF (the Executable and Linkable Format), which replaced the older a.out format. The compiler couldn't just throw together a load of rubbish now, could it? -Alex]
Mon, 19 Jun 2000 21:07:47 -0400
From: "Gary R. Cook" <grcook@erols.com>
Subject: Signaling application running in xterm window of mouse click event
If I am running an application in an xterm window
(e.g., xterm -e myapp) and I click on that window, making it my
active window, how can I notify my application of that event?
Thanks!!
Mon, 19 Jun 2000 11:16:34 +0200
From: "Angus Walton (EEI)" <Angus.Walton@eei.ericsson.se>
Subject: Grep
Hi,
I'm quite new to Linux, but I want to learn as much as possible.
Here's my question (not really a problem, it would just be
interesting to find out how to do this):
Let's say I do a 'finger' and heaps of users are spewed up onto the screen. I only want to see the users whose names begin with the text 'potatoe'. So, I do the command 'finger | grep "potatoe"'. But some users, for example tomatoe_man, are connected to the computer 'potatoe.shellaccount.mycomputer.com', which means that they come up as well. Without making finger hide the 'Where' column, how would I weed out these users?
Keep up the good work on the gazette.
Aengus Walton
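Since grep matches anywhere on the line, one way is to anchor the match to the first (Login) column with awk instead. A sketch, using canned finger-style output since field layout varies between finger versions - normally you'd pipe finger straight into awk:

```shell
# sample "finger" output (three columns of it, anyway)
sample='potatoe_joe  tty1   Jun 20  potatoe.example.com
tomatoe_man  pts/0  Jun 20  potatoe.shellaccount.mycomputer.com
potatoe_sue  pts/1  Jun 20  elsewhere.example.com'

# keep only lines whose FIRST field starts with "potatoe"
echo "$sample" | awk '$1 ~ /^potatoe/'
# prints the potatoe_joe and potatoe_sue lines; tomatoe_man is filtered out
```

In real use that becomes `finger | awk '$1 ~ /^potatoe/'`, with the regex anchored so the hostname column never triggers a match.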
Sat, 17 Jun 2000 22:10:27 -0500
From: Ivan Gauthier <igsys@telcel.net.ve>
Subject: linux crash
Dear Sir,

I would like to know if there is a solution for the following problem. When Linux crashes, due for example to an electric blackout, on the next boot it tries to repair the file system. (In this case the hard disk has two partitions: on the first the complete operating system is installed, including boot, home, root, etc.; the second partition is used for user program files and database data. Linux is Red Hat 6.0 and Mandrake 6.0 and 7.0.) Most of the time it cannot repair, and gives a message like:

** e2fsck cannot automatically repair file system. please do e2fsck manually without the -a or -p options. **
(type control-d to continue or type root passwd ..)

Here are my questions:

1. If one types control-d, nothing happens (well, Linux tries to repair, but it finishes as before).

2. When manually repairing the file system, a LOT of files are lost, including important ones like inetd.conf, and not only on this partition but on the second one as well. Is this normal? And what can be done?

This next question is not really a problem but an option I would like to have: do you know how to get the hourglass cursor on KDE after double-clicking an icon on the desktop (like Corel Linux does with KDE)?

Many thanks,
Ivan Gauthier
Venezuela
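For reference, the manual repair the boot message asks for looks roughly like this (device names are examples; substitute the partitions fsck complained about, and run it only from the maintenance shell, while the file systems are unmounted or mounted read-only):

```
# at the maintenance prompt, after typing the root password:
e2fsck -f /dev/hda1     # answer the repair prompts, or add -y to accept all
e2fsck -f /dev/hda5     # repeat for the second partition
reboot
```

Losing many files on every crash is not normal, though: recovered pieces should turn up in each partition's lost+found, and chronic damage like this more often points to failing hardware (disk, RAM, or a drive that lies about write caching) than to e2fsck. A UPS would sidestep the blackout case entirely.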
Thu, 15 Jun 2000 16:25:23 +1000
From: "Nick Adams" <unitedusers@yahoo.com>
Subject: Port 80 Telnet
Hello,
Quick question.
I want my machine to accept telnet connections on port 80. This would let me connect from behind my proxy at work. How do I do this?
Thanks,
Nick Adams
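One way to do that with the standard inetd setup of the time, sketched with an illustrative service name (`telnet2` is made up; note also that a web server can't share port 80 with this, and an HTTP proxy may still refuse non-HTTP traffic):

```
# /etc/services - give port 80 a second service name
telnet2         80/tcp

# /etc/inetd.conf - serve it with the ordinary telnet daemon
telnet2 stream  tcp     nowait  root    /usr/sbin/tcpd  in.telnetd
```

After editing, `killall -HUP inetd` makes inetd re-read its configuration.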
Wed, 14 Jun 2000 17:05:20 +0100
From: David Whitmarsh <david@sparkle.local>
Subject: Second X server and Redhat 6.2/gdm
Following up on Bob Hepple's tip on running a second X server, I tried to do the same on my Red Hat 6.2 box. I found that I could start a new X server on the command line, but it gave me only the basic X server screen (fine grey check with the X cursor) and no login screen. Same problem with Xnest.

I could, however, run two X servers at once by placing a second entry in the gdm.conf file and restarting gdm. It would be nice to only have the overhead when I want the second server, though. Any thoughts on how to get the login screen?
Regards,
David Whitmarsh
Tue, 13 Jun 2000 11:03:17 +0800
From: "michaelkwan" <michaelkwan@mdr.com.hk>
Subject: Sendmail Question
Hi,
I have set up Red Hat 6.1 with sendmail 8.9.3. All the clients use IMAP4 to connect to the server, using Outlook Express 4. The problem I found is that there will be several [imap] processes running for the same user. As a result, the user cannot receive or delete any mail in their mailbox; I have to kill the 'extra' processes, otherwise the mailbox stays read-only. Is there anything I can do about this?
Thanks!
Michael Kwan
Mon, 12 Jun 2000 11:32:28 -0300
From: Eduardo Spremolla <lalo@terminus.dtdantel.com.uy>
Subject: Pentium III boot problem
I have a Pentium III machine here, and when it boots, roughly 4 times out of 5 it locks up with the message:

387 failed: trying to reset.

It comes from the kernel's bug-checking code. Is there some issue with the PIII and this check, or did I get a faulty CPU?
Thanks in advance.
Eduardo Spremolla
Montevideo,Uruguay
Sat, 10 Jun 2000 01:12:02 +0200
From: "almighty" <mightyfredy@wanadoo.fr>
Subject: Graphics card setup
Hi, I'm a new Linux user, and I want to know how I can make Linux set up my graphics card (Intel(R) 810 chipset graphics driver PV1.1). It would be great if you could send me a solution so I can enjoy Linux. Thanks a lot.

P.S. Sorry for my English - I'm French.
Fri, 9 Jun 2000 10:20:33 -0700 (PDT)
From: "Timothy McPherson" <a9958@asl.bc.ca>
Subject: TERM Variable
Hi,
I'm hoping someone might have a solution for me. I have a Solaris system with a number of Wyse 60 terminals connected to it, and a Corel Linux PC on the same LAN as the Solaris box. I believe I have compiled the terminfo wyse60 entry correctly on the Linux PC (it did NOT have one originally). Basically, I took the output of the command "infocmp wy60" on the Solaris box and used "tic" to recompile it on my Linux PC. So far so good: the entry is now in /etc/terminfo/w/wy60, and "infocmp wy60" is identical to the Solaris one. I set my TERM variable to wy60, but it doesn't look too healthy through an xterm session, a console text login, or a telnet on either. Any ideas? Am I going about this the wrong way? Basically, if I can get this working properly, I can scrap Windows in favour of Linux on all the PCs on the LAN :) Thank you for any help you can offer.
-Timothy
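For comparison, the round trip itself is short (paths and behaviour as on typical systems of the time, not guaranteed). If the copied entry still misbehaves, the usual suspects are tic installing into ~/.terminfo rather than the system tree, or Wyse-specific line settings (8-bit mode, character set) rather than the entry itself:

```shell
# on the Solaris box: dump the entry in source form
infocmp wy60 > wy60.ti

# on the Linux box: compile it into the local terminfo database
tic wy60.ti            # may land in ~/.terminfo instead of /etc/terminfo
TERM=wy60; export TERM
infocmp wy60           # sanity check: should print the entry back
```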
Wed, 07 Jun 2000 12:11:48 -0500
From: Brian Finn <nacmsw@airmail.net>
Subject: Linux Webserver and AS/400 Database?
Hi,
I was just curious whether any readers have success stories about using a Linux server running the Apache web server as a front end for a DB2 database on an IBM AS/400. I'd like to know what solutions there are (or should be) for accessing DB2 data from a Linux box.
Thanks!
Tue, 6 Jun 2000 10:46:13 -0700
From: "Christine Rancapero" <crancapero@nationalsecurities.com>
Subject: hi
Do you have an issue covering the advantages and disadvantages of migrating from a Linux mail server to MS Exchange? Your help is gratefully appreciated... thank you very much =)
Mon, 05 Jun 2000 14:34:43 -0500
From: Noah Poellnitzh <noah.poellnitz@ssa.crane.navy.mil>
Subject: linux booting
I was wondering if you have ever heard of anyone booting up a system with a Linux boot floppy where the system lacks the ability to boot from CD, and then, after installing Linux, using the CD drive to install another operating system, which at the same time would write over the Linux system.
Sun, 4 Jun 2000 09:09:23 +0100
From: "Graham" <smiffy10@email.com>
Subject: mother board help desperately needed
Hello there,
I have added a video card and a sound card to a GMB-P56SPC motherboard (ESS, I think). Both sound and video were onboard, with SiS 5596 video and SiS 1868 sound chips. I have managed to disable the sound chip with jumper JP13, but unfortunately I cannot find a video jumper. Please help: how do I disable the onboard video?
Many thanks in advance
Sat, 03 Jun 2000 12:04:54 -0400
From: James Dahlgren <jdahlgren@netreach.net>
Subject: modprobe: can't locate module ppp-compress-21
First I want to thank you and all the people at Linux Gazette for all the fine work you are doing. Many of the services running on my Linux box wouldn't be running without the help I've gotten from your fine site.

I'm assuming this is trivial, but it still bothers me. I've used Slackware, Red Hat, and Mandrake distributions of the 2.2.x kernel. I'm not sure which pppd version Slackware and Mandrake had, but Red Hat has pppd 2.3.7. I've used the 2.2.5, 10, 13, 14 and 15 kernel revisions. With all of them I get error messages when pppd starts:

Jan 15 17:54:40 paxman modprobe: can't locate module ppp-compress-21
Jan 15 17:54:41 paxman modprobe: can't locate module ppp-compress-26
Jan 15 17:54:41 paxman modprobe: can't locate module ppp-compress-24

It doesn't matter if I'm calling my ISP or a friend's Linux box, so I'm pretty
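Those messages are harmless noise: pppd asks the kernel for each optional compression method, and modprobe reports the ones it can't find. Assuming a Red Hat-style modular kernel, the usual cure is a set of aliases in /etc/conf.modules pointing the requested names at the real modules:

```
# /etc/conf.modules - map pppd's compression lookups to the real modules
alias ppp-compress-21 bsd_comp      # BSD-Compress
alias ppp-compress-24 ppp_deflate   # Deflate
alias ppp-compress-26 ppp_deflate   # Deflate (alternate protocol id)
```

If the modules aren't built at all, `alias ppp-compress-21 off` (and likewise for 24 and 26) silences the probe instead.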
Fri, 2 Jun 2000 11:48:21 -0600
From: "Doug" <doug@springer.net>
Subject: IPX, RH 6.2, socket: Invalid argument
I am a sorta newbie on Linux. I have Red Hat 6.2, am running
Samba and DHCP and ftp server on a peer-to-peer Windoze network
with ethernet. All of the above works fine. I am trying to get
netbios working over ipx for a port I am doing from DOS. My
initial install was RH 5.2, then I upgraded to 6.2. My linuxconf
does give me a Segmentation fault (core dumped) message when I
try to run it, which it didn't do before I upgraded to 6.2. My
main problem is this:
When I try to run 'ipx_interface add eth0 802.3', I get the
following:
'ipx_interface: socket: Invalid argument'
Any clues as to what is going on? How to fix it? Places to go for more info on netbios over IPX?
Thanks,
Doug
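That particular error from ipx_interface usually means the running kernel has no IPX protocol support at all, so a first check (assuming a modular kernel; otherwise IPX has to be compiled in under the kernel's networking options) might be:

```shell
# load IPX support, then retry the interface command
modprobe ipx
ipx_interface add eth0 802.3
cat /proc/net/ipx_interface     # should now list eth0
```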
Fri, 2 Jun 2000 11:48:21 -0600
From: "Allen Tate" <allendtate@yahoo.com>
Subject: Getting Linux to see my network card during bootup
I have recently installed Phat Linux (which by the way is an excellent
Linux distribution for beginners) and for the life of me, I can't find
which boot script I need to edit to get the system to see my network
card during bootup. Can someone point me to the correct boot script or
the correct HOWTO file? The strange thing is that the KDE System tool
can see the Ethernet card and tells what the IRQ and I/O are. Please
email me privately at
allendtate@yahoo.com and I'll explain
in better detail about what I'm talking about.
Never mind, I figured it out. I transposed 3c509 for 3c905. That didn't work so I used the 3c59x driver and it came right up. I love it when I figure it out on my own.
Fri, 2 Jun 2000 11:48:21 -0600
From: Tom Russell
Subject: How to run Windows programs on Linux
Whether or not Microsoft is successful in their appeal to the US higher courts on technical grounds, the facts remain that, after a lengthy and involved legal process, they were found guilty first of breaking the laws of their country, and second of using their monopoly powers to hold up innovation in what is still a developing industry.
As a believer in religious principles, and the laws and morals of my country, I find it unethical and personally abhorrent to continue to use or recommend products and services produced by Microsoft.
I recently read that Corel are using a Linux Windows emulator to enable their Windows office suite to operate successfully in the Linux environment without the need for any Microsoft products or services.
Could you please let me and other concerned readers know how to do this for other non Microsoft products, so we are no longer forced to be immoral and unethical by association with a guilty party.
31 May 2000 18:24:10 +0200
From: Jan-Hendrik Terstegge <webmaster@jhterstegge.de>
Subject: Re: Linux Gazette - German Translation [LG #54, General Mail]
Hi guys!
In Linux Gazette 54, under General Mail, you printed my e-mail of 29 Apr 2000 concerning my question about a German translation of Linux Gazette. Today the new issue came out, and I have made a translation of the article "Building a Secure Gateway System". Hoping that more Linux folks will help me translate more articles, I have put it on my web page. The German Linux Gazette mirror page can now be found at http://www.linuxgazette.de.

I hope that after my call, others will help me get more and more articles online.
[We also have a Spanish translation now at http://gaceta.piensa.com. -Ed.]
Mon, 26 Jun 2000 08:49:51 -0700
From: Heather Stern <star@betelgeuse.starshine.org>
Subject: Kudos to our translators
Pass my thanks to our translators everywhere. It's a tough and usually unsung job.
[And as an amateur translator myself, I'll add that it's quite time-consuming. -Ed.]
Wed, 14 Jun 2000 01:05:46 +0300
From: Charles Kibue <ckibue@mailafrica.net>
Subject: Thanks!
Hi! Here's from a very happy Linux user in Kenya... Your magazine is very informative as well as interesting to read. I got some issues on my disk from the SuSE installation of the LDP, and I sure look forward to reading more issues from you. Thanks to you all, and you sure will be hearing a lot more from me. Cheers.
Composed on a Digital HiNote VP562 Series Laptop...powered by... SuSE Linux 6.3!
Tue, 20 Jun 2000 17:22:37 +0100
From: Steve Emms <sde@linuxlinks.com>
Subject: Who controls the Linux Media ?
I run LinuxLinks.com, a Linux portal, and recently we added a personalised calendar service to our web site. We submitted an article to LinuxToday (owned by internet.com) and it was published, only to be pulled almost immediately. The reason given was that website enhancements are no longer news. However, a similar service offered by another website was published. And who owns that website? Why, internet.com of course.

OK, this calendar isn't state of the art - but it is a free service, and it does complement the existing facilities on the site. And sure, it is up to LinuxToday what they think is newsworthy and choose to post. But wait a minute: this sort of thing has made the news before - LinuxStart announced a similar calendar service. Take a look at

http://linuxtoday.com/news_story.php3?ltsn99-07-13-015-10-PR

What's the difference? Well, LinuxStart is owned by internet.com.

This opens up a number of questions about how we judge the news we read. Linux is becoming big business, and there are vested interests. Web sites are merging and being taken over by large conglomerates. Who determines the impartiality of the news we read? Who determines what is news and what is advertising?

LinuxToday is one of the major daily Linux news sites, and they have determined that enhancements to major Linux websites like LinuxLinks are not important. But LinuxLinks is independent: it isn't owned by internet.com, and it isn't owned by VA Linux. Are it and sites like it being penalised because they don't have a monopoly in the Linux media? And is this really in the spirit of the Linux movement?
Wed, 14 Jun 2000 23:37:41 +0530
From: Vikrant Dhawale <vkdhawale@vsnl.com>
Subject: subscription info.
I have read the current issue of the Linux Gazette and found it very interesting and informative. Is it available as an e-mail newsletter which I can subscribe to and receive by e-mail? Reading it online wastes a lot of online time, as it is spread over many pages.
[See the LG FAQ, questions 2-4. The Gazette is too big to send via e-mail. To minimize online time, download the FTP version of each issue and read it from your hard drive rather than via the web. -Ed.]
Sun, 4 Jun 2000 19:24:00 +0530 (IST)
From: LUG Account <ilug@hbcse.tifr.res.in>
Subject: Claims of First Indian OS: Aryabhatt Linux
Press Release
in response to articles on `Aryabhatt Linux' as
the first Indian Operating System.
We introduce ourselves as the Linux Users Group, Bombay Chapter (ilug-bom.org.in). We are a non-profit voluntary organisation actively involved in promoting open source software. Our activities include mailing lists for users, training, workshops, open source projects etc. Our group constitutes more than 400 man years of Linux experience vested in its members.
We draw your attention to the following articles which have appeared in the publications mentioned below.
1. "Made-in-India Linux to go global", Express Computer dated 29/05/2000, page 1.
2. "Linux Technologies launches Aryabhatt Linux", Times Computing dated 31/05/2000 page 5.
3. "Aryabhatt Linux", PC Quest dated June 2000 page 174.
... and several others.
In these articles there are numerous inconsistencies, false claims and trademark violations made by the company Linux Technologies Pvt. Ltd. We have evaluated their Linux distribution Aryabhatt Linux and compared it with other currently available distributions such as Red Hat, SuSE, Mandrake etc. Listed below are some of the findings:
1. RedHat Trademark Violation?: The packaging of Aryabhatt Linux distribution mentions that it is "Based on RedHat Linux 6.1", but fails to comply with RedHat's Licensing policy, as is evident from the following excerpt taken from http://www.redhat.com/about/trademark_guidelines.html
" C. You may state that your product "is based on Red Hat® Linux X.X," but you must do so in a fashion that indicates that "Red Hat Linux" is not the name or brand of your product and that Red Hat is not a source or sponsor of your product. Some guidelines to follow on this point include:
" You must clearly indicate how your product differs from Red Hat® Linux. This includes listing the packages that you deleted from Red Hat® Linux and those that you added to your product, as well as indicating any and all other changes you made. This information must be clearly and prominently presented in all packaging, advertisements and other marketing materials for your product, in a typeface no smaller than the typeface you use for the words "Red Hat® Linux."
" The use of "Red Hat® Linux" must be in a typeface (which includes appearance, size and color) no larger than one-third the size of the typeface used for the name of your product.
" The typeface you use for the words "Red Hat® Linux" must be the same typeface you use for other written text to describe your product. You may not use a unique typeface for "Red Hat® Linux" in order to set it off from the other text included on your product.
" You may not do anything at all to state or imply that your product is an official product of Red Hat, Inc. and may not do anything else to create confusion in the market between your product and the products of Red Hat, Inc.
" You must include the following statement in a prominent place in your product packaging and in all marketing and promotional efforts for your product:
"Red Hat® is a registered trademark of Red Hat, Inc. This product is not a product of Red Hat, Inc. and is not endorsed by Red Hat, Inc. This is a product of [name of publisher] and we have no relationship with Red Hat, Inc."
2. Linux trademark violation?: Linux is a registered trademark of Linus Torvalds who was the original developer of the Linux kernel. He has permitted use of this trademark subject to an explicit mention of his ownership. Neither their web site nor the product packaging mentions this fact.
3. It is not an original Indian Linux distribution: The Aryabhatt Linux is a distribution based on another Linux distribution viz Red Hat Linux 6.1 as mentioned on the product packaging.
4. Misrepresentation of License: Most of the programs distributed in Red Hat Linux are licensed under the GPL (General Public License). This licensing policy permits anybody to go through the source code and modify it as per their requirements. It also explicitly requires the GPL to be mentioned clearly. Aryabhatt Linux does not seem to have been licensed under the GPL, since the GPL copy on the CD is issued by Red Hat and not by Linux Technologies Pvt. Ltd.
5. The GPL requires distribution of source code as well as free download of source code: Under the GPL it is mandatory to distribute source code FREE OF COST, either on CDs or on FTP sites. To date, Linux Technologies has not made any provisions for the same. The company also does not have any FTP sites.
6. The Graphics Driver for SiS6215: Graphics Drivers for SiS6215 card were developed by SuSE GmbH (suse.com) and XFree86 (xfree86.org) and copyrighted under GPL. Linux Technologies Pvt. Ltd. falsely claims to have developed the same.
7. Misuse of Linux Logo: The Linux logo which depicts a penguin has been affectionately referred to for a long time as Tux by the entire Linux community. It also appears on most of the web sites, publications and articles pertaining to Linux. Other Linux distributions also feature Tux on their packaging. By writing Peggy across the penguin and registering it as their own trademark, Linux Technologies Pvt. Ltd. has attempted to cash in on the popularity of Tux and deeply hurt the sentiment of Linux users.
8. The picture on product packaging box: The product packaging box as well as the step by step User Guide of Aryabhatt Linux depict a picture of peggy surrounded by networked computers. This picture was designed and copyrighted by Jassubhai Digital Media and was published in the August 1999 issue of CHIP magazine, CHIP Linux special and Network Computing.
9. Most of the applications in any Linux Distribution are developed by GNU (www.gnu.org). Linux Technologies Pvt. Ltd. does not acknowledge the same.
10. Though Aryabhatt Linux is claimed as "customized for the Indian user", as on date, it does not support any Indian Language. And the claimed hardware support for the locally assembled hardware already exists in other distributions of Linux.
11. On going through the step by step guide, we found numerous inconsistencies and wrong information.
As Indians we would love to have an Indian Linux distribution, but we are thoroughly disappointed and disgusted with Aryabhatt Linux's blatant attempt to hijack the efforts of the open source and free software community. This ruthless exploitation of free, open source software will tarnish the image of the Indian software industry. We therefore urge you to set the record straight as regards the claims of Linux Technologies Pvt. Ltd., if you have published articles related to Aryabhatt Linux. For others who have not published any reports, please consider this as information.
This message is released in public interest, and in the interest of Open Source Software by the following active members of the Indian Linux Users Group, Bombay Chapter:
Aditya Kulkarni <adityak@linuxfreak.com>
Apurva Shah <apu@freeos.com>
G. Sagar <sagarg@bol.net.in>
Kiran Jonnalagadda <jace@radiolink.net>
Kishor Bhagwat <kishorbhagwat@usa.net>
Mitul Limbani <mitul@mitul.com>
Nagarjuna G. <nagarjun@hbcse.tifr.res.in>
Parag Mehta <linuxadmin@softhome.net>
Philip Tellis <philip.tellis@iname.com>
Prakash Advani <prakash@freeos.com>
Prakash Shetty <info@maxlinux.net>
Rajen Parekh <rajen@softhome.net>
Rakesh Tiwari <rakesh_tiwari@jasubhai.com>
Sandesh Rao <sandeshr@vsnl.com>
Terrence D'Souza <jtdesouza@yahoo.com>
Vikas Pawar <vpawar@usa.net>
Contact Address: GNU/Linux User Group of India, Bombay Chapter ilug@ilug-bom.org.in http://www.ilug-bom.org.in
Mailing Address: Dr. Nagarjuna G. Homi Bhabha Center for Science Education TIFR, V.N. Purav Marg, Mankhurd, Mumbai 400088 INDIA Phones: 091-22- 556 7711, 555 4712, 555 5242 Fax: 091 - 22 - 556 6803.
It sounds like this company is violating the GPL in the use of some FSF-copyrighted software. If that is true, we can try to enforce the GPL. Would you please double-check for us that FSF-copyrighted programs are included on their CD and on their ftp site, and send us the names of the specific programs you identified? We also need to know the URLs and the official name of the product.
Also, could you tell me how to contact them? We need the company name, snail addresses, email addresses, etc. With this information, we can have lawyers contact them to object.
Meanwhile, one more item you can add to your list is that they're calling the whole operating system "Linux". Linux is actually the kernel, one of the important pieces of the system. That is what Linus wrote in 1991.
At that time, we had been working on the GNU operating system for almost a decade, and it was almost complete enough for self-hosting; the only major piece missing was the kernel. Combining Linux with GNU produced a complete free operating system, versions of which are now integrated by Debian, Red Hat, and others. Everyone is free to redistribute it, but they ought to call it GNU/Linux and give us a share of the credit. It isn't a legal requirement, but it is the right thing to do.
See http://www.gnu.org/gnu/linux-and-gnu.html for more explanation.
Regarding the idea of an "Indian operating system": I think that kind of nationalism is not a good thing for world peace.
But India should not feel left out. The GNU system has been an international project since the very beginning; no single country originated it. Humanity originated it. And India is part of humanity.
Contents:
The July issue of Linux Journal is on newsstands now. This issue focuses on Science & Engineering.
Linux Journal has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue75/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/index.html.
For Subscribers Only: Linux Journal archives are available on-line at http://interactive.linuxjournal.com/
SOT has released their Best Linux operating system to Russian-speaking users for the first time. The new version is also available in English, Swedish and Finnish, and includes the brand new XFree 4.0, kernel version 2.2.14 and integrated office solution Star Office(tm) by Sun Microsystems.
The Best Linux 2000 boxed set includes some features never before seen in Linux - like lifetime installation support. The boxed set also contains a 400-page manual, an installation CD, a source code CD, a Linux games CD and a software library CD, providing an easy way for consumers and business desktop users alike to start using a complete Linux system.
Founded in 1991, SOT is based in Finland, where it builds and maintains the Best Linux distribution. SOT counts among their customers large organisations such as Nokia, Sonera and the Finnish Board of Education.
OREM, UT-June 6, 2000-Caldera Systems, Inc., today announced that free support is available for OpenLinux users in Europe. Registered OpenLinux users will receive free 30-day phone and 90-day e-mail support in German, French, Italian and English.
In Germany, registered OpenLinux users may dial 030 726238 88 or e-mail support@caldera.de. Registered users needing support in English, French or Italian may dial +353 61 702033 or e-mail europe.support@calderasystems.com.
Space is limited. For more information or to register for the tour, visit http://www.calderasystems.com/partners/tour .
LuteLinux at the Technical Certification Expo 2000 revealed not only their new LuteLinux Lite software, but also their training and certification plans . In addition to offering certification for various levels from user to specialist, LuteLinux will also offer Trainer Certification. This will include training on teaching and public speaking, classroom techniques for beginner to advanced users, as well as common classroom scenarios and bridging the gap between the classroom environment to the real world. Their certification not only qualifies you as a LuteLinux trainer, but many of the techniques and lessons are easily transferable to other training environments.
LuteLinux is taking a new approach to certification. The company recognizes that multiple-choice tests, and even most simulation environments, allow for only one right answer, and that most examinations don't respond to non-standard approaches to a problem: something that is required on a daily basis in the real world. LuteLinux is fixing that, explains Mr. Daunheimer: "Our on-line LuteLinux simulation responds to multiple approaches to a problem. There is more than one answer to the questions, just as there is more than one way to solve a problem in the real world. There will be an ongoing assessment of responses during the examination, and the questions which are presented are chosen by a system that takes into account your last response." Although it's easy to talk about results, LuteLinux guarantees them: if any client is not satisfied with their training, or feels unprepared to apply new skills on the job, the company will retrain them for free. Examinations will be available both in-house at one of LuteLinux's training centers and on-line via its web site.
PITTSBURGH, PA - June 19, 2000, /PRNewswire/ -- Advanced Computer & Network Corporation has received Red Hat Linux 6.2 certification and is now included on Red Hat's Hardware Compatibility List for RAID storage systems. http://www.acnc.com/product_jetstorii_lvd.html.
SuSE, the international technology leader and solution provider in open source operating system (OS) software, has opened their new Latin America headquarters office in Caracas, Venezuela. Xavier Marmol, well known to the Latin American Linux Community, has been chosen to run SuSE's Latin America presence.
Xavier Marmol is highly regarded in the Latin American Linux community. He was previously the Network Administrator at the University of Zulia Academic Network. An active Linux advocate since 1995, he successfully implemented the Linux OS on the university's network of several hundred computers. As founding President of VELUG (Venezuela Linux User Group) in 1997, he led the initiative to showcase VELUG's achievements at the Latin American LinuxWeek, the first Spanish-speaking Linux event. Marmol was also the first content manager of the Spanish LinuxStart.com Web site.
SuSE will release the first fully engineered version of Linux for the Apple PowerPC, IBM RS/6000 and Motorola PReP in mid-June. [It was unclear to LG at press time whether it has been released yet. -Ed.] In addition to databases, firewall scripts, web servers and mail programs, there are also such interesting applications as the video editing system Broadcast 2000 and the powerful image processing program GIMP.
Of great interest to Mac users and professionals is the inclusion of the MOL (Mac-on-Linux) virtual machine in the distribution, making it possible to start MacOS within Linux and switch from one system to the other. In addition, the Mac user also has the option of using standard PCI hardware, such as network cards or TV cards.
Vancouver, BC, June 20, 2000 - Stormix Technologies Inc. and StarNet Communications Corp. today announced an agreement to include a fully-functional copy of StarNet's X-Win32 PC X server Version 5.0 with Storm Linux 2000 Deluxe Edition. This allows Windows workstations to connect to Linux servers. Storm Linux 2000 customers will receive a free one-year license for X-Win32. This product is normally listed at approximately US$200.00.
"Libre" Software Meeting #1 (Rencontres mondiales du logiciels libre), sponsored by ABUL (Linux Users Bordeaux Association) |
July 5-9, 2000 Bordeaux, France French: lsm.abul.org/lsm-fr.html English: lsm.abul.org
|
Linux Business Expo (co-located with COMDEX event) |
July 12-14, 2000 Toronto, Canada www.zdevents.com/linuxbizexpo
|
O'Reilly/2000 Open Source Software Convention |
July 17-20, 2000 Monterey, CA conferences.oreilly.com/convention2000.html
|
Ottawa Linux Symposium |
July 19-22, 2000 Ottawa, Canada www.ottawalinuxsymposium.org
|
LinuxWorld Expo |
August 15-17, 2000 San Jose, CA http://www.linuxexpo.com/
|
IEEE Computer Fair 2000 Focus: Open Source Systems |
August 25-26, 2000 Huntsville, AL www.ieee-computer-fair.org
|
Linux Business Expo (co-located with Networld + Interop event) |
September 26-28, 2000 Atlanta, GA www.zdevents.com/linuxbizexpo
|
Atlanta Linux Showcase |
October 10-14, 2000 Atlanta, GA www.linuxshowcase.org
|
ISPCON |
November 8-10, 2000 San Jose, CA www.ispcon.com
|
Linux Business Expo (co-located with COMDEX event) |
November 13-17, 2000 Las Vegas, NV www.zdevents.com/linuxbizexpo
|
USENIX Winter - LISA 2000 |
December 3-8, 2000 New Orleans, LA www.usenix.org
|
Linux Lunacy Co-Produced by Linux Journal and Geek Cruises |
October 21-28, 2001 Eastern Carribean www.geekcruises.com
|
May 31st, 2000. Mexico City, Mexico. Bufete Consultor de Mexico - Piensa Systems has announced the launch of a new web site entitled La Gaceta de Linux, a Spanish version of the well-known electronic magazine Linux Gazette, effective this June 1st.
"We are searching for volunteers to enrich and enhance La Gaceta de Linux in every aspect."; said Monique Ollivier, image and content managing editor of the Spanish edition, "in first place, we need lots of support to translate, in a monthly basis, the original articles in foreign languages; mostly English; to ours, and maybe even more important, we want La Gaceta de Linux to be an open forum to whomever wants to write about Linux or just publish their work."
"After the site launch, we will add more features and services to La Gaceta de Linux, providing special benefits to the ones that contribute to the site with a bit of their time as well as additional services to the general public."
Felipe Barousse, CEO and General Director of Bufete Consultor de Mexico, S.A. de C.V.: "We want all these free and open technologies, and the broad experience the global Linux community has acquired, to be leveraged to the maximum by Spanish-speaking individuals and companies across the globe."
"Having a tool like La Gaceta de Linux and, of course, the web, allows us to very efficiently promote Linux and all related technologies to be used in real life industry, businesses and corporate use; for instance; in Latin America, most of small to medium sized businesses do require IT services and systems that really do work, and work well but, not at the very high cost of the current "systems" that are well known to all company owners ... It is our experience that Linux is an excellent alternative."
"Another very important readers and users group for La Gaceta de Linux is the academic one. That's where tomorrow's IT people are and, we have to let them know about Linux and what this new technology can provide. Educating people at every level is the key and that is the most important goal of La Gaceta de Linux."
About Bufete Consultor de Mexico, S.A. de C.V. (BCM): BCM, founded in 1994, is a private Mexican information technology consulting firm. BCM has wide experience in the implementation of mission-critical systems for customers in various sectors across Latin America. "Piensa Systems" and "piensa.com" are BCM registered trademarks.
[The original press release read "the official Spanish version". I changed it to "a Spanish version" to make clear that LG has no affiliation with BCM and does not wish to endorse one Spanish translation over another. Nevertheless, we are grateful to BCM for making LG accessible to the Spanish-speaking, as we have long desired. -Ed.]
"Adomo wants a place in your home. Not on top of your TV, or as a firewall or gateway hiding in a closet. What Adomo wants, is to fill your home with a network of low-cost, easy-to-use information appliances. All over the place. And they will all have Linux inside..." This is from LinuxDevices' technical overview paper about Adomo.
An Adomo spokesman calls it "kind of like a Cobalt Qube for the home".
[This reminds me of the children's science-fiction story, Danny Dunn and the Automatic House, as well as Ray Bradbury's story "There Will Come Soft Rains", and Bill Gates' home entertainment system/art gallery. -Ed.]
Compaq announced it has ported the Linux operating system to its iPAQ handheld computer. The goal of the port, and of the supporting program, is to give developers and researchers access to the Linux-based source code for the device so they can explore applications and uses for handheld computing.
6th June 2000, SUPERCOMM 2000, Atlanta, USA. Axtar Limited, a UK-based developer of programmable communications solutions for public network operators and service providers, has announced OneSwitch, the industry's first standards-based Central Office programmable telephony switch to use both the Red Hat Linux operating system and the compactPCI (cPCI) form factor. Value-added communications services that can be supported by the OneSwitch include: web-based call centre services, personal numbering, pre-paid calling card, Internet Call Waiting services and 1xxx services. The product will start shipping Q3 2000.
SANDY, UTAH, JUNE 23, 2000 - Linux NetworX, Inc., a provider of large-scale cluster computer solutions for Internet, industry and research fields, announced the launch of its new Web site (www.linuxnetworx.com). The site provides information about the company's products and services as well as other useful information about computer cluster technology.
Individuals who browse the site will find it updated with information concerning cluster computer solutions. Along with extended company information and event calendar, the site includes an informational cluster tour and a comprehensive newsroom containing computer cluster news.
Principal products include ClusterWorX Hardware and Software Control, a hardware-based cluster management system controlling up to thousands of nodes, independent of specific motherboards or chipsets.
Santa Cruz, CA (June 21, 2000) The Tarantella Division, an independent business unit of The Santa Cruz Operation, Inc., today announced that FreeDesk.com has chosen Tarantella software to replace Citrix MetaFrame as the key technology to centrally manage and deliver applications via the web. FreeDesk.com is switching to Tarantella to get better application performance over the web.
I am the editor and manager for a free local publication entitled Atlanta Linux Newsletter. We have been freely distributing this publication for over a year. In that time, we have increased our distribution from 500 to 5,000, and we are now increasing that number to 10,000.
The publication is distributed throughout the Atlanta Area. This refers to Downtown, Midtown, Buckhead, as well as some of the surrounding areas: Roswell, Decatur, Alpharetta, etc. Our new channel for distributing our newsletter will be with Computer User magazine. In addition, we give copies of the Newsletter to all of our Customers as well as any User Group Meetings, seminars and showcases that we attend.
We have been working on our content to focus on the new Linux community. In the past, our customers, and the Linux community at large, have been hackers and hobbyists. However, with the new surge of Linux users, we are beginning to focus on the business solutions side of Linux as well as the novice, and we aim our content at that audience.
Our advertisers consist of Linux companies, design companies, web companies, etc. These advertising supporters are assisting in the promotion of the newsletter whether in printing costs, distribution costs, etc.
-Kate Cotrona, Senior Editor & Manager, Atlanta Linux Newsletter
http://www.linuxgeneralstore.com (Click on the logo to enter, then choose "Newsletter" from the menu.)
DENVER (June 14, 2000) -- Kaivo, Inc., today launched the first vendor-neutral marketplace for Open Source products and services. Located at kaivo.com, the Kaivo site is the only service designed to connect IT executives interested in the Open Source revolution with vendors who can design, build, and maintain Open Source solutions.
The Kaivo.com Open Source Marketplace features three primary elements:
In addition to its Open Source Marketplace, Kaivo will deliver professional services consulting and education and training programs to users of Open Source solutions.
Kaivo is the ancient Finnish word for "source".
While Cosource, SourceXchange and Kaivo all have common elements, Kaivo's focus and its audience are different from the others. Both Cosource and SourceXchange excel at helping manage the development process of custom Open Source applications by bringing project managers and development talent together.
Kaivo, on the other hand, is about delivering the full world of Open Source into a corporate setting. Our end-user audience, IT executives, is assumed to be not as hands-on technical as the primary audience of the other sites.
We believe that our market has a need to understand what Open Source solutions exist (in contrast to proprietary models) and desires a simplified channel in which to procure those solutions. So ours is a market place for software, hardware, services and solutions. Kaivo is also designed as an educational site.
In many ways, Cosource, SourceXchange and Kaivo are complementary.
Concord, Mass., June 1, 2000 - API (Alpha Processor, Inc.) announced the UP1100, the latest addition to its Ultimate Performance Series motherboards. The UP1100 offers Alpha Linux developers a complete, cost-efficient, entry-level Alpha board for Beowulf clusters, Web servers, development systems and rendering solutions.
API's high I/O and memory bandwidth technology, combined with the UP1100's new features and the open source Linux software, enables system integrators to build high-performance, scalable and reliable systems. The low profile UP1100 features the Alpha 21264 processor on the UP1100 motherboard, allowing the overall cost of systems to be reduced. This provides research institutions, computer graphics companies and enterprises a cost-effective system for powering compute-heavy applications.
Doubling disk I/O performance over the UP1000, the UP1100 uni-processor planar design includes on-board integrated sound and Ethernet on a standard ATX form factor, providing a more robust Alpha solution for developers. The on-board Ethernet and sound preserves maximum configurability of system PCI I/O slots, simplifies cluster configuring and lowers the overall solution cost.
The UP1100 will begin shipping in July.
Concord, Mass., and Bristol, United Kingdom, June 1, 2000 -- API (Alpha Processor, Inc.), a leading architect of high-performance solutions for high-bandwidth and compute-intensive applications, today announced its collaboration with Quadrics Supercomputers World (QSW), a leading provider of supercomputer technology in Europe, to develop high-performance supercomputers for Linux. QSW now can offer customers high-performance scalable supercomputers with the flexibility to support a wide range of parallel programming models on the Linux platform.
Using API's 64-bit platform and the Linux open-source operating system, QsNet, a high-bandwidth, ultra low-latency interconnect for commodity SMP nodes, offers some of the highest possible system interconnect performance and scalability available. The solution, based on QSW's third-generation "Elan and Elite" ASICs and API's UP2000 Ultimate Performance Series motherboards, consists of a network interface and a high-performance multi-stage data network. The system is managed using QSW's Resource Management System.
TCS' Linux Research and Development facility will be located in Round Rock, Texas. Over the course of several months, TCS will recruit highly skilled employees to assist with the development of Linux related products for existing and new clients located worldwide, including Dell. TCS' recruitment effort will include an aggressive outreach to Texas universities and from within the local community.
Available exclusively at the TCS website (http://www.tcs.com), the test harness and suite, called TAAL (Testing And Analysis tooL), benefits businesses and consumers who use Red Hat Linux 6.X by evaluating the Linux operating system.
TCS is a software technology consultancy company that provides information technology and management consulting services to organizations in over 50 countries across the globe.
Ottawa, Canada - June 27, 2000 - Rebel.com Inc. announced its intention to adopt Transmeta's Crusoe processor family in a future line of residential and small business gateways that will add to its NetWinder OfficeServer line of products.
BASCOM's Open Source Equipment Exchange will match those donating computer equipment with open source developers in need.
The TERMinator is a glossary of PC technical terms.
News articles:
Linux Graphics Programming with SVGAlib is a book that shows both beginners and advanced users how to make graphics applications without X. The URL also features other books on Linux/Open Source products.
Firstlinux has added five articles to its collection of overviews titled "I've installed Linux: What Next?" New topics include MP3, games, scientific/mathematical programs, PIMs (personal information organizers), and CD writing. The site also has a web-based personal calendar in 13 languages. There is also a news site, FirstLinux Network News.
The Maximum RPM book (version 2) is available for download at www.rpm.org in PostScript format. This is a work in progress.
The Linux Security Knowledge Base is SecurityPortal.com's collection of, um, Linux security articles. Writers and translators are needed. All documents are under the GNU Free Documentation License.
Magic Software subsidiary Access Data Corporation will deliver a comprehensive public safety records management solution for all agencies within the State. The solution will include a centralized database of criminal activity to be created using statistical and investigative information. One hundred installations are expected to take place over the next two years.
ITsquare.com has launched Linux Square, a web application to help companies find serious, reliable Linux development firms.
Server-Based Java Programming by Ted Neward is a practical guide which teaches the fundamental concepts of server-based Java. On-line samples are at http://www.manning.com/Neward3/Contents.html and http://www.manning.com/Neward3/chapters.html
Funny articles from Humorix. June's features: corporations buy up almost all the 2-letter country domains; Windows vs Linux holy war in Yakima, Washington; banner ads infest Linux; how Microsoft's anti-piracy policy (not including a Windows CD with new computers) will cause more piracy; "Won't somebody please think of the Microsoft shareholders' children?"; a computer survives the Blue Screen of Death!; and who designed those Blue Screens anyway? (Humorix via Linux Today)
New Warez Distribution Addresses Ease of Use Issues (Another funny story from Segfault via Linux Today)
[Adults only] Linux Loving Sluts must be Linux's first porn site. Scantily-dressed women sport the Tux logo on their clothing and tattoos. Captions read "powered by Linux", "penguin power", "sexy chicks choose Linux", and Linus's oft-cited quote, "Software is like sex--it's better when it's free."
Berlin, the windowing system that's not X-Windows has released version 0.2.0 after a year of work. Download it at http://download.sourceforge.net/berlin/Berlin-0.2.0.tar.gz. Licence: LGPL.
MILL VALLEY, CA - May 15, 2000 - With this release, Stalker expands the number of supported Linux architectures: besides the "regular" Intel-based systems, CommuniGate Pro can be deployed on PowerPC, MIPS, Alpha, Sparc, and now StrongARM processors running the Linux(r) operating system.
The highly scalable messaging platform can support 100,000 accounts with an average ISP-type load on a single server, and CommuniGate Pro's unique clustering mechanisms allow it to support a virtually unlimited number of accounts. For office environments and smaller ISPs, CommuniGate Pro makes an ideal Internet appliance when installed on MIPS-based Cobalt Qubes(r) and, now, Rebel.com's NetWinder(r) mini-servers.
Key Features: full redundancy and load balancing on clusters, over 18 platforms supported, IMAP/HTTP access to mail including unique IMAP multi-mailbox features, personal web page publishing, mailing lists with web searching, web administration, anti-spam features, etc.
A free trial version is available at http://www.stalker.com/CommuniGatePro/.
LINDON, Utah - June 5, 2000 - Lineo, Inc. today began shipping Embedix SDK for x86, a software development kit that simplifies the development of embedded devices and systems. This tool set allows developers to include only the components of Linux and other software needed for the specific solution at hand. Embedix SDK is designed to reduce the system requirements, development time and overall cost of deploying embedded solutions.
Embedix SDK provides the unique tools and technologies necessary for deploying Linux across the full range of embedded devices and systems, from tiny microcontrollers through multidisk backplane servers providing high availability services. Embedix SDK couples the benefits of the Open Source Linux community with Lineo's embedded tools, technologies and professional services.
Embedix SDK is available immediately for $4995 for an initial development seat, which includes a one year upgrade and maintenance agreement. Multi-user licenses are also available.
SAN FRANCISCO - JavaOne Conference and Exhibition, June 5, 2000 - Lutris Technologies, Inc. announced the first release of Enhydra Enterprise code to the developer community. The release of the product source code is an important step in the development of the Enhydra Enterprise application server and represents the first availability of an enterprise-level, Java/XML open source application server. Enhydra Enterprise is the direct result of joint development projects between noted open source supporters BullSoft and France Telecom, and the current Enhydra developer community, all of which contributed significant source code and expertise to the project.
Lutris Enhydra Professional 3.0 includes the Open Source PostgreSQL and all-Java InstantDB databases for abbreviated configuration time and fast prototyping. Inclusion of and integration with Borland JBuilder Foundation 3.5 allows developers to work within their preferred environment.
Pricing for Lutris Enhydra Professional 3.0 is $499.00 and includes technical support.
"By joining The Open Group, Lutris Technologies will be able to participate and keep abreast of the latest security issues and ensure that the Enhydra Application Server platform continues to provide robust security support," said Paul Morgan, chief technology officer of Lutris Technologies.
Omnis Software confirmed some of the functionality to be available in the forthcoming release of its Rapid Application Development tool, Omnis Studio. A highlight is the incorporation of a powerful drag-and-drop WML (Wireless Markup Language) editor to simplify direct connectivity between server-based data and remotely located WAP (Wireless Application Protocol) phones. WML is based on XML and was developed for specifying content and user interfaces for devices such as phones and pagers.
WAP phones are driving a market which needs to supply and modify relevant information quickly and clearly. With the WAP generator and one of the many available WAP phone emulators, you can quickly build and test cards and decks that can interface with existing data sources, wherever they may be located.
Omnis Studio is a high-performance visual RAD tool that provides a component-based environment for building GUI interfaces within e-commerce, database and client/server applications. When used with the company's WebClient plug-in technology, Omnis Studio allows the development of client/server relationships over the Internet using popular web browsers, giving fast, secure, scalable solutions in a minimum of development time. Development and deployment of Omnis Studio applications can occur simultaneously in Linux, Windows, and Mac OS environments without changing the application code.
The company cites three reasons for this growth: the increasing adoption of the Linux operating system, a reduction in Studio's price, and increasing awareness of the power and speed of the development tool.
Spiderweb Software and Boutell.com proudly present Exile III: Ruined World, an epic fantasy role-playing game for Linux.
Exile III for Linux will be released Summer, 2000. You can find information and a large demo at http://www.spiderwebsoftware.com/exile3/linuxexile3.html.
http://www.spiderwebsoftware.com. Port By: http://www.boutell.com.
Loki has released a beta of the Linux SDK for use with Quake III Arena.
Loki also announced plans to bring Interplay's Descent 3 to Linux by July 2000. Descent 3 has a Rock 'n' Ride simulator that moves a gamer and their monitor up 55 degrees. Up to 16 gamers can play together at one time via the Internet.
Loki will bring CogniToy's MindRover: The Europa Project to Linux by early fall 2000. MindRover is a 3D game that enables players to create autonomous robotic vehicles and compete them in races, battles and sports.
As if that weren't enough, Loki also signed a deal with QLITech Linux Computers to bundle several games with QLITech's Advanced Multimedia Workstations. Titles include Civilization: Call to Power, Heavy Gear II and a Loki Games Demo CD with a full install of Eric's Ultimate Solitaire.
BELLEVUE, Wash., May 24, 2000 - GoAhead(R) Software, the leading provider of off-the-shelf service availability software for Internet infrastructure, today announced the release of GoAhead WebServer(TM) 2.1, the latest version of GoAhead's open source, royalty free, standards-based embedded Web server.
By 2002, there will be more than 42 million devices connected to the Internet (International Data Corporation). Embedding a Web server gives manufacturers access to their devices even after they are shipped. GoAhead WebServer is the only open source embedded Web server currently on the market. It provides a secure, flexible and free way to access remote devices and appliances via standard Internet protocols. GoAhead WebServer 2.1 now includes support for Secure Sockets Layer (SSL) and digest access authentication (DAA).
GoAhead WebServer 2.1's new features were made possible in part through the active developer community that has emerged in support of the product. More than 500 developers download GoAhead WebServer source code each month.
The source code is currently available for download from GoAhead Software's Web site at http://www.goahead.com/webserver/wsregister.htm.
iTools--a suite of tools to dramatically simplify Apache configuration & maintenance
Santa Barbara, CA, June 6, 2000. Tenon Intersystems' iTools extends and enhances Linux's networking performance, efficiency, ease-of-use, and functionality with a family of tools essential to serious, commercial content delivery and eCommerce. iTools is based on open-source implementations of Apache, DNS, FTP, and sendmail, created and maintained by software developers worldwide. Using Linux's open source Internet software as a point of departure, Tenon's iTools extends the underlying architecture with a point & click interface and a rich set of new features.
In addition to extensions and enhancements to Apache, DNS and FTP, iTools includes a WEBmail server, an SSL encryption engine to support eCommerce, a sophisticated search engine, and both FastCGI and mod_perl support to provide high-performance Perl and CGI execution. All of the tools are supported using a point & click, browser-based administration tool.
The price is $199. A free demo is at http://www.tenon.com/products/itools-linux
Chili!Soft has a new version of ASP and a new product, SpicePack, for developers working with Active Server Pages. ASP 3.5.2 runs on more distributions than the previous version, and has new database support and an improved installation routine for both experienced administrators and novices. SpicePack 1.0 offers additional ASP objects for sending mail (SMTP), receiving mail (POP3) and uploading files. Evaluation copies are at http://www.chilisoft.com/downloads/. Chili!Soft is a subsidiary of Cobalt Networks, Inc.
XPloy from Trustix AS is a GUI for Linux system administration. Manage your Linux servers graphically from a Linux or Windows workstation.
Proven Software, Inc. has created separate divisions for its single- and multi-user accounting software, citing differing market forces and user demands between the two. BestAcct is a "desktop" program for small organizations and individuals. Proven CHOICE Accounting is for value-added resellers and their clients with multi-user requirements. BestAcct sells for an astounding US$29.95, but includes much more than "checkbook programs" at that price. Proven Software has been developing business applications for over 15 years, exclusively for Linux for over 5 years.
Financial Accounting Systems, Inc. develops Linux accounting software for loan servicing, savings accounting, CD (certificate of deposit) accounting and safe deposit box accounting.
Hypercosm offers Linux users next-generation 3D authoring technology that had previously been available only to Windows users.
LinkScan 7.2 allows webmasters and quality-assurance engineers to build fast, accurate, scalable and automated test suites for web sites. Four versions are offered--from Workstation to Enterprise--at a price range of US$300-5000. (Electronic Software Publishing Corporation (ELSOP))
WebKing 2.0 is another website-testing program. WebKing takes traditional testing techniques such as white-box, black-box, and regression testing and applies them to Web development. In addition, ParaSoft is introducing a new testing technique called Web-box testing, which is a form of unit testing essential to Web development. http://www.parasoft.com or http://www.thewebking.com. Pricing is US$3495, or $2995 before July 31, 2000.
Metro Link, Inc. has released Open Motif with Metro Link enhancements and bug fixes, available for free FTP download at http://www.metrolink.com/openmotif/ or http://www.opengroup.org/openmotif/.
HELIOS EtherShare 2.6 offers a high-performance Unix implementation of AppleTalk networking. PCShare 3.0 is a high-performance Windows-compatible file and print server for Unix servers, with support for Windows 2000 clients.
Active Concepts (San Francisco, CA) will use VA Linux servers and Linux-Mandrake released and beta versions as a testbed for its flagship Funnel Web product, to keep it synchronized with technological advances in all the major Linux distributions. Funnel Web costs US$1199-3499.
Xi Graphics Inc. has released v1.1, the first major update to its new 3D graphics product line for Linux. The update, which is free to registered owners of the product, provides significant performance improvements and increased stability.
Progressive Development Systems, Inc. markets Level5 Pro, a total database software solution for wholesale distributors. Their newest product is WAM, a Web Access Module. The company is committed to offering its products on the Linux platform.
I'd like to especially thank Michael for stepping up to the wizard's hat early. Also we have a few answerbubbles this month which I think people will enjoy.
I'd like to discourage people from sending us questions in both plaintext and HTML versions. The HTML produced by mailers is just not of publishable quality, and the MIME attachment just makes our mail clunkier. Thanks for the thought, but plain text will be fine.
That said, I had an interesting time this month. The USENIX Annual Technical Conference was in San Diego, and a number of core Linux, *BSD and other open source developers were there. There's been crossover for ages, but with the Freenix track it's a little more obvious. Last year the Freenix proceedings were half as thick as the regular proceedings; this year they're just as thick. I suspect it's a really good thing that the Atlanta Linux Showcase (ALS) is partnered with USENIX now, because I think there is a lot more research to publish where those came from... I'll be there, of course.
Now, on to the editorial. I thought of this mid-month. I told my friends to look for it. I didn't really expect it would become a slashdot flamewar and so on but I still think it needs to be said. So I'll add a disclaimer which many of you will consider obvious, but others may need to have clear:
Looking outside the tiny little box in front of me, and indeed outside the open source world, we have one of the most hotly debated arguments about what is, and what isn't okay to use. We should follow its model, as it appears to have stood the test of time while most of its strongest adherents have not starved to death.
I am, of course, referring to kosher food.
Many of you may think this cannot possibly relate to computing, except insofar as the usual meal preceding a product release is nightly orders of pizza until it's a go. Or Chinese food, or whatever else the managers and engineers share a yen for. Last I recall, vegetarian pizza is kosher (though not pareve) and the usual Meat Lover's Special definitely is not. Neither is oyster sauce.
We can think of food in this context because it covers mixing code, as well as dynamic linking. I can take a slice of good Jewish rye, and dynamically link in some corned beef. Yum, still kosher. If I also dynamically link on some Swiss cheese, um, no. Still removable? Ask your rabbi whether the touched meat remains treyf. Most customers wouldn't be able to tell if this had been done in the kitchen. If I make that a hot sandwich, I've statically linked it; guess I should get a new one.
A big fuss over the GPL seems to be about the sentence fragment which, paraphrased, is something like "the whole of derivative works shall be under the GPL". One of its more common allergies is what to do about things which require linkage against something that is under some other license. (I refuse to label other licenses more or less restrictive, without a context to apply.)
But the fact is, that the rules of kosher food are not about preventing Jewish kids from enjoying cheeseburgers with their schoolfriends. They're about health. It just isn't safe to eat crustaceans from the wrong part of the sea, pork that may be undercooked, and a number of other things. Conversely our concern over licenses is about our health. If a company, or a coalition of friends, that is responsible for maintaining a product stops answering their email forever, what am I as a user of their product able to do with it? Even if I don't personally read its source code, under the DFSG compliant licenses, I can always hire some programmer to solve my problems with it and make derivative works. This truth is made more useful by the fact that it was also legal for me to glom a copy of the source code and keep it around.
It's perfectly normal for me to buy products at the store, in neat packaging even, which are not directly consumer-level food. At least, I know very few people who buy a bag of flour in order to scoop handfuls of it into their mouth and call it lunch. It's normally statically linked against some dairy products or water, leavened with yeast, and made into sandwich fixin's or (with more linkages) sweets. Ooo, I almost forgot. Leavening it means it's not kosher for passover. Do some people eat in this "more kosher" fashion all the time? I suspect some do.
There are other products, like cereal, which we normally expect to be dynamically linked (milk please!) but which are sometimes prepared in other ways (eg. rice krispie bars) and yes, I know kids who eat cereal straight out of the box.
So this is what I was thinking when the debate was re-awakened: Is the K project kosher? I think so. Others don't have to think so. Right now, the "Harmony" project (http://harmony.ruhr.de/ ? I can't read German, and couldn't find code) which would claim to also meet Qt's API, isn't enough to make even little bitty sandwiches with. But one of the Harmony crew feels that the QPL is kosher enough for him (read his letter to LWN at http://lwn.net/1998/1203/a/jd-harmony.html) so it may be a bit of work. I think I'll go get me a nice, thick, not-kosher-for-passover, corned beef sandwich on rye.
Most non-Linux questions don't get published here, or answered at all. Nonetheless, best of luck in your quest for knowledge...
From megabad on Tue, 06 Jun 2000
hello, please may i have 5 mins of your time.
Since I have installed my SiS 6326 card, when I start my computer it says missing 5591agp.vxd and then missing 5600agp.vxd. please help cos i have not got a clue
thanks paul
Those sound like MS Windows problems. Your system has been infected with the infamous and widespread "Redmond" virus and you should wipe out the whole system and install Linux.
Alternatively, you should contact a vendor that supports Microsoft products. I don't.
(BTW: VXD is the extension used for "virtual extension drivers" or something along those lines. They were introduced in MS Win 3.1 or so, IIRC. The AGP stuff is some sort of "advanced graphics port" --- a type of slot in newer PC motherboards. I presume that this error is telling you to install a new video driver).
From iwomack on Wed, 07 Jun 2000
Dear Answer guy!
Why is it that, whenever I send a word document file to one of my many contacts, the file is received on his end as a winmail.dat file? I am using Microsoft outlook and I am sending a word Document. Please Help?
IAN WOMACK
First, let me congratulate you on NOT sending me a 'winmail.dat' file.
winmail.dat is a file attachment that Microsoft's Outlook mail client attaches to most mail so that it can contain any of MS' extensions to mark up the text of your mail. So the basic text of your message, with no highlighting or special formatting is supposed to comprise the main body of your e-mail, while 'winmail.dat' is supposed to contain all of the formatting and other fluff that makes it look the same to another MS Windows user as it did to you.
I personally find winmail.dat files to be mildly annoying.
However, I find that mail sent to me in proprietary formats (such as MS Word .doc) to be highly irritating. Basically people have to pay me to read those. If you're not a customer or my boss and you send me something as an attachment in any format that I can't readily read --- your mail goes into the bit bucket faster than you can say "delete."
Of course it would be unfair to single out Microsoft in this regard. I don't find Netscape's "vcard" attachments any less obnoxious than "winmail.dat", and I find Netscape's previously default behavior of appending HTML-formatted copies of the body text to all outgoing e-mail to be almost as bad as appending .doc or other binary formats. (At least I can read between the tags if I care to).
Of course I'm a curmudgeon in this regard. I think that plain old unadulterated text is a fine tool for communications and I don't like to see a lot of formatting fluff to confuse the issue. I still use Lynx for most of my web browsing, and I still work from text mode consoles more often than not. (Although I've made it a point to stay in X most of the time on my new laptop, mars, and on my latest home desktop client, canopus. I still use a big xterm running a copy of 'screen' for almost all of my work).
Anyway, if you want to learn how to send mail that is likely to be most effective and least irritating to the broadest range of correspondents, then eschew all of the fancy formatting, and learn to write!
As for configuring your mail client to behave itself, I don't know. I don't use any MS products and certainly wouldn't use a GUI mail client. Perhaps Microsoft offers some sort of support with their products. Last I heard they run a 900 (pay-by-the-minute) telephone service. Perhaps that could answer your questions more thoroughly for a few quid.
(On the other hand, you could switch to Linux, which would make Outlook basically unavailable to you. Then you'd also be protected from the next few outbreaks of the "Melissa" and "Love Bug" viruses among others. Indeed you'd be immune from that whole class of plagues).
From Sheree_Shannon on Thu, 08 Jun 2000
Hi, I purchased a 1998 Chrysler Town & Country van recently that has a cd player. When I try to put a cd in, it immediately comes back out. Someone told me another cd must be stuck in there. How can I find out, and how do I get it out?
Thanks. Sheree'
I know this is going to sound shocking, but did you look in the owner's manual or contact a factory dealership?
- I suppose you could try the "Ask Chrysler" CGI program at:
- http://ask.chrysler.com
Or take it down to your favorite neighborhood car stereo shop and have them take a look at it. Of course all of those venues will try to sell you a new CD player, or a new car, or something.
I won't try to sell you anything. I just answer Linux questions for this online magazine called the "Linux Gazette." They picked "The answer guy" as a name for my column which is presumably how you got tricked into mailing this question to me.
O.K. I lied. I'll try to sell you something. You could try replacing that CD player with an automotive MP3 player. That would mean that you'd "rip" your CDs on your home computer, download them into a little computer in your car and use that to play them.
Here's a few links on that idea:
- Open Directory - Computers: Software: Operating Systems: Linux: Music
- http://www.dmoz.org/Computers/Software/Operating_Systems/Linux/Music
(BTW: the Open Directory Project, dmoz, is very cool. Think, community driven Yahoo!)
- Knight Rider MP3 Player
- http://knightrider.linuxave.net
- Slashdot:IBM and Mp3
- http://slashdot.org/articles/99/04/04/1530213.shtml
- Slashdot:Doing the Quickee Boogie
- http://slashdot.org/articles/99/01/13/226240.shtml
From texastootles on Mon, 12 Jun 2000
WebTV Support Line? NOT!
For 3 days I get this update when i try to get on webtv I have let them but they just never seem to finish what shall I do?
~tootles~
This is not the WebTV tech support line.
Perhaps they offer some sort of customer service with the product they sold you and the "service" to which you are subscribed.
[Rolls eyes heavenward! Sighs!]
One or more questions may be posted here, as well as any that need translation before the Gang can answer. Got any answers for these? Send them to tag@ssc.com
From David Lee on Thu, 22 Jun 2000
Hello,
I would like to have a question about stripping binary and library files.
Actually I am building a Linux boot/root floppy disk. I need to fit in some huge shared library files, especially libc-2.1.1.so.6. (I am using Linux Mandrake 6.1). Reducing file size is necessary.
I think either objcopy or strip can be used. However, the Linux Bootdisk HOWTO says that only debug symbols should be removed (--strip-debug). What would happen if everything is removed (--strip-all)? I have tried and the resulting boot/root disk seems to be OK. However, something must be wrong ...
Thanks for your help.
David.
From hilsen kasper on Thu, 22 Jun 2000
Hi, I think you have a good site with a lot of good, useful advice. But that's not why I'm writing; I have a problem you might be able to help me with. I have a 450 MHz P3 CPU that I would like to overclock, and I have an Asus motherboard, model p2b/f1440bx agp atx. I don't know whether I need extra cooling when it's only going to 500 MHz, since my motherboard can't take any more. The other thing is that I don't know how to do it, so I hope you will help me. I hope you will help me with my questions.
regards, Kasper
From D. Scott Lowrie on Mon, 19 Jun 2000
hi,
I've been able to use procmail when I send the mail to &myuserid+keyword.
Where I assign a variable (say PLUSARG=$1).
I can use the variable PLUSARG as the basis for some procmail recipes. So what's the question??? Well, it seems that if I use an alias set up for me with the "+keyword" syntax, procmail doesn't pass the "+keyword" in as the $1 parameter. E.g. $1 is found when I use myid+keyword but not with aliasId+keyword. thanks,
Scott Lowrie
I think that your mailer (sendmail) is actually the culprit. I think that the MTA is stripping out everything from the + to the @ (since that's how it figures out which mailbox is the intended recipient under the old "plus addressing" convention).
It seems that the ^TO macro in procmail does expose the header address (which retains the + extension), while it's the envelope address that is being passed to your PLUSARG.
Try using the ^TO pattern.
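For example, a recipe along these lines might do it. This is only a sketch; the alias and folder names here are placeholders, not taken from your actual setup:

```
:0:
* ^TOaliasId\+keyword
keyword-archive
```

The ^TO macro expands into a pattern matching the common destination headers (To:, Cc:, and friends), so the address it tests is the one from the headers, complete with the +keyword extension.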
From D. Scott Lowrie on Tue, 20 Jun 2000
thanks for the suggestion ... I really appreciate your help.
Scott FYI I'll let you know what we are doing with the "+argument" -
its not all that clever but in our case makes for an easy way to add things to our online documentation. Perhaps some other user may find this useful for simple/quick documentation.
The simple idea is that we have a userid called "caddoc", and if you have an email that others may find useful (perhaps like the tip you just gave me!) then we just send/forward/cc/bcc it to "caddoc+mailtips". The procmail script then processes the caddoc to signify you want to document something; the +mailtips to signify you want it in our "mailtips" archive area; and then uses "mhonarc" to add this email in HTML format to the data area defined in the "mhonarc" call. So with just a simple addition of "caddoc+mailtips" we get the info tucked away for future reference. The alternative of swiping the data, putting it in a file in the mailtips area, and updating the index.html would also work, but the reality is "our natural laziness" makes it unlikely to happen.
Scott Lowrie
From Devil Man on Thu, 08 Jun 2000
Hello answer guy I was wondering and have been unable to find any info about a shell scripting utility or command that can be used to generate a random number such as if I wanted to create a shell script to generate a random number between 1-20 or so. It dose not have to be a all in one basically how do you generate random numbers and the command line?
Thanks randomly speaking
Well the easiest way, under bash is to simply use the predefined "magic" shell variable: $RANDOM. So the following might work for you:
RANDOM=$$$(date %+s)
function d20 () { d=$[ ( $RANDOM % 20 ) + 1 ] }
... The first line just seeds bash' random number generator using your current process ID (PID) and the current time/date expressed as the number of seconds since 1970 (the UNIX epoch). This should prevent RANDOM from generating the same predictable sequence every time you run it. (You can set bash' RANDOM to new seed values, but if you ever 'unset' it --- it will lose its special "magic" property for the life of that shell/process. This is true of a couple of bash' "magic" variables).
Note that this form of random seeding is common but not adequate for proper cryptography, or high stakes gambling. For that we probably wouldn't be using the shell, we certainly wouldn't be storing things in environment variables, and we'd probably want to read a bit of entropy out of the Linux /dev/urandom or /dev/random devices (depending on the relative importance of speed versus "quality of entropy" required).
The shell function, which I've named after gamers' conventional abbreviation for their favorite polyhedron (die), simply takes a $RANDOM value modulo 20 (the modulus is the remainder of a division, and thus gives us a number between 0 and 19) and then adds one to adjust the range from 0-19 up to 1-20.
This method (take a modulus of a number and add a base) is commonly used by programmers to get random values within a specific range. If you want the numbers to follow a specific curve you can use additional arithmetic operations and additional random values. For example to get a nice bell curve that reasonably approximates a natural population where lots of entities are "average", a few are "exceptional" or "bad" and a very few are "super" or "woeful" you can use a sum of several random numbers.
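As a sketch of that idea, here's a 3d6 written in the portable expr/cksum style discussed later in this answer. The function and variable names (roll_d6, seed, die, sum) are my own, and the multiplier/increment/modulus constants are the linear congruential ones used elsewhere in this column:

```shell
# Seed from things that differ between runs (PID and current date).
seed=`(echo $$; date) | cksum | cut -f1 -d" "`

roll_d6 () {
    # one linear congruential step, then scale the result to 1-6
    seed=`expr \( $seed \* 9301 + 4929 \) % 233280`
    die=`expr \( $seed % 6 \) + 1`
}

# Sum three "dice" to get the bell-shaped 3-18 distribution.
sum=0
for i in 1 2 3; do
    roll_d6
    sum=`expr $sum + $die`
done
echo $sum
```

Note that roll_d6 is invoked plainly (not in backticks) so that its update to $seed happens in the current shell rather than being lost in a subshell.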
The classic "Dungeons & Dragons (TM)" 3d6 gives such a curve which is why they don't simply use a single d20 for each ability score. It's also why simple percentile rolling on a pair of d10s or d20s doesn't give the "right" distribution of results.
You can get some really wacky curves if you take one random value and divide it by another (rounding down to the nearest integer). For example a d6/d4 gives a number from 0 to 6, with only a 1 in 24 chance of getting a 6, a 25% chance of getting nothing, a 37.5% chance of getting a 1, etc. But I digress.
Of course my example here depends on bash. So it's not very portable.
Here's a method that's somewhat more portable:
r=`fortune | cksum | cut -f1 -d" "`
d=`expr \( $r % 20 \) + 1`
Those are shell backtick (command substitution) operators. That's an older syntax which is supported by very old shells (and is still supported by new ones). I use that on the command line sometimes, but I prefer to use the newer syntax $( ... ) in scripts and when explaining shell programming. It's easier to read and it's easier to write clearly on a whiteboard. (Of course both forms mean the same thing: execute the enclosed command(s), capturing their output, and paste that output into the parent expression as a replacement for the whole "backtick" expression).
The 'fortune' command is included with most versions of UNIX and is commonly installed. It's just a little program that randomly chooses a "fortune cookie" --- a random quotation or aphorism --- and prints it. Lots of people see those every time they log in, and some of the X screensavers (like Mr Nose) use them. In this case we get a random phrase and feed it to cksum (the BSD/SysV checksum program). The checksum of a random string should be random. (I don't have a rigorous mathematical proof of that handy --- but I'm pretty sure it's true; though it may not give a very even distribution). (That's another advantage of the $(...) form: it's nestable; you can have $( foo $( bar ... ) ) without ambiguity or error).
So I use another line and the old 'expr' command to scale $r to the desired range. I have to use two lines in this case, since the old "backtick" form cannot be "nested" (or at least the kinds of quoting tricks that might allow one to nest such a beast would probably not be very portable and would certainly be less readable).
Note that the 'expr' command is fairly picky --- so we must separate our operands and operators with spaces so that it sees each as a separate command line argument. Also note that I must quote/escape the parentheses in my arguments to 'expr' since I need for 'expr' to see them, so I have to prevent the shell (specifically the subshell that's executing my backtick command) from seeing those parentheses as a "subshell" operator. You could also wrap each of those parens. in quotes, single (hard) or double (soft). However you should NOT try to just wrap the whole expression in single or double quotes, because then 'expr' will see it all as one big (string) argument rather than as a sequence of numbers and operators. Sorry that's so complicated. That's how 'expr' works. In general it's much easier to use a more recent ksh, bash, or zsh which supports the internal 'let' command as well as $(( .... )) and/or $[ ... ] syntax for arithmetic operations.
Obviously if you want your script to be very portable, and you can't guarantee that your users will have a 'fortune' command installed, or that they'll have a recent version of a decent shell, then you'll have to work out some other way to get a random number.
As long as you have expr, cksum, and ps (and/or w and/or who), date (and/or the time command), cut (or awk) it should be possible to cook up small random numbers suitable for dice games, etc.
The trick is to run some of those commands in a subshell, piping their combined output into cksum, and cut out the checksum value. Any commands that are likely to give different, even slightly different, information when run from one second to the next are suitable as input to your checksum. Thus one new process, or one that dies or changes state, gives different ps output. Every second the idle time reported by the 'w' (who) command will be updated. Of course the 'date' command will be different every second as well.
Of course once you have a seed value (based on something non-deterministic, or something that is usually going to be different each time your program runs) then you can use your own arithmetic operations to perturb that seed value.
Here's a link to a discussion of "simple pseudo-random number generation":
http://www.sct.gu.edu.au/~anthony/info/C/RandomNumbers
The examples can be adapted to sh pretty easily:
Set initial value:
seed=`( echo $$ ; time ps ; w ; date ) | cksum | cut -f1 -d" " `
Use it:
echo $seed
seed=` expr \( $seed \* 9301 + 4929 \) % 233280 `
... note I have to escape my "*" 'expr' operator to prevent it from being expanded (into a list of files) due to shell globbing.
Also note that this must be run in the current shell context --- putting the seed=... line in a shell script wouldn't work because the shell script runs in its own shell, updates its own value of the seed, and then exits. That would leave our copy of the seed unchanged.
So, if this calculation (the linear congruential method) is to be stored in a shell script it must be invoked with the shell's "dot operator" or the 'source' built-in command. That will execute it within the context of the current shell, allowing the lines therein to modify the values of your current shell's variables.
I came across another nice article on the "linear congruential" calculation of pseudo-random numbers at:
http://www.acm.inf.ethz.ch/ProblemSetArchive/B_US_NorthCen/1996/prob_f.html
This apparently was in the context of a programming assignment, challenge or contest of some sort.
It should be noted that the values of your L, I, and M (the numbers you multiply by, increment by, and take the modulus with at each iteration) can't be arbitrarily chosen. There are some values for these that give "good" pseudo-randomness (an even distribution of return values across the spectrum of available numbers) while others will give very bad numbers.
Frankly I think all that stuff is too complicated. So I'm glad I use Linux, where I can just use:
dd if=/dev/urandom count=1 2> /dev/null | cksum | cut -f1 -d" "
... to get all the randomness I want.
So, I hope that's more than you wanted to know about generating pseudo-random numbers using the shell.
From Devil Man on Mon, 12 Jun 2000
Just a Note see below.
And thanks for all the wonderful info and the quick response...
-- The Linux Gazette Answer Guy <answerguy@ssc.com> wrote:
Getting Random Values in sh
>Hello answer guy I was wondering and have been unable to find
>any info about a shell scripting utility or command that can be
>used to generate a random number such as if I wanted to create a
>shell script to generate a random number between 1-20 or so. It
>dose not have to be a all in one basically how do you generate
>random numbers and the command line?
>Thanks randomly speaking
Well the easiest way, under bash is to simply use the predefined "magic" shell variable: $RANDOM. So the following might work for you:
RANDOM=$$$(date %+s)
shouldn't the date command be (date +%s)
Yep. That was a typo.
Answered By dps on Thu, 08 Jun 2000
There are valid reasons other than the evil ones you give for wanting to limit the exported symbols --- a prime example is a big job split into multiple source files, which export symbols used to implement the feature. Normally one would not want to export these symbols after linking the library.
If anyone wants to use undocumented functions for advantage over the competition, then removing them also stops them from linking against those functions. A little reading of the binutils man page will reveal, for example, the -L option in objcopy:
-L symbolname, --localize-symbol=symbolname Make symbol symbolname local to the file, so that it is not visible externally. This option may be given more than once.
which seems to fit the bill. (Several symbols in the resolver code became local in the move from glibc 2.0 to glibc 2.1, breaking various programs that used the undocumented behaviour of those symbols.)
The main difference is that M$ DLLs seem to require you to explicitly list what is externally visible, unlike most Unices and their shared libraries.
I didn't think I characterized it as "evil." I just didn't think it would be very useful.
However, you've shown me a new trick. I hope my earlier correspondent checks back if he or she still needs this tidbit.
Answered By Steven G. Johnson on Tue, 30 May 2000
Hi, noticed your answer regarding "public interfaces" in shared libraries in the latest Linux Gazette, and I had a couple of comments. (I am a programmer, and have written several libraries and shared libraries under Linux.)
There are at least two good reasons to hide functions from public interfaces:
- If a function is internal to the library, it may well disappear or change incompatibly without warning in future versions, so you don't want to have to worry about people using it.
Any library will almost certainly contain a large number of such internal functions, and the code would be utterly unmaintainable if you couldn't change them between releases because people depended on them.
Of course, it is usually sufficient to simply not document those functions or declare them in your header files, so that programmers who find out about them know that they use them at their own risk. (Some programmers are foolish enough to do so, even though it is almost never a good idea; e.g. there was a well-known case where StarOffice depended upon internal glibc functions and therefore broke when glibc was upgraded.)
- If you don't want to pollute the namespace.
If I have an internal function in my library called something generic, like print_error, I run the risk of accidentally conflicting with a function of the same name in a calling program, with unpredictable results. One way around this is to prefix the function with the name of my library, calling it e.g. foo_print_error if my library is libfoo. But this can be awkward to do for every little internal function you write, and it is often preferable to simply hide them from the linker.
There is a solution, however, provided by ANSI C: simply declare your functions with the "static" keyword, and they will only be visible/callable within the file they are defined in. This isn't perfect, I suppose, because they also aren't visible to other files in the same library. However, it covers the case where foo_ prefixes are most annoying: little utility functions that are only called within one file.
Cordially,
Steven G. Johnson
Answered By Dave Cotton on Wed, 31 May 2000
This is just a long shot. I have only loaded Corel once or twice to see what it looked like. His name does not tell me, is he using a US keyboard, or a non US one? I have found that before the system is loaded on some distros you are in US keyboard layout, afterwards it runs your native keyboard. I got caught by this, I had numbers in the password, in the US they're unshifted in FR they're shifted!
Keep up the good work
Dave Cotton
France
Answered By Bill Rausch on Thu, 1 Jun 2000
All? of the popular free sh-like shells (pdksh, bash, etc.) have the bug you mentioned in http://www.linuxgazette.com/issue54/tag/11.html. It is documented somewhere; I've seen a reference a couple of times (pdksh man page I think, maybe other places as well).
I got bit porting some scripts from HP-UX to Linux. Took a while to figure out what was busted. The particular construct I was using was piping a command through a "while read loop".
The original ksh is now available from ATT but with some kind of goofy license. I don't know if it will catch on or not.
Bill
[ Essentially, the goofiest thing about it is making sure everyone is clear
that further recipients have to actively agree to the license; active
agreement is why they don't have a normal FTP site to get these toys from.
-- Heather. ]
Answered By David Uhring on Thu, 22 Jun 2000
Using lilo's mapping facility does not attempt to fool Windows; it fools the BIOS. Linux Mandrake's installer generates a lilo.conf file similar to this one, actually in use on my system:
boot = /dev/hda
map = /boot/map
install = /boot/boot.b
vga = normal
default = linux
read-only
linear
prompt
timeout = 150
message = /boot/message

image = /boot/vmlinuz
    label = linux
    root = /dev/hda14

image = /boot/vmlinuz.suse
    label = suse
    root = /dev/hda14

other = /dev/hdb1
    label = dos
    table = /dev/hdb
    map-drive = 0x80
       to = 0x81
    map-drive = 0x81
       to = 0x80

other = /dev/hda1
    label = obsd
    table = /dev/hda

other = /dev/hda2
    label = sun
    table = /dev/hda

other = /dev/hda3
    label = fbsd
    table = /dev/hda
The Win98SE installation, BTW, is remarkably stable. I've been using it for about four months with only two BSOD's. Course, with four flavors of UNIX available, I really don't use Windows all that much.
Dave
From ng chin kar on Sat, 17 Jun 2000
Hi answerguy,
I'm an novice to linux and had just installed Linux redhat 6.0 to my system and like to configure to multi-boot to my WINNT4.0, WIN98 and DOS. How to counter act this problem. Pls Advice. Thanks
Regards LINUX NOVICE
From Michael Williams on Sat, 17 Jun 2000
It's difficult to answer this since you do not give enough details about your problem. Answer these questions, and I can help you out:
1. Do you have LILO installed?
2. What partitions and drives are all the OS's on?
Mike
From ng chin kar on Tue, 20 Jun 2000
Answered by Michael Williams
Hi there,
Currently I'm able to have the multi-boot option. But I have another question to seek your advice on: I installed another Linux (Corel Linux) and the installation went smoothly, but at the login screen the screen starts to blink, and it starts when it tries to start the KDE display manager. Pls advise.
Regards and thanks
Right, here's a solution. As far as I can tell, all you need to do is edit your /etc/lilo.conf file to include the following text:
boot=/dev/hda
read-only
prompt
timeout=50
vga=ext

#Windows 98
other=/dev/hda1
    label=dos
    table=/dev/hda

#Linux
image=/boot/linux    #Put the name of your kernel here
    root=/dev/hda3
    label=linux

#Windows NT
other=/dev/hda2
    label=winnt
    table=/dev/hda
You can edit it by typing:
pico /etc/lilo.conf
Then type:
lilo
To install lilo to the MBR
Obviously, you can use any text editor. That should solve your problem, unless of course something else is wrong.
Mike
From pha17 on Wed, 07 Jun 2000
My linux computer freezes when i try to boot, it just says " LI " then hangs.
I have got into the system with a boot disk and checked the lilo.conf file and run lilo which returns " Added Linux * " but lilo still will not boot
Can you tell me whats wrong ?
In the documentation for LILO there is a description of this problem. The LILO boot loader prints each of the characters "L ... I ... L ... O" (LILO) at a different point in the process of reading/parsing the partition table, loading the secondary boot loader code, and locating and loading its maps (which it uses to locate and load kernels and, optionally, an initial RAM disk --- initrd --- image).
When the system stops at LI then that tells you that the process failed before it could reach the part of that sequence where it would have printed the second "L".
Usually this means that you have a mismatch in the way that LILO and the BIOS are trying to access specific parts of a drive. One may be using CHS (cylinder, head, sector) co-ordinates while the other might be expecting LBA (linear block address) offsets. So, try adding the "linear" directive to the global section of your /etc/lilo.conf (and re-running the /sbin/lilo command to build and install a new boot loader from that conf file, of course).
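For instance, a minimal /etc/lilo.conf with the "linear" directive in its global (top) section might look like this; the device and file names here are only illustrative, not taken from the original question:

```
boot = /dev/hda
linear            # use linear sector addresses instead of CHS
map = /boot/map
install = /boot/boot.b
prompt
timeout = 50

image = /boot/vmlinuz
    label = linux
    root = /dev/hda1
    read-only
```

Remember that editing the file does nothing by itself; you must re-run /sbin/lilo to rebuild and install the boot loader.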
Alternatively, try changing your PC's CMOS Setup options. Look for an option like "LBA" or "UDMA" mode and disable it. Note that this may not work with newer large capacity drives.
Search the back issues of LG on the term "LILO" for many other discussions of this sort of issue and explanations about what LBA and CHS mean, and some commentary on the historical reasons why IDE has evolved through all these EIDE, LBA, UDMA iterations.
Also note that it's still a good idea to make a small (16 - 32 Mb) "boot" partition at or near the beginning of any hard drive on which you install Linux. That should be entirely below the 1024 cylinder line. Newer versions of LILO can work around that infamous limit in most cases --- but it's still a good idea. Most people mount this partition on /boot. It is the best place to put your kernels, initrd images, and their System.map files. (If you have MS-DOS or some other Microsoft OS installed in a large partition at the beginning of a drive, such that you can't put a small partition below cylinder 1024, consider using LOADLIN.EXE instead of LILO).
It may also be a good time to look at GRUB (the GNU grand unified bootloader). I haven't played with this yet; but I've heard that some people are very happy with it.
You can find out more about GRUB at its home page on the Free Software Foundation's (FSF) web pages:
http://www.gnu.org/software/grub/grub.en.html
From TRANS on Tue, 06 Jun 2000
I'm having trouble getting into telnet from an off campus computer. I've
tried to access my shakespeare account, lear, using the email express tab on the IUB homepage, but everytime I try to connect a little dialog box pops up and says connection failed. I know that my modem is working, so I don't know what the problem is. I would appreciate any help you could offer. Thanks,
Nellie Khalil
Unfortunately your message doesn't contain enough information to troubleshoot the problem.
It sounds like you're trying to access an account under the name of 'lear' on a host named 'shakespeare' from some other system which is "off campus." I guess you're looking at the web pages at "IUB" (which I presume is some university or college), and you're clicking on some link that this web page provides (labelled "email express" or something like that).
So, you think your modem is working (presumably because you can use your browser to access some web page). It's not clear why you think telnet is involved in any of this. Is the link you are clicking of the form "telnet://shakespeare..."?
However, there are way too many variables here. Most likely the system you are trying to access is behind some sort of firewall or on some network that is not routable from where you're connected to the net. It is also likely that the link on this IUB page shouldn't be visible from the outside world. Of course it's also possible that you have a broken browser, don't have a telnet client installed, are looking at a broken link or that I've completely failed to guess at what your question really meant.
In any event it is VERY unlikely that this is a Linux question and it is VERY likely that you should contact the help desk at whatever campus you're talking about.
It also seems like you might want to take some extra classes in communications skills. It amazes me that anyone could write a message such as this with an apparent total ignorance of how much it assumes of the reader.
How many people out on the internet know what you mean by IUB? How many other meanings of IUB might collide with that? What campus? What "tab"? Where would I look to see that "tab" and how were you "using" it (clicking, I guessed)? What put up that little dialog box? (Your browser? MS Windows? Linux GNOME or KDE?) etc.
I probably should just delete messages like this. It's clear that I spend far more time and energy trying to understand them than the writers put into composing them.
However, there's a part of me that hopes that some of the people that read this rant will think about it. Maybe they'll re-read what they've written to some tech support guy out there and ask themselves: "Have I provided enough information that this correspondent could possibly answer my question?"
From Aung Win Thu on Tue, 06 Jun 2000
Dear Dennis
I have installed Red Hat Linux 6.0 on my Intel machine. I want to disable anonymous login to my FTP site. How can I do it? Thanks for your help.
Aung
Traditional FTP daemons used the existence of an account named "ftp" (an ftp:... entry in the /etc/passwd file) as a flag to enable anonymous FTP.
If you're using a plain old BSD port of ftpd you could just remove that user. There are various other FTP daemons such as WU-ftpd (originally from Washington University in St Louis, wustl.edu), NCFTPd (by Mike Gleason, author of the ncftp client) and ProFTPd (http://www.proftpd.net). Each of those supports some way of enabling anonymous FTP --- so all you have to do is reverse those steps to "disable" it.
I also recommend that you use TCP wrappers (edit your /etc/hosts.allow and /etc/hosts.deny files) to limit whence FTP connections are accepted. In other words, start with ALL: ALL in your /etc/hosts.deny and add the networks and IP address patterns (and/or domain name patterns) from which you will allow your users to access your site.
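As a sketch, that deny-by-default TCP wrappers setup might look like this (the 192.168.1. network and the domain name are placeholder assumptions --- substitute your own):

```
# /etc/hosts.deny -- refuse everything not explicitly allowed:
ALL: ALL

# /etc/hosts.allow -- then permit FTP only from your own networks:
in.ftpd: 192.168.1. .example.com
```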
Personally I'd like to see FTP fade away in favor of SSH or more secure protocols. (Although it may be more of a race between the secure protocols at the applications level and the transparently secure protocols in IPSec FreeS/WAN).
From michael.rees on Wed, 07 Jun 2000
Hi, Sorry to bother you but could you help me with the following??
I am running Red Hat Linux 6.1 and am encountering some problems: I can log in as root from the console but not from anywhere else.
I have to log in as webmaster on all other machines on the network.
From nowhere, including the console, can I su once logged in as webmaster.
any help would be appreciated
Regards, Michael
Your system is enforcing a very reasonable policy by preventing direct 'root' logins from over the network.
The best way to circumvent this policy is to use one of the implementations of SSH (the original SSH by Tatu Ylonen, now owned and commercially available from DataFellows Inc http://www.datafellows.com; or OpenSSH http://www.openssh.com --- which is ironically at a .com rather than a .org domain; or the GPL'd lsh at http://www.net.lut.ac.uk/psst).
Any of these should allow you to access your system through cryptographically secured authentication and session protocols that protect you from a variety of sniffing, spoofing, TCP hijacking and other vulnerabilities that are common using other forms of remote shell access (such as telnet, and the infamous rsh and rlogin packages).
If you really insist on eliminating these policies from your system you can edit files under /etc/pam.d that are used to configure the options and restrictions of the programs that are compiled against the PAM (pluggable authentication modules) model and libraries. Here's an example of one of them (/etc/pam.d/login which is used by the in.telnetd service):
#
# The PAM configuration file for the Shadow `login' service
#
# NOTE: If you use a session module (such as kerberos or NIS+)
# that retains persistent credentials (like key caches, etc), you
# need to enable the `CLOSE_SESSIONS' option in /etc/login.defs
# in order for login to stay around until after logout to call
# pam_close_session() and cleanup.
#

# Outputs an issue file prior to each login prompt (Replaces the
# ISSUE_FILE option from login.defs). Uncomment for use
# auth       required   pam_issue.so issue=/etc/issue

# Disallows root logins except on tty's listed in /etc/securetty
# (Replaces the `CONSOLE' setting from login.defs)
auth       requisite  pam_securetty.so

# Disallows other than root logins when /etc/nologin exists
# (Replaces the `NOLOGINS_FILE' option from login.defs)
auth       required   pam_nologin.so

# This module parses /etc/environment (the standard for setting
# environ vars) and also allows you to use an extended config
# file /etc/security/pam_env.conf.
# (Replaces the `ENVIRON_FILE' setting from login.defs)
auth       required   pam_env.so

# Standard Un*x authentication. The "nullok" line allows passwordless
# accounts.
auth       required   pam_unix.so nullok

# This allows certain extra groups to be granted to a user
# based on things like time of day, tty, service, and user.
# Please uncomment and edit /etc/security/group.conf if you
# wish to use this.
# (Replaces the `CONSOLE_GROUPS' option in login.defs)
# auth       optional   pam_group.so

# Uncomment and edit /etc/security/time.conf if you need to set
# time restrainst on logins.
# (Replaces the `PORTTIME_CHECKS_ENAB' option from login.defs
# as well as /etc/porttime)
# account    requisite  pam_time.so

# Uncomment and edit /etc/security/access.conf if you need to
# set access limits.
# (Replaces /etc/login.access file)
# account    required   pam_access.so

# Standard Un*x account and session
account    required   pam_unix.so
session    required   pam_unix.so

# Sets up user limits, please uncomment and read /etc/security/limits.conf
# to enable this functionality.
# (Replaces the use of /etc/limits in old login)
# session    required   pam_limits.so

# Prints the last login info upon succesful login
# (Replaces the `LASTLOG_ENAB' option from login.defs)
session    optional   pam_lastlog.so

# Prints the motd upon succesful login
# (Replaces the `MOTD_FILE' option in login.defs)
session    optional   pam_motd.so

# Prints the status of the user's mailbox upon succesful login
# (Replaces the `MAIL_CHECK_ENAB' option from login.defs). You
# can also enable a MAIL environment variable from here, but it
# is better handled by /etc/login.defs, since userdel also uses
# it to make sure that removing a user, also removes their mail
# spool file.
session    optional   pam_mail.so standard noenv

# The standard Unix authentication modules, used with NIS (man nsswitch) as
# well as normal /etc/passwd and /etc/shadow entries. For the login service,
# this is only used when the password expires and must be changed, so make
# sure this one and the one in /etc/pam.d/passwd are the same. The "nullok"
# option allows users to change an empty password, else empty passwords are
# treated as locked accounts.
#
# (Add `md5' after the module name to enable MD5 passwords the same way that
# `MD5_CRYPT_ENAB' would do under login.defs).
#
# The "obscure" option replaces the old `OBSCURE_CHECKS_ENAB' option in
# login.defs. Also the "min" and "max" options enforce the length of the
# new password.
password   required   pam_unix.so nullok obscure min=4 max=8

# Alternate strength checking for password. Note that this
# requires the libpam-cracklib package to be installed.
# You will need to comment out the password line above and
# uncomment the next two in order to use this.
# (Replaces the `OBSCURE_CHECKS_ENAB', `CRACKLIB_DICTPATH')
#
# password   required   pam_cracklib.so retry=3 minlen=6 difok=3
# password   required   pam_unix.so use_authtok nullok md5
This is from my Debian laptop (mars.starshine.org) and thus has far more comments (all those lines starting with "#" hash marks) than those that Red Hat installs. It's good that Debian comments these files so verbosely, since that's practically the only source of documentation for PAM files and modules.
In this case the entry that you really care about is the one for 'pam_securetty.so'. This module checks the file /etc/securetty, which is classically a list of those terminals on which your system will allow direct root logins.
You could comment out this line in /etc/pam.d/login to disable this check for those services which call the /bin/login command. You can look for similar lines in the various other /etc/pam.d files to see which other services are enforcing this policy.
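For example, disabling just that one check in /etc/pam.d/login would look like this (a sketch; not recommended on any networked machine):

```
# Disallows root logins except on tty's listed in /etc/securetty
# (disabled -- this now allows direct root logins on any terminal):
#auth       requisite  pam_securetty.so
```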
This leads us to the question of why your version of 'su' is not working. Red Hat's version of 'su' is probably also "PAMified" (almost certainly, in fact). So there should be a /etc/pam.d/su file that controls the list of policies that your copy of 'su' is checking. You should look through that to see why 'su' isn't allowing your 'webmaster' account to become 'root'.
It seems quite likely that your version of Red Hat contains a line something like:
# Uncomment this to force users to be a member of group root
# before they can use `su'. You can also add "group=foo"
# to the end of this line if you want to use a group other
# than the default "root".
# (Replaces the `SU_WHEEL_ONLY' option from login.defs)
auth       required   pam_wheel.so
Classically the 'su' commands on most versions of UNIX required that a user be in the "wheel" group in order to attain 'root'. The traditional GNU implementation did not enforce this restriction (since rms found it distasteful).
On my system this line was commented out (which is presumably the Debian default policy, since I never fussed with that file on my laptop). I've uncommented it here for this example.
Note that one of the features of PAM is that it allows you to specify any group using a command line option. It defaults to "wheel" because that is an historical convention. You can also use the pam_wheel.so module on any of the PAMified services --- so you could have programs like 'ftpd' or 'xdm' enforce a policy that restricted their use to members of arbitrary groups.
Finally note that most recent versions of SSH have PAM support enabled when they are compiled for Linux systems. Thus you may find, after you install any version of SSH, that you have an /etc/pam.d/ssh file. You may have to edit that to set some of your preferred SSH policies. There is also an sshd_config file (mine's in /etc/ssh/sshd_config) that will allow you to control other ssh options.
In general the process of using ssh works something like this:
- Install the sshd (daemon) package on your servers (the systems that you want to access)
- Install the ssh client package on your clients (the systems from which you'd like to initiate your connections).
- Generate Host keys on all of these systems (normally done for you by the installation).
.... you could stop at this point, and just start using the ssh and slogin commands to access your remote accounts using their passwords. However, for more effective and convenient use you'd also:
- Generate personal key pairs for your accounts.
- Copy/append the identity.pub (public) keys from each of your client accounts into the ~/.ssh/authorized_keys files on each of the servers.
This allows you to access those remote accounts without using your passwords on them. (Actually sshd can be configured to require the passwords AND/OR the identity keys, but the default is to allow access without a password if the keys work).
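The key-distribution step above boils down to appending the client's public key onto the server's list. Here is a minimal local simulation of just that step (the /tmp paths and the key material are fabricated placeholders; on a real system you would copy identity.pub across the network first, and the file names follow the SSH1-era conventions discussed here):

```shell
# Stand-ins for the client and server home directories.
mkdir -p /tmp/client /tmp/server
# A fabricated placeholder for ~/.ssh/identity.pub -- not a real key.
echo "1024 35 123456789 user@client" > /tmp/client/identity.pub
# The actual step: append the public key to the server-side list.
cat /tmp/client/identity.pub >> /tmp/server/authorized_keys
```

After the real version of this, slogin to the server should stop asking for the account password (and start asking for your key's passphrase instead, if it has one).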
Another element you should be aware of is the "passphrases" and the ssh-agent. Basically it is normal to protect your private key with a passphrase. This is sort of like a password --- but it is used to decrypt or "unlock" your private key. Obviously there isn't much added convenience if you protect your private key with a passphrase so that you have to type that every time you use an ssh/slogin or scp (secure remote copy) command.
ssh-agent allows you to start a shell or other program, unlock your identity key (or keys), and have all of the ssh commands you run from any of the descendents of that shell or program automatically use any of those unlocked keys. (The advantage of this is that the agent automatically dies when you exit the shell or program that you started. That automatically "locks" the identity --- sort of.)
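A typical agent session looks something like this (a sketch; 'server' is a placeholder host name):

```
$ ssh-agent $SHELL    # start a subshell under the agent
$ ssh-add             # prompts once for the passphrase, unlocks the key
$ slogin server       # uses the unlocked key; no further prompting
$ exit                # the agent dies with the shell
```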
There are a lot of other aspects to ssh. It can be used to create tunnels, through which one can run all sorts of traffic. People have created PPP/TCP/IP tunnels that run through ssh tunnels to support custom VPNs (virtual private networks). When run under X, ssh automatically performs "X11 forwarding" through one of these tunnels. This is particularly handy for running X clients on remote systems beyond a NAT (IP masquerading) router or through a proxying firewall.
In other words ssh is a very useful package quite apart from its support for cryptographic authentication and encryption.
In fairness I should point out that there are a number of alternatives to ssh. Kerberos is a complex and mature suite of protocols for performing authentication and encryption. STEL is a simple daemon/client package which functions just like telnetd/telnet --- but with support for encrypted sessions. And there are SSL-enabled versions of the telnet and ftp daemons and clients.
Another issue where I talked a bit about crypto software available for Linux:
http://www.linuxgazette.com/issue35/tag/crypto.html
Another issue where I answer questions about remote root logins:
http://www.linuxgazette.com/issue35/tag/remoteroot.html
From Nathan F. on Thu, 08 Jun 2000
Hello! Excuse the beginner question, but I was wondering how in the heck to install and run DEVFS in my RedHat 6.2 linux OS?
Short Answer: get 2.4.0test* and use that!
Before I say anything else I should point out that I haven't been using Red Hat for the last few revisions. So I don't know the details of their 6.2 release.
That said I can answer the general question about devfs.
I never saw the option under menuconfig, and whenever I try to do something like "mount -t devfs none /devfs" it says that the kernel doesn't support it.
[For our readers who may not have heard of it: devfs is a "virtual filesystem" like /proc, but for dynamically representing devices instead of processes. /proc and devfs are sort of like RAM disks: they exist in memory rather than on a physical disk or partition. Each represents a way for the kernel to present its internal state to userspace using a common directory/file hierarchical abstraction. Richard Gooch has been working on devfs for a couple of years, and Linus recently accepted it into the mainstream developmental kernels]
I suspect that 6.2 is shipping with a 2.2.15 or so kernel. devfs was not included in mainstream kernels prior to 2.3.46 or so. So, you'd have to either use a development/test kernel (2.4.0test1-ac10 is what I'm using at the moment), or download Richard Gooch's patches for the earlier kernel versions.
[
RedHat shipped with 2.2.14. A 2.2.16 kit is posted in their updates
area, including i586 and i686 specific flavors for uniprocessor or smp.
You definitely want to find a mirror site to do the download from, though.
-- Heather. ]
You can learn all about Richard's work at his web site:
http://www.atnf.csiro.au/~rgooch/linux/kernel-patches.html
The patch includes changes to the make files so that the new options will appear in 'make menuconfig' and its ilk.
last but not least,
You're probably asking for:
"devfs=nomount"
The patch, and some of the 2.3.46+ series of kernels had this really irritating default. They would mount the filesystem over /dev as the kernel booted, thus over-riding your normal device list. This would make the system essentially unusable if you had not made all of the correct adjustments to all of your rc scripts, /etc/fstab and other files. (There was a warning about this in the configuration help in 'menuconfig' and in other places).
In the latest kernels (2.4.0test1) this is NOT the default. You can enable devfs support when you build a kernel and then play with it at your leisure.
For example I have devfs mounted on /devfs on mars (my laptop). Here's a (partial) list of the devices that I currently have recognized/active thereunder:
/devfs/.devfsd
/devfs/cpu
/devfs/cpu/mtrr
/devfs/misc
/devfs/misc/apm_bios
/devfs/misc/psaux
/devfs/mem
/devfs/kmem
/devfs/null
/devfs/port
/devfs/zero
/devfs/full
/devfs/random
/devfs/urandom
/devfs/tty
/devfs/console
/devfs/vc
/devfs/vc/1
/devfs/vc/2
....
/devfs/vc/63
/devfs/vc/0
/devfs/ptmx
/devfs/pty
/devfs/pty/m0
/devfs/pty/m1
....
/devfs/pty/m254
/devfs/pty/m255
/devfs/pty/s0
....
/devfs/pty/s7
/devfs/pts
/devfs/vcc
/devfs/vcc/0
/devfs/vcc/a
/devfs/vcc/1
/devfs/vcc/a1
....
/devfs/vcc/7
/devfs/vcc/a7
/devfs/rd
/devfs/rd/0
....
/devfs/rd/15
/devfs/ide
/devfs/ide/host0
/devfs/ide/host0/bus0
/devfs/ide/host0/bus0/target0
/devfs/ide/host0/bus0/target0/lun0
/devfs/ide/host0/bus0/target0/lun0/disc
/devfs/ide/host0/bus0/target0/lun0/part1
/devfs/ide/host0/bus0/target0/lun0/part2
/devfs/ide/host0/bus0/target0/lun0/part4
/devfs/ide/host0/bus0/target0/lun0/part5
/devfs/ide/host0/bus0/target0/lun0/part6
/devfs/ide/host0/bus0/target0/lun0/part7
/devfs/ide/host0/bus0/target0/lun0/part8
/devfs/ide/host0/bus1
/devfs/ide/host0/bus1/target0
/devfs/ide/host0/bus1/target0/lun0
/devfs/ide/host0/bus1/target0/lun0/cd
/devfs/cdroms
/devfs/cdroms/cdrom0
/devfs/discs
/devfs/discs/disc0
/devfs/scsi
/devfs/tts
/devfs/tts/0
/devfs/tts/1
/devfs/cua
/devfs/cua/0
/devfs/cua/1
/devfs/root
/devfs/floppy
/devfs/floppy/0u1440
/devfs/floppy/0u1680
/devfs/floppy/0u1722
/devfs/floppy/0u1743
/devfs/floppy/0u1760
/devfs/floppy/0u1920
/devfs/floppy/0u1840
/devfs/floppy/0u1600
/devfs/floppy/0u360
/devfs/floppy/0u720
/devfs/floppy/0u820
/devfs/floppy/0u830
/devfs/floppy/0u1040
/devfs/floppy/0u1120
/devfs/floppy/0u800
/devfs/floppy/0
/devfs/printers
/devfs/printers/0
I got that by using the command:
mount -t devfs /devfs /devfs
... since I'm just playing with devfs for now.
thx in adv!
-ion
That should help. It's probably best for you to upgrade to 2.4... and play with it. That's likely to have the most stable support for devfs.
Keep in mind that you should consider this feature and the whole 2.3 and 2.4.0test* kernel series to be experimental. You should get a few months (or at least weeks) of testing on them done before deploying them in a production server role.
Answered By Carl Davis on Thu, 08 Jun 2000
Thanks Jim, but I have solved the mystery...
The problem was that LILO does not like multiple "append" statements in /etc/lilo.conf. I fixed this by putting all the statements on the one append line, separated by commas and, of course, quotes: append="statement1, statement2, statement3". You may wish to add this snippet to the list of 2-cent tips.
Regards Carl Davis
Actually those should probably be separated with spaces and not commas. The commas may work in your case --- but it might cause problems in other cases.
Anyway --- this response will make it into the search engines, so the tip will be published that way.
From Future Systems Today on Thu, 08 Jun 2000
OK, here it is... I have a SuSE Linux 6.3 server that is using a cable modem and has a static IP address from my ISP, which is 63.92.157.x. The ISP is also being used as my DNS and gateway on the first NIC on that server. Every time I change the IP address of my second NIC to something other than 63.92.157.x I have no Internet connection. What should I do, or how do I troubleshoot this? Also, I tried to connect my other MS PCs through this box to the Internet, but since I am trying to go through the IP address that the ISP is giving me, I am getting an error message.
Is there a way to get Internet access through my server (which has the IP address that the ISP gave me) and make connections look like they come from it, rather than from an IP address that I gave a machine?
Thanks Joe
(Short answer: use IP masquerading or SOCKS).
What you're asking for is called "IP Masquerading" or "network address translation" (NAT). Technically IP masquerading is a particular form of network address/port translation.
I've written about this on a number of occasions, and a search on LG (http://www.linuxgazette.com/wgindex.html) shows over 120 matches on the phrase (ip;masq).
Here's a link to an LG article by Mark Nielsen and Andrew Byrd "Private Networks and Roadrunner using IP Masquerading LG #51" (http://www.linuxgazette.com/issue51/nielsen.html) that's probably just what you need to get started.
Also the LDP (Linux Documentation Project) has a reasonably up-to-date HOWTO on this topic:
http://www.linuxdoc.org/HOWTO/IP-Masquerade-HOWTO.html
... so you should read those and see if that explains it. (I can understand why one wouldn't know the magic keywords for this concept, and thus wouldn't have been able to find this).
If you get stuck on some of the assumptions that these articles and HOWTOs will make then you might want to read my article on "Routing and Subnetting 101" (http://www.linuxgazette.com/issue36/tag/a.html) which goes into related topics in some detail.
Keep in mind that you could also configure your Linux box as a "proxy" server (more formally it could be an "applications level proxy"). In this case your other machines never talk "directly" to the Internet, but the applications talk to a "proxy" application/server on your router (your Linux box). That proxy then performs the Internet requests on behalf of your applications and relays the results back to you.
There are many freely available proxy packages for Linux including NEC Socks 5, Dante, Delegate (all using the SOCKS standards), and specific proxies for specific applications (like squid which is a caching proxy). You could do a search on "proxy" or "applications;proxy" to read more about that.
The reason that IP masquerading has become somewhat more common and popular than applications proxying is that it is more transparent. When using applications proxying you have to configure each system and many individual applications to use the proxy. On the other hand proxying is technically a better, cleaner and probably more secure way to build a good network.
In either case you should be sure that you don't pick addresses "out of the blue." There are sets of addresses that are reserved for use behind proxying and IP masquerading firewalls and routers, and on other "disconnected" networks (those that will never interconnect to the Internet). Those are defined in RFC 1918. (RFCs are "request for comment" documents; proposals to the corpus of the Internet about how things should be done. They are basically drafts that become Internet standards).
RFC 1918 basically assures us that the IANA (Internet assigned numbers authority) and its delegates (like ARIN, the American Registry for Internet Numbers) will never issue the following address blocks to any organization on the Internet:
192.168.*.*
172.16.*.* through 172.31.*.*
10.*.*.*
So those are available for use on "disconnected" networks. (This also explains why most examples in textbooks and online technical discussions about IP use the 10.* and 192.168.* address ranges; most people don't remember the 172.16.*.* through 172.31.*.* Class B set).
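Putting the pieces together, a minimal masquerading setup on a 2.2 kernel looks something like this (a sketch to be run as root on the router; eth1 and the 192.168.1.0/24 block are assumptions --- substitute your own inside NIC and RFC 1918 range):

```
ifconfig eth1 192.168.1.1 netmask 255.255.255.0 up   # inside interface
echo 1 > /proc/sys/net/ipv4/ip_forward               # enable routing
ipchains -A forward -s 192.168.1.0/24 -j MASQ        # masquerade outbound
```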
From Jonathan Marshall on Thu, 08 Jun 2000
I'm having an issue in which I'm not sure why FTP isn't going through the Linux firewall to our ISP that handles all the files. What should I check and look for to make sure FTPing works through this Linux firewall?
I have no clue thanks
Jonathan Marshall
Short form: Probably blocking all incoming TCP/IP connections and failing to use "passive" FTP clients.
It probably means that your firewall is improperly configured.
I'm going to guess that you can do some web browsing, and/or that ping or some other form of TCP/IP traffic is working between your client(s) and the target host (the FTP server).
In other words I'm going to assume that you are asking specifically about why FTP is NOT working, because other stuff is working. If not, then the problem could be anywhere in the realm of addressing, routing, link layer and lower-level networking.
The most common problem where "everything is working except FTP" has to do with the way that FTP works. Normal FTP (now sometimes called "active" FTP) works something like this:
- Your client connects to the FTP server. It sends TCP packets to port 21 of the remote. That connection is used to control the FTP session. Your commands (like 'ls' and 'get') are sent over that connection.
- The server makes connections back to your client every time it wants to send a stream of data. Thus the 'ls' listing that you asked for comes back over a separate TCP channel from the control connection.
This technique plays hell with simplistic packet filtering and is why "firewalls" are more complicated than just packet filtering.
You mention that you are using a Linux "firewall/router." Notice that the term "firewall" is pretty vague. It implies that you have this system configured to enforce some sort of policies about what sorts of traffic it will route into and out of your network. However, that could be anything from some simple ipfwadm or ipchains rules through a gamut of different applications proxies, "stateful packet filtering" systems, and other software.
These days a lot of people refer to Linux systems which are simple IP masquerading routers as "firewalls." That's really a stretch. It seems quite likely that you are running through masquerading. If that's the case you should be aware that Linux requires a special loadable module in order to support normal FTP through a masqueraded route. It may be that the module isn't there, or that the kerneld/kmod (dynamic module loading mechanisms) aren't properly running or configured, etc. You should have your sysadmin check the error logs on this "firewall" and look for a file like:
/lib/modules/.../ipv4/ip_masq_ftp.o
... or for error messages in the logs that refer to such a beast. That little gizmo handles the active "PORT" connections back to your clients that might be coming from your ISP's FTP server.
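A quick check on the router (as root) would be something like the following sketch (module name as shipped with the 2.2 kernels):

```
lsmod | grep ip_masq_ftp || modprobe ip_masq_ftp   # load the FTP helper
```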
So, it sounds like you need to get someone to properly configure the firewall if you want to use traditional FTP. It also sounds like you have an ISP that has lackluster support (since any decent sysadmin should have been able to explain this to you).
Another option is to use "passive mode" FTP. This still uses two connections (control and data, as before). However, it basically means that the client requests that the server accept all of the connections --- so that no new connections will be "inbound" back to the client. Most newer FTP clients will support passive mode. If you're using the old "shell mode" FTP command try just issuing the command 'passive' at the FTP command's prompt. If it responds with a message like "passive mode on" then you should be able to go from there.
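With the old shell-mode client the toggle looks something like this (responses paraphrased):

```
ftp> passive
Passive mode on.
ftp> ls          # the listing now travels over a client-initiated channel
```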
Under ncftp (a popular FTP client that's almost more common on Linux than the old Berkeley shell-mode program) you would try the command 'set passive on'.
In any case search your man pages for "passive" and/or "PASV" (the protocol keyword) to see if that helps.
Note that most web browsers default to passive mode for all FTP transactions. So one of the common symptoms of this problem is that FTP works through a browser and fails otherwise.
There are a number of places where you can read more about Linux firewalls. One place to check is:
- Linux Administrators FAQ List: Firewalling
- http://www.kalug.lug.net/linux-admin-FAQ/Linux-Admin-FAQ-9.html
... and, of course:
- Firewall and Proxy Server HOWTO
- http://www.linuxdoc.org/HOWTO/Firewall-HOWTO.html
... and the home page of the:
- Freefire Projekt Startpage, English, Bernd Eckenfels
- http://sites.inka.de/sites/lina/freefire-l/index.en.html
... and Dave Wreski's:
- Linux Security Administrator's Guide
- http://www.nic.com/~dave/SecurityAdminGuide/SecurityAdminGuide.html
... and a bit about the Sinus Firewall package (which is under the GPL):
- SINUS Firewall Page
- http://www.ifi.unizh.ch/ikm/SINUS/firewall
... and the Juniper Firewall Toolkit (from Obtuse):
- Juniper
- http://www.obtuse.com/juniper
... and I'm sure that most of those links lead to many others.
So, your sysadmin and your ISP have no excuse for not learning more about firewalls, packet filtering and how to support simple requests and solve simple problems such as this.
From Amir Shakib Manesh on Thu, 08 Jun 2000
Dear Answer Guy, I want to write a shell script in which every 15 minutes it runs a simple command, let's say 'top -b'. Would you help me?
Well one way would be to make a cron entry like:
*/15 * * * * top -b
... which you'd do by just issuing the command 'crontab -e' from your shell prompt. That should put you in an editor in which you can type this line.
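Non-interactively, the same entry can be staged in a file and installed with 'crontab file'. A sketch (the log path is arbitrary, and I've added '-n 1' so each batch-mode run produces a single snapshot instead of looping; the */15 step syntax assumes Vixie cron, which Linux distributions ship):

```shell
# Write the crontab entry to a scratch file (single quotes keep $HOME
# literal so cron expands it at run time).
echo '*/15 * * * * top -b -n 1 >> $HOME/top.log 2>&1' > /tmp/mycron
# crontab /tmp/mycron    # uncomment to actually install the entry
cat /tmp/mycron
```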
Also, is there any way to prevent auto-logout in Unix?
Sincerely,
Amir Shakib-Manesh
UNIX and Linux don't have an "auto-log out" mechanism by default. So if you are getting logged out after some period of inactivity then the details of how you'd bypass that depend on the mechanism that's being used by your sysadmins. Ask them.
(Other than that, one of the possibilities is that they are setting a special variable in your shell, like TMOUT in bash. Read the bash man pages and search on TMOUT to see how that works. There are similar features in some other shells. That's a pretty common and relatively harmless way to encourage users not to leave their prompts unattended. Other methods involve some daemon like "idled" which runs as 'root' and watches for inactivity, pouncing on line hogs with 'kill' signals).
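For example, the bash mechanism amounts to nothing more than a variable; 600 seconds here is an arbitrary illustration:

```shell
# At an interactive bash prompt, this logs the session out after
# ten idle minutes.  Harmless to set in a script; it only bites
# when bash is waiting at an interactive prompt.
TMOUT=600
echo "idle timeout: $TMOUT seconds"
```

Unsetting it ('unset TMOUT') disables the timeout --- unless the admin declared the variable readonly in /etc/profile, which is exactly the kind of policy you'd need to ask them about.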
From Kevin Lampert on Mon, 12 Jun 2000
answer guy,
i have been going through your suggestions off the web for uninstalling
Red Hat, but I have an even bigger problem: I have no rescue disk and I
have no way of getting into Red Hat. The problem originated because a
former employee loaded Red Hat on a PC, and no one knows any way to gain
access to the PC now, since no one knows any of the user names or
passwords that he set. So, my question is "how do I get rid of Red Hat
with no rescue disk and no access into Red Hat?"
Any suggestions you have will be greatly appreciated.
kl
Well, here's a couple of ideas:
First you could break into the system. I've described "recovering lost passwords" on a few occasions and you can search the back issues or the FAQ for details on that. Here's the short form:
- Reboot the system. (Try [Ctrl]+[Alt]+[Del], then try [Ctrl]+[Alt]+[Backspace] followed quickly by [Ctrl]+[Alt]+[Del]. If those fail you, power cycle it).
- As it boots after the BIOS messages and the initial keyboard LED flashes, toggle the [scroll lock] and/or [caps lock] keys. (That should bring up the LILO prompt, even if it wasn't showing up before --- unless it's been configured specially).
- At the LILO: prompt hit the [Tab] key. A list of boot "labels" should appear. Usually one of them will be named "Linux" or "linux" or something like that. Choose any one of those and type its name (case-sensitive) followed by the following string:
init=/bin/sh rw
- Hit [Enter]
Now, if all of that went O.K. you should see the Linux kernel starting up. However, instead of going through the usual init process and running a whole mess of rc* scripts, it should just start a shell.
If you wanted to change the password and regain control of the system at this point you'd type the following commands (ignoring any errors for the moment --- some of them are just to account for common configurations that might not match yours):
mount /usr
passwd # and create a new password...
sync
mount -o remount,ro /
umount /usr
exec /sbin/init 6
... and wait for the system to shutdown and reboot (with your password setting safely saved).
However, you've said that you want to blindly remove Linux from this box. So, at the shell prompt you can type something like:
dd if=/dev/zero of=/dev/hda #DANGEROUS!!!!
... which will zero out the whole primary IDE hard drive.
... of course that's overkill. You could just add count=1 to that command to limit the damage to the MBR (master boot record) which is sufficient that any other OS you try to install should then consider this drive to be empty.
Of course there might be some glitches to this approach. Your former employee might have been a security nut and might have protected the boot sequence with a CMOS/Setup password. He or she might also have put in a LILO password. He or she might not have been using LILO at all -- there are a few alternative boot loaders for Linux.
In the worst case you could use a screwdriver, open the case, take out the hard drive and temporarily put it into another system, and use that to erase the drive. If you do have to go to that extreme, I suggest you take the drive, or the whole system, down to "PC Repair-o-rama" (any place that repairs PCs --- like CompUSA, Fry's, etc). They should be able to do the job for about $70 (and it would take way more than $70 of my time to explain all the possible complications of removing a drive, and temporarily installing it in another computer; especially since I don't even know if that system is SCSI, IDE, or something old or exotic like ESDI, SMD, or ST506 --- RLL/MFM).
A whole other approach would be to download a copy of Tom Oehser's Root/Boot (or any of several other mini-distributions of Linux that fit on a single floppy or a couple of floppies).
You could get Tom's Root/Boot image from http://www.toms.net/rb ... and he does have a .ZIP file with the Linux image, a RAWRITE.EXE utility and instructions on creating a Linux rescue diskette from an MS-DOS prompt.
If you use the root/boot diskette on this errant Red Hat system (assuming that it doesn't have CMOS/Setup passwords that prevent you from doing a floppy boot), then you can use that to wipe out the data on your hard drive using the same 'dd' command I described above. Notice that you should NOT use that command if there is ANYTHING on that hard drive that you want to save.
You can also use Tom's root/boot to change the passwords on your system --- and thereby regain control of it. To do that you'd insert the following commands before my "mount /usr" command above:
mount /dev/hda? /mnt
cd /mnt
chroot . /bin/sh
... where hda? might be hda1, hda2 or something like sda1, sda2
etc. (Explaining that would take a bit longer. hd* are all of the IDE drives; hda is the master on the primary IDE interface, hdb is the slave, hdc is the master on the secondary IDE, etc.; those might include the CD-ROM. sd* are all the SCSI hard drives on the system, sda through sdz (if you had that many). However, SCSI CD-ROMs are NOT included in that list; they get names like scd0, scd1, etc.).
... The command examples I'm giving here are not the BEST way to do this; they are simply the easiest set to explain such that they are most likely to work on the widest variety of systems.
With about 20 million copies of Linux installed, I guess this knowledge has become de rigueur even for NT, Netware, and MS Windows help desk specialists.
From Mark Dellaquila on Mon, 12 Jun 2000
Jim-
Is there any possible way to delete all linux partitions? I want to get
rid of them and reinstall the Mandrake Linux 7.0. Any help would be great. Thanks!
Mark Dellaquila
Boot into Linux (preferably using a rescue diskette or CD; possibly using Mandrake's installation CD and switching to VC 2 --- the second virtual console, which on most Linux installation packages is a command prompt).
Then, for each drive you can run fdisk -l like:
fdisk -l /dev/hda
fdisk -l /dev/hdb
fdisk -l /dev/sda
... etc.
Look for Linux and Linux Swap partitions in each of these listings. On any of them that list a Linux or Linux Swap partition, run fdisk in interactive mode (just leave out the -l option), and delete those partitions.
If you want to wipe out a whole drive (e.g. you don't have any non-Linux stuff on it that you wish to preserve) you can use the 'dd' command to write zeros over all of it, or just the MBR. Here's a couple of example commands:
dd if=/dev/zero of=/dev/hda count=1 bs=512 ## just wipe the MBR
dd if=/dev/zero of=/dev/hda ## blank everything!
... that's basically all there is to it.
From AaronWL on Sun, 18 Jun 2000
hi..
do you know where I could get a working uvfat for Linux 2.2.x? I can't find any info on it anywhere, and I'm getting desperate.
aaron
I presume you're looking for UMSDOS/VFAT support from your kernel. In other words you want to access Win '9x long file names and you want to be able to store UNIX/Linux meta-data (permissions, ownership and group association) on your MS-DOS filesystems.
The uvfat code is included with the mainstream Linux kernel sources. So, just compile a new kernel, and select "Y" and/or "M" (module) for the UMSDOS and VFAT support options.
Be sure to read the "The Linux Kernel HOWTO" (http://www.linuxdoc.org/HOWTO/Kernel-HOWTO.html) for details on building new kernels.
From The Answer Gang on Mon, 19 Jun 2000
Hi,
Is there any way to find out what package a specific program belongs to? And where to download the source to build it? For example, I am trying to figure out how and where to find the source or package for the mount program. I checked the fileutils & sh-utils and can't find it within them.
Any help will be highly appreciated.
Meiji
That would be a "whence" command (which used to exist in some forms of UNIX and even in the old Yggdrasil distributions).
If you're using an RPM based system such as Red Hat, SuSE, TurboLinux, etc., then you can use the following command to find the "owner" or "provider" of a given file:
rpm -qf $FILEPATH
Note that this should use the full name of the file. So in your case try:
rpm -qf $( which mount )
... (you can use backticks instead of the $(...) expression; this is just a less ambiguous syntax under most fonts).
On a dpkg (Debian-based) system such as Debian, Corel, Stormix, LibraNet etc, then you can use the following command:
dpkg -S $FILEPATH
... or you can simply do a grep -l $FILEPATH /var/lib/dpkg/info/*.list
You can get a two-column printout from netscape by using the psutils packages. For letter-sized printouts, just change your "Print Command" in netscape to
pstops -q -w8.5in -h11in -pletter "2:0L@0.7(8.in,-0.1in)+1L@0.7(8.in,4.95in)" | lpr -h
The PSUtils are available at http://www.dcs.ed.ac.uk/home/ajcd/psutils/index.html
You will have to edit the Makefile and set PAPER=letter if you live in North America.
Here's something I stumbled across while installing Mandrake 7.1 this weekend. The installer correctly detects that the machine has more than 64 MB RAM (in my case 128). However, it does _not_ adjust lilo or grub, which it installs as default.
So when you start up without manually editing the config files, you only get 64 MB. Which may not be readily apparent (my 700 MHz machine is like lightning even on 64 MB).
Naturally you may need to add "mem=128M" to your lilo or grub config file.
[See the Mailbag and 2-Cent Tips of the past few issues for a discussion of adding append = "mem=128M"
to /etc/lilo.conf. Sometimes it's required and sometimes it isn't, depending on the BIOS. -Ed.]
Hello gazette!
2-cent tip for new vi users:
If you have to move your left-hand to hit bye
Searching for more information about the i810 chipset I came across
your [The Answer Guy's] discussion about it and Linux.
I had a similar problem with my Linux installation, in that Linux
installed fine and I could utilize the command line without a problem. However,
I had no graphics support, that is to say no XFree86.
The solution to this problem is to be found at support.intel.com, under the
i810 forum site. They have the X server and kernel module and complete
instructions for how to install and use the software. You must however read
the forum posts, as there are a few tricks to the setup procedure.
That being said, I would like to know when/if kernel support will be
provided for the i810 chipset. Actually I would rather learn how to find this
information for myself. If you teach a man to fish, etc....
Here are the disk usage numbers for a kitchen-sink Red Hat Linux 6.0 installation. I got these numbers by installing every package the system installer had to offer on a pair of stock Gateway E-3200's, then pulling the hard drive from one and attaching it in place of the other's CD-ROM drive (that's at hdc). The operating system on the transplanted hard disk was never booted, so as to preserve the exact post-install, pre-boot state of the installation. I fired up the box, mounted hdc, changed the current directory over and did du --max-depth=1 in "/" and "/usr". Then I did it all over again with a different pair of E-3200's to make sure my numbers were consistent. They were. (I was mildly surprised.) My main reason for doing all this is so that I can make educated decisions about my hard disk partitioning (I've had bad experiences following other people's advice on this subject), but of course the data is interesting even in a purely academic sense.
(1) As the result of round-off error, the sum of the individual entries in this column (these columns) in fact comes to 0.01% shy of what it should be.
Note that in the case of both tables, the sum of the component entries in the Disk Usage column (the subdirectories of "/" and "/usr", respectively) is one kilobyte less than the total (the "/" and "/usr" directories, respectively). I suspected round-off error, did du --block-size=1 --max-depth=1 and found that the numbers still disagreed. On intuition, I subtracted the --block-size=1 sums from the --block-size=1 totals and found that the sums were exactly 1,024 bytes less than the totals. Aha! I conclude that the two directory contents listings themselves take up a kilobyte each, and this is supported by the fact that "/proc", "/.automount", "/misc" and "/usr/etc" are all four empty, but are reported by du to occupy one kilobyte each. Where does the kilobyte value come from? Dividing the du --block-size=1 entries each by 1,024, I found that they are all evenly divisible. (That's why I ended up making the tables in kilobytes instead of bytes... there is no round-off error here.) Noting that du measures disk usage and not actual file size, I expect that one kilobyte is the cluster size on my disks, but I don't know much about that. If so, that beats Hell out of FAT32... isn't FAT32 four-kilobyte clusters minimum? My curiosity stopped at this point, but if I were to go on, I'd say the next step would be to start exploring the ext2 file system structure... if directory contents listings take up space alongside "regular" files, then there is probably no file allocation table, and I'd be curious to poke around on an ext2 file system with a disk editor. :-)
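The empty-directory observation is easy to reproduce. A small sketch (the directory name is made up, and the exact figure depends on your filesystem's block size, so you may see 1024, 4096, or something else):

```shell
# An empty directory still occupies one filesystem block according to du:
mkdir -p /tmp/du-demo/empty
du --block-size=1 /tmp/du-demo/empty   # prints one block's worth of bytes
rmdir /tmp/du-demo/empty /tmp/du-demo  # clean up
```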
Here are the disk usage numbers for a kitchen-sink Red Hat Linux 6.2 installation.
(1) As the result of round-off error, the sum of the individual entries in this column (these columns) in fact comes to 0.01% off of what it should be.
Note that in the case of both tables, the sum of the component entries in the Disk Usage column (the subdirectories of "/" and "/usr", respectively) is four kilobytes less than the total (the "/" and "/usr" directories, respectively). I did du -b --max-depth=1 and found that the sums were less than the totals by exactly 4,096 bytes. The directories "/proc", "/opt", "/.automount", "/misc" and "/usr/etc" are all five empty, but are reported by du to occupy four kilobytes each. This is peculiar... if I am correct about cluster size being the relevant issue, why does Red Hat 6.2 use four-kilobyte clusters on the same hardware on which Red Hat 6.0 uses one-kilobyte clusters?
Just use a boot floppy to boot up the machine into single user mode.
From there you can edit /etc/lilo.conf to your liking and run lilo, and
also from there you can delete the encrypted password from either
/etc/passwd or /etc/shadow (the latter if you're using shadow
passwords). Then when you login as root, there will be no password, just
hit enter. Be sure to immediately run passwd and give root a new
password.
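The edit itself is a one-field change. Here's a sketch on sample data (the hash shown is made up, not a real one) of what removing the password amounts to; on a real system, always work on a copy of /etc/shadow first:

```shell
# A shadow-style line; the second colon-separated field is the password hash
# (made-up sample data, not a real hash):
line='root:$1$abcdefgh$XXXXXXXXXXXXXXXXXXXXXX:11000:0:99999:7:::'
# Blanking that field means root has no password -- just hit Enter at login:
echo "$line" | sed 's/^root:[^:]*:/root::/'
# prints: root::11000:0:99999:7:::
```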
Get a rescue disk like tomsrtbt, mount the root partition, and edit
/etc/passwd manually, removing the root password or setting it to the same as
on another computer (it's hashed). Then reboot using the boot disk you made
when you installed Linux, log in as root, and edit /etc/lilo.conf and run lilo.
Hi, noticed your answer regarding "public interfaces" in shared libraries
in the latest Linux Gazette, and I had a couple of comments. (I am a
programmer, and have written several libraries and shared libraries under
Linux.)
There are at least two good reasons to hide functions from public
interfaces:
1) If a function is internal to the library, it may well disappear or
change incompatibly without warning in future versions, so you don't
want to worry about people using it.
Any library will almost certainly contain a large number of such internal
functions, and the code would be utterly unmaintainable if you couldn't
change them between releases because people depended on them.
Of course, it is usually sufficient to simply not document those functions
or declare them in your header files, so that programmers who find out
about them should know that they use them at their own risk. (Some
programmers are foolish enough to do so, even though it is almost never a
good idea. e.g. there was a well-known case where StarOffice had depended
upon internal glibc functions and therefore broke when glibc was upgraded.)
2) If you don't want to pollute the namespace.
If I have an internal function in my library called something generic, like
print_error, I run the risk of accidentally conflicting with a function of
the same name in a calling program, with unpredictable results. One way
around this is to prefix the function with the name of my library, calling
it e.g. foo_print_error if my library is libfoo. But this can be awkward
to do for every little internal function you write, and it is often
preferable to simply hide them from the linker.
There is a solution, however, provided by ANSI C: simply declare your
functions with the "static" keyword, and they will only be visible/callable
within the file they are defined in. This isn't perfect, I suppose,
because they also aren't visible to other files in the same library.
However, it covers the case where foo_ prefixes are most annoying: little
utility functions that are only called within one file.
In Linux, a lot of information about the processes and the system in
general is found in the /proc directory. To get the load average as
output by top, use
cat /proc/loadavg
Information about the memory used by particular processes can
be found in /proc/

Well, Alex's reply is partly right, but I *have* seen a 'lovebug.sh',
so if you would allow your browser to execute it, it could do some
damage. Maybe; I have not tried it.
Assuming you are careful and do not read your email as 'root' the
damage that the virus can do is limited. That's what file permissions
are meant to accomplish.
You need to properly configure your video card. Definitely easier said
than done. You can run Xconfigurator, but I assume you've already done
that. A few tips might help though.
* Instead of running startx, run startx 2>&1 | tee startx.txt
This will tell you which modes were accepted and rejected by X at
startup.
* Make *sure* that you tell Xconfigurator the proper values for your max
and min horizontal and vertical scan rates. Using the defaults
will yield the low performance figures you are probably getting now.
* There's a pretty good writeup on how to configure X in Running Linux
from O'Reilly.
* A lot of the SiS cards are not standard; i.e., one card may be
different from another card of the same model. The point is that even if
the card is properly configured, it still may not work. In that case,
see if you can find yourself a good Matrox card. A Millennium
II is cheap on eBay these days and I consider it to be rock solid.
That card probably came as a default in your PC. Don't feel bad;
you had to buy a real modem too to replace that WinModem that
came with it.
Bob Hepple's tip on using multiple X servers when using an XDMCP session
manager was interesting and informative. I had never even heard of Xnest
and it definitely looks worth investigating.
Personally I stay away from the XDMCP session managers. I like being
able to use my computer without the overhead of a GUI, and I find the
text mode easier on my eyes. I still end up using the GUI quite a bit
and find times when running multiple X sessions, either using different
bit depths and/or resolutions, or for different users, is desirable.
The default startx script from the RedHat distributions has display 0
hardcoded into it. I think this is the default script from the people
who make X, but not being sure, this may not apply if you're not running
RedHat.
There is a line in the startx script which reads:
Replace it with:
This checks the locks that X sets when it starts up, and uses the next
available display.
The echo line isn't needed, but I like feedback.
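The idea behind that replacement can be sketched like this (X puts a lock file at /tmp/.X<n>-lock for each running display; treat this as an illustration of the technique, not the exact RedHat script):

```shell
# Find the first display number with no X lock file, then report it.
d=0
while [ -e /tmp/.X$d-lock ]; do
    d=$((d + 1))
done
echo "Next available display is :$d"
# startx would then be invoked as:  startx -- :$d
```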
When an X session is running, use Ctrl-Alt plus one of the function keys
to go to an available terminal, log in and run startx, and a new X
session will start. Use the Ctrl-Alt function key combinations to go back
and forth between the various terminal and X sessions. You can even
start a new X session by running startx from an xterm (or equivalent)
from within X, but this makes the new X session a child of the original
one, and when the first one is closed, it brings the second one down.
My computer under Red Hat Linux X Window will only run 300x200
graphics. Even if I hit CTRL ALT +, it won't change. I have a SiS620
card with 8 MB. Can you please help? I have spent a lot of time on
the Internet; it seems other people have the same problem but no one
can help.
Off-hand, I can think of two possible causes:
1. the "DontZoom" option is set
2. You have only configured the 300x200 resolution.
Both of these problems can be fixed by editing the XF86Config file. I
don't use Red Hat myself, so I don't know exactly where it is.
Normally, typing "locate XF86Config" should tell you the location.
Inside this file, you should look for:
Option "DontZoom"
If you find this, place a # in front of it.
If you don't find this line, it means your X server is set up to use
only 300x200 as screen resolution. I think the best way to fix this,
is to use Red Hat's X configuration tool, and to add the resolutions
you want.
Last year I bought one of these cheap(er) East-Asian PCs
(like many of us?) with the Energy Star feature (i.e. no more need to
press any button to power off).
But this feature is implemented with M$ Win... and I've no idea of
the way they manage the hardware behind this process.
So, as I recently installed a Corel distribution, I would like to
know if there is any means to power off directly from Linux, and not
Shutdown-And-Restart, Open-M$Windows and Quit-From-There (and
Decrease-My-Coffee-Stock ;-} )
What is the LAST thing you see when you shut down your computer? It should
be "System halted" or "Power down.". If it is "System halted", then auto-power-off is
disabled in the kernel, and you need to recompile it. If it says "Power down."
but the machine doesn't actually turn off, I'm not sure what the problem is, though I've seen it happen.
If it says "The system is halted", but does not then say "System halted" or
"Power down.", something else is wrong. One of my computers crashes on
shutdown half the time, and hangs the other half.
How to configure the typical home PC for mail services via your ISP? This
information, though widely available, is not well-known. Many popular Linux
books gloss over the subject, suggesting that Netscape (and Netscape alone) is
the way to go. Unfortunately, this eliminates many fun, geeky, options like
emacs' Rmail.
Most distributions set up Sendmail and Fetchmail--but configure Sendmail for
a "typical" host machine.
But a home user _can_ figure out how to modify this combination for home
requirements without learning all of Sendmail. And it is relatively painless.
My advice? Read the following short document thoroughly, and follow its
instructions exactly:
http://www.linuxdoc.org/HOWTO/mini/Sendmail-Address-Rewrite.html
Then select the mail client of your choice and mail like the big guys!
Have Fun!
Look at the files and directories under /proc. These are "virtual" files
that are updated by the kernel. As I understand it, most programs that
provide process info and the like merely decode and present info gleaned
from the /proc files.
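A few concrete pokes at /proc, as a sketch (these files exist on any reasonably modern Linux kernel; $$ is the current shell's PID):

```shell
cat /proc/loadavg       # the load averages top reports, plus task counts and last PID
head -2 /proc/meminfo   # system-wide memory totals
ls /proc/$$/            # per-process files for the current shell
```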
Hello!
My letter titled "DNS for home mail not working" was printed in issue #52's
"Mail Bag".
I appreciate the help and attention, but I believe the problem was really
with my service provider. Because my request for help was ignored by the
provider, we chose another and installed a leased line. And here is the
success story. I laid the printouts of JC Pollman's and Bill Mote's articles
before me, followed along - and all was working. Fetchmail got the mail from
our mailboxes, procmail and imap delivered mail, and sendmail handled outbound
mail.
Thank you, Linux Gazette, Mr. Pollman and Mr. Mote, for my first Linux
success.
It has been a very exciting month. Since the
previous article about this project in June's issue of Linux
Gazette, I have received over 40 replies, a virtual host to set up a web
page and mailing list, and an Invitation to
Global Linux 2000 in Seoul,
Korea! Thanks to all of you who wrote and have helped out with resources, ideas and enthusiasm.
I have learned a lot and have a clearer
vision of the actions to take in order to achieve our goals. Also, new dangers to the Linux
community have arisen, or at least I have become aware of them, so now I have a deep sense of urgency
and total commitment to the project. New service ideas are in the works and some companies are
starting to join.
From the e-mail I have received to the friends and contacts I made at Global Linux 2000,
I'm impressed to see how many people want this project to come true. THANK YOU!
More HelpDex cartoons are on Shane's web site,
http://mrbanana.hypermart.net/Linux.htm.
OLinux: Make a short introduction about yourself (career,
job, name, age, and where you live)
Ronny Ko: I'm the Editor-in-Chief for
32BitsOnline.com. I'm in my twenties and
living in Vancouver, BC, Canada
OLinux: What was the group that started 32BitsOnline, and where did they come from? How did they get together? Did this group have other online businesses then? What were the main ideas at the time? Who are they: can you describe them and their backgrounds?
Ronny Ko: The magazine was actually started as a home page. Back then, it was called Ronny's Review Page. Basically, it dealt with my personal software reviews. As more and more people came to the site, the number of contributors
increased. As it grew and given its OS/2 nature then, I decided to name it OS/2 Computing! Magazine.
The amount of content that OS/2 Computing! was able to publish was
directly proportional to the amount of activity in OS/2. OS/2 was quickly
losing ground to Windows 95.
Because of this, we decided to change to 32BitsOnline.com. The focus is to
be the best alternative technical publication anywhere.
OLinux: Describe your daily tasks and routine as an editor. What are your main news and article sources? Are there many volunteers, or how many employees work for 32BitsOnline? How hard is it to present quality content every day? Give us an idea.
Ronny Ko: It takes a lot of hard work. This is especially hard in web journalism. My job as Editor-in-Chief is to oversee editorial direction. Along with a dedicated team, 32BitsOnline is its people.
OLinux: How has the magazine evolved since 1996? What was its initial focus in terms of OS, and how has that changed? How often are upgrades (layout, new features) made? How much has the editorial changed since Linux became a major player and drew the world's attention?
Ronny Ko: 32BitsOnline had already entered the Linux space before Linux became a viable solution against Windows in the enterprise space.
OLinux: What is 32BitsOnline's marketing strategy, and what key alliances keep and promote it on the Internet?
Ronny Ko: 32BitsOnline strives to create key alliances, but first and foremost, 32BitsOnline's first responsibility is to our readership.
OLinux: How profitable can advertising via the Web be? Can a website survive or turn a profit through advertising?
Ronny Ko: If content is paramount on the web, then it is great; but the creation of original content takes a lot of time and resources. In a world where success is valued by how much revenue a company generates, 32BitsOnline/Medullas is holding its own as a private company.
OLinux: Can you tell us about technical aspects: servers used, internet links, 32bits computing network, linux distribution and its programs used, databases?
Ronny Ko: We believe in Open Source. 32BitsOnline/PenguinApps are run on RedHat Linux servers. The database is based on MySQL, with power-outage solutions built in to prevent our service from being affected.
OLinux: How did 32BitsOnline get involved with Linux International, and how does it work with and
help this organization? How does 32BitsOnline interact with the GNU/Linux community? Does it participate actively in any other organizations?
Ronny Ko: 32BitsOnline strives to participate in every community event. By joining LI, we believe that a strong singular voice is more potent than each of our individual ones. Linux International was an easy decision.
OLinux: Does 32BitsOnline take part in any important events like InstallFests, Expos and Conferences?
Ronny Ko: 32BitsOnline was a Media Sponsor in the last Linux World Expo. We're continuing to expand our support into other community events. If there are events which 32BitsOnline can sponsor, we welcome anyone interested to contact us at pr@32bitsonline.com.
OLinux: How do you see and project Linux growth for the next five years? There is already an uncountable number of commercial, company and community Linux distributions. Do you think this fragmentation and excessive number of options raises consumers' fear when choosing one?
Ronny Ko: It is true that today there is major fragmentation in terms of the number of distributions. What I can see is a trend towards consolidation, whereby the smaller distributions will consolidate into larger ones. In my opinion, this will be the only way to show potential Linux users that there's "one" Linux. If you walk into your local computer superstore, you'll see countless distributions. If I'm new to Linux, how am I supposed to pick one distribution?
Technically, Linux is extremely united. Linus Torvalds has done an impressive job of keeping fragmentation in check.
With variety, consumers have excellent diversity. Much like nature, natural selection will eventually select for the strongest while the weaker will slowly die or consolidate.
OLinux: Talk about PenguinApps: when was it started and what is the main idea behind it? Does it have good traffic: give us some numbers. How often is it updated?
Ronny Ko: PenguinApps is still a very new product. We realized very early on that Linux users needed a site in which software should be easy to find. PenguinApps is updated daily. But more importantly, it is a complementary sister site for 32BitsOnline. When users read an article, users can expect to be able to download the software right here from
PenguinApps.com.
This article is the first in a series of two, where the reader
will be introduced to the Journal File Systems: JFS, XFS, Ext3, and
ReiserFs. Also we will explain different features and concepts related
to the new file systems above. The second article is intended to review
the Journal File Systems behaviour and performance through the use of
tests and benchmarks.
The logical block is the minimum allocation unit presented by the file
system through the system calls. That means that storing fewer bytes than
the logical block size within a file would still consume a whole logical
block of disk space. Therefore, if our block size doesn't evenly divide
a particular file's size (file size MOD block size != 0), the
file system will allocate a final block that won't be completely full, causing
a waste of space. That waste of space is internal fragmentation. Notice
that the bigger the logical block is, the bigger the internal fragmentation
should be.
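The arithmetic is easy to sketch. With made-up example numbers (a 5000-byte file on a file system with 4096-byte logical blocks), the last block is only partly used:

```shell
file_size=5000   # bytes (example value)
block_size=4096  # bytes (example value)
# Blocks allocated = file size rounded up to a whole number of blocks:
blocks=$(( (file_size + block_size - 1) / block_size ))
# Internal fragmentation = allocated space minus actual file size:
waste=$(( blocks * block_size - file_size ))
echo "$blocks blocks allocated, $waste bytes wasted"
# prints: 2 blocks allocated, 3192 bytes wasted
```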
External fragmentation is a situation in which logical blocks
of a particular file are scattered all over the disk, causing operations
over that file to be slower, since more hard-disk head movements are
needed.
Extents are sets of contiguous logical blocks used by several file systems
and even database engines. An extent descriptor is something like (beginning, extent size, offset), where beginning is the block address where the
extent begins, the extent size is the size in blocks, and offset is the
offset that the first byte of the extent occupies within the file.
Extents enhance spatial locality, since the blocks within an extent
are all contiguous. That increase will lead to better scan times, as
fewer head movements need to be performed. Realise that using extents
reduces the external fragmentation drawback, since more blocks are kept
spatially together. But notice that extent usage isn't always a benefit.
In case our applications request extents near in size to the logical block's,
we would lose the extents' benefits, resulting in many small extents that would
merely appear as logical blocks. To round off the performance benefits,
extents improve multi-sector transfer chances and reduce the number of
hard disk cache misses.
Finally, I would like you to realise that extents also provide a
way to organise large amounts of free contiguous space efficiently. Using
extents helps reduce the amount of disk space required to track
free blocks, and even enhances performance.
The B+tree structure has been used in database indexing structures
for a long time, providing databases with a scalable and fast way
to access their records. The "B" in Btree is usually read as "balanced".
The + sign means that the B+tree is a modified version of the original
Btree; the modification consists of maintaining pointers from each leaf
node to the next, in order not to sacrifice sequential access.
As Btrees and B+Trees have been inherited from database technology,
we are going to use a database analogy to explain them.
The B+trees have two different types of nodes: the internal nodes and
the leaf nodes. Both of them consist of sets of pairs like (key, pointer),
ordered by the key value in an ascending manner and a final pointer
which does not have a corresponding key. Whereas internal node pointers
point to other internal or leaf nodes, leaf node pointers point directly
to the final information. Every pair's key is used
to organise the information within the B+Tree. In databases, each
record has a key field, a field whose value distinguishes
that record from other records of the same kind. Btrees take advantage
of that key to index database records for better access times.
As we said earlier, an internal node pair (key, pointer) is used
to point to either another internal node or a final leaf node. In both
cases, the key that comes with the pointer will be greater than all the
keys stored in the target node. Therefore, records with a key value equal
to a certain pair's must be addressed by the next pair within the
node. This is the main reason a final pointer with no corresponding
key exists: once a key is used within a pair, there must be another
pointer to address the records with that key value. In the leaf nodes,
that final pointer points to the next leaf, so that we can
still visit the contents sequentially.
B+Trees also have to be balanced. That means the length of the
path taking us from the tree's root to any leaf node should always be the
same.
Moreover, the nodes within a BTree must contain a minimum number
of pairs in order to exist. Whenever a node's content drops below that
minimum, its pairs are shifted to another existing node.
In order to locate a specific record, we would do the following. Let's
suppose we are looking for a record with a certain key, "K".
We begin at the root node and scan sequentially through the keys stored
within it, until we find a key greater than "K". Then we follow the
accompanying pointer to the node it addresses (internal or leaf; we
don't know yet). If we are taken to another internal node, we repeat the
same operation. Finally, we are directed to a leaf node, where we
scan sequentially until we find the desired key "K". As fewer blocks have
to be retrieved to get the desired one, this technique is of lower
complexity than a sequential scan, where in the worst case we would
visit all the entries.
UFS is the name of the file system SCO, System V and some other UNIXes used
at the beginning. The Linux kernel includes optional support for UFS. Most
UNIXes continue to use UFS, although now with minor custom enhancements.
A kernel layer that provides a unified application programming interface
for file system services, irrespective of which file system a file resides in.
All file system implementations (vfat, ext2fs, jfs, etc.) must therefore provide
certain VFS routines in order to be usable under Linux. VFS is the kernel
layer that makes user applications able to understand so many different file
systems, even commercial ones.
I think we all know what a write cache is: a buffer allocated
in main memory, intended to speed up I/O operations. This kind of buffer
is commonly used by file systems (the disk cache) and databases to increase
overall performance. The problem appears if there is a system crash before
the buffers have been written to disk: the system would then behave
inconsistently after reboot. Think of a file deleted in
the cache, but still present on the hard disk. That's why databases and file
systems have the ability to recover the system back to a consistent state.
Although databases have recovered quickly for years, file systems, and
more precisely UFS-like ones, tend to see their recovery time increase as file
system size grows. The fsck recovery tool for ext2fs has to scan through
the entire disk partition in order to bring the file system back to a consistent
state. This time-consuming task often creates a lack of availability for
large servers with hundreds of gigabytes or sometimes terabytes. This is
the main reason for file systems to inherit database recovery technology,
and thus the appearance of Journal File Systems.
Most serious database engines use what is called a transaction. A transaction
is a set of single operations that satisfy several properties, the so-called
ACID properties: Atomicity, Consistency, Isolation
and Durability. The most important one for our explanation is Atomicity.
This property implies that all operations belonging to a single transaction
are either completed without errors or cancelled, producing no changes. This
feature, together with Isolation, makes transactions look like
atomic operations that can't be partially performed. These transaction
properties are enforced by databases because of the problems of keeping
consistency while exploiting concurrency. To achieve them, databases log
every single operation within a transaction into a log file.
Not only the operation names are logged, but also the contents of the
operations' arguments before each operation executes. After every single
transaction there must be a commit operation, which forces the buffers to be
written to disk. Therefore, if there is a system crash, we can trace the log
back to the first commit statement, writing each argument's previous content
back to its position on disk.
Journal file systems use the same technique to log file system
operations, enabling the file system to be recovered in a short period
of time.
One major difference between database and file system journaling
is that databases log user and control data, while file systems tend to
log metadata only. Metadata are the control structures inside a file
system: i-nodes, free block allocation maps, i-node maps, etc.
There are two major problems with old structures:
Most new file systems have widened their number of bits for some fields,
in order to overcome previous limitations. The new limits for those file
systems are:
Actually, the maximum block device size limits the file system
size to 2TB, and there is also a VFS limit of 2GB on file sizes. The good
news is that we now have file systems able to scale up, and once the 2.4
kernels come out, I am sure the limits will be extended. Notice also that
JFS and XFS are ports of commercial file systems; they were designed for other
operating systems where these limitations didn't exist.
Most file systems maintain structures in which free blocks are tracked.
The structures often consist of a list where all the free blocks' numbers
are kept; that way, the file system is able to satisfy applications'
storage requests. UFS and ext2fs use what is called a bitmap for free
block tracking. The bitmap consists of an array of bits, where each bit
corresponds to a logical block within the file system's partition, and each
block's allocation state is reflected in its related bit: a logical "1"
value could mean the logical block is in use, and a "0" could mean the block
is free. The main problem with this kind of structure is that as the file
system grows, the bitmap grows in size as well, since every single block
within the file system must have a corresponding bit within the bitmap. As
long as we use a "sequential scan algorithm" for free blocks, we will notice
a performance decrease, since the time needed to locate a free block grows
as well (worst-case complexity O(n), where n is the bitmap's size). Notice
that this bitmap approach isn't that bad when the file system size is
moderate, but as size grows, the structure behaves worse.
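A toy model in shell makes the idea concrete: represent the bitmap as a string of "1" (allocated) and "0" (free) bits, and find the index of the first free block with what amounts to a left-to-right scan:

```shell
# Hypothetical bitmap: bit i describes logical block i.
bitmap="11101101110111110"
used=${bitmap%%0*}        # cut the longest suffix beginning at the first '0'
echo "first free block: ${#used}"   # prints: first free block: 3
```

However it is expressed, this is still an O(n) scan: every bit up to the first free one has to be examined, and the string (like the real bitmap) grows with the file system.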
The solution provided by the new-generation file systems is
the use of extents together with B+Tree organisation. The extents approach is
useful since it can locate several free blocks at the same time. Extents
also provide a way to reduce the structure's size, since more logical
blocks are tracked with less information; a bit for each block is
no longer needed. Furthermore, with extents, the free-block structure's
size no longer depends on the file system size (it depends on
the number of extents maintained). Nevertheless, if the file system were so
fragmented that an extent existed for every single block in the file system,
the structure would be bigger than the bitmap approach's. Notice that
performance should be significantly increased if our structure kept the free
blocks only, since fewer items have to be visited. Even when the extents
are organised into a list and sequential scan algorithms are used,
performance is increased, since the structure packs several
blocks within an extent, reducing the time needed to locate a certain number
of free blocks.
The second approach to overcoming the free-blocks problem is the use
of structures that allow lower-complexity scan algorithms. We
all know there are better ways of organising a set of items for later
lookup than lists with sequential scan algorithms.
B+Trees are used because they are able to locate objects quickly; thus,
the free blocks are organised into B+Trees instead of lists, in order to
take advantage of better scan algorithms. When several free blocks are
requested by applications, the file system traverses the main
"free blocks B+Tree" in order to locate the free space required. Also,
there is a "B+Trees + extents" approach, where not blocks but extents
are organised within the tree. This approach makes different indexing
techniques possible: indexing by extent size and by extent position are
implemented techniques that let the file system locate several free blocks
quickly, either by size or by location.
All file systems use a special fs object called a directory. A directory,
from the file system's point of view, is a set of directory entries. These
directory entries are pairs (i-node number, file name), where the "i-node
number" identifies the i-node, the fs-internal structure used to
maintain file-relevant information. When an application wants to look for a
certain file within a directory, given its file name, the "directory entries
structure" needs to be traversed. Old file systems organised the directory
entries within a directory into a list, leading to sequential scan
algorithms. As a consequence, with large directories where thousands of
files and other directories are stored, performance would be really low.
This problem, like the free-blocks one described above, is tightly related
to the structure used. New-generation fs need better structures and
algorithms to locate files within a directory quickly.
Solution provided:
The file systems being reviewed use B+Trees to organise the directory
entries within a directory, leading to better scan times. In those fs,
the directory entries for every single directory are organised into a B+Tree,
indexing the directory entries by name. Thus, when a certain file under
a given directory is requested, the directory B+Tree would be traversed
to locate the file's i-node quickly. How the new fs use B+Trees is
file-system dependent: some maintain a B+Tree for each
single directory, while others maintain a single B+Tree for
the whole file system directory tree.
Some old file systems were designed with certain patterns of file usage
in mind. Ext2fs and UFS were designed with the idea that the file
systems would contain small files mainly. That's why the ext2fs and UFS
i-nodes look as they do. For those of you who still don't know what an
i-node is, we are going to explain the i-node structure briefly.
An i-node is the structure used by UFS and ext2fs to maintain file-dependent information. The i-node is where the file permissions, file type,
number of links, and pointers to the fs blocks used by the file are maintained.
An i-node contains some direct pointers that are pointers (block addresses)
to a file system's logical blocks used by the file it belongs to. i-nodes
also contain indirect pointers, double-indirect pointers and even a triple-indirect pointer. Indirect pointers are pointers (addresses) to blocks
where other pointers to logical blocks are stored. Thus, double-indirect
pointers are pointers to blocks that contain indirect pointers, and triple-indirect pointers are pointers to blocks containing double-indirect pointers.
The problem with this addressing technique is that as the file size grows,
indirect, double-indirect and even triple-indirect pointers come into use.
Notice that the use of indirect pointers leads to a higher number of disk
accesses, since more blocks have to be retrieved in order to get the block
required, so retrieval time increases as file sizes grow. You may be
wondering why the ext2fs designers didn't use direct pointers only, as they
have been proven faster. The main reason is that i-nodes have a fixed size;
using direct pointers only would make i-nodes as big as the largest number
of direct pointers that might ever be needed, wasting much space for small
files.
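The classic arithmetic behind this is easy to reproduce. Assuming ext2-style 1KB blocks and 4-byte block addresses (so each indirect block holds 256 pointers), and the usual 12 direct pointers in the i-node:

```shell
bs=1024                 # logical block size in bytes
ptrs=$(( bs / 4 ))      # pointers per indirect block: 256
# direct + indirect + double-indirect + triple-indirect
blocks=$(( 12 + ptrs + ptrs*ptrs + ptrs*ptrs*ptrs ))
echo "max addressable file: $(( blocks * bs )) bytes"   # about 16 GB
```

Note that everything past the first 12 blocks costs one, two or three extra block reads just to find the data block's address.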
In order to minimise the use of indirect pointers, we could think of
using bigger logical blocks. That would lead to a higher information-per-block
ratio, resulting in less use of indirect pointers. But bigger
logical blocks increase internal fragmentation, so other techniques
are used. The use of extents to collect several logical blocks together
is one of those techniques. Using extents instead of block pointers has
the same effect as bigger blocks, since a higher "information per addressed
unit" ratio is achieved. Some of the reviewed file systems use extents
to overcome the large-file addressing problems. Moreover, extents can
be organised within a B+Tree indexed by their offset within the file,
leading to better scan times. New i-nodes usually maintain some direct
pointers to extents, and in case the file needs more extents, those are
organised within a B+Tree. In order to keep performance high when accessing
small files, the new-generation file systems store file data within the
i-node itself, so that whenever we get a file's i-node, we also
get its data. This is an especially useful technique for symbolic links,
where the data within the file is really small.
(1) JFS uses a different approach to organise the free blocks. The
structure is a tree, where the leaf nodes are pieces of bitmap instead of
extents. Actually the leaf nodes are the representation of the binary buddy
technique for that specific partition (Binary Buddy is the technique used
to track and then collect together contiguous groups of free logical blocks,
in order to achieve a bigger group). As we said when discussing the bitmap-based technique, every single bit on the bitmap corresponds to a logical
block on disk. The value of a single bit could then be "1", meaning
the block is allocated, or it could be "0", meaning the block is
free. The pieces of bitmap, each of which contains 32 bits, can be read
as a hex number; a value of "FFFFFFFF" would then mean that the
blocks corresponding to the bits of that sub-bitmap are all allocated.
Finally, making use of that allocation number and other information, JFS
builds a tree in which a group of contiguous blocks of a certain size can be
located quickly.
(2) This file system's core is based on B*Trees (an enhanced version
of the B+tree). The main difference is that every file system object is
placed within a single B*Tree: there aren't different trees for
each directory; instead, each directory is a sub-tree of the main file
system one. That sort of use requires ReiserFS to have more complex indexing
techniques. Another major difference is that ReiserFS does not use extents,
though they are planned to be supported.
(3) ReiserFS organises every file system object within a B*Tree.
Those objects (directories, file blocks, file attributes, links, etc.)
are all organised within the same tree. Hashing techniques are used
to obtain the key field needed to organise items within the tree. The best
part is that by changing the hashing method applied, we change the
way the fs organises the items, and their relative position within the
tree. There are hashing techniques that help maintain spatial locality
for related items (directory attributes with directory entries, file
attributes with file data, etc.).
Let's suppose we create a new file and write a
couple of bytes at the beginning. So far, so good. But what if
we now write at offset "10000" within that file? The file system
should then look for as many blocks as needed to cover the gap between
offset 2 and offset 10000, and that could take a while. The question is:
why should the fs allocate those blocks in the middle, if we are not
interested in them? The answer to that question is the sparse file support
provided by the new file systems.
Sparse file support is tightly related to the extent-addressing
technique for a file's blocks; it takes advantage of
the "offset within the file" field of the extent descriptors. Thus, whenever
the file system would otherwise have to look for free blocks just to fill
a gap like the one described above, it instead simply sets up
a new extent with the corresponding "offset within the file" field.
Thereafter, whenever an application tries to read one of the bytes within
the gap, a "null" value is returned, as there is no information there.
Eventually, the gap is filled in by applications that write at offsets
within it.
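On Linux this behaviour is easy to observe from the shell; whether the hole actually goes unallocated depends on the underlying file system:

```shell
# Write a single byte at offset 9999; offsets 0-9998 become a "hole".
dd if=/dev/zero of=/tmp/sparse.dat bs=1 count=1 seek=9999 2>/dev/null
ls -l /tmp/sparse.dat    # apparent size: 10000 bytes
du -k /tmp/sparse.dat    # blocks actually allocated: typically far fewer
```

Reading the hole returns zero bytes, which is the "null" value mentioned above.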
When we discussed internal fragmentation and file system performance,
we said administrators often have to choose between performance
and wasted space. If we now look at the first table, we see that the new
fs are able to manage blocks up to 64KB in size. Blocks of that size, and
even smaller ones, would produce a significant waste of space due to internal
fragmentation. In order to make the use of big block sizes feasible, ReiserFS
implements a technique that solves the problem.
As we said earlier, ReiserFS uses a B*Tree to organise the file
system objects. These objects are the structures used to maintain file
information (access times, file permissions, etc.; in other words, the
information contained within an i-node), the directories, and the files'
data. ReiserFS calls those objects stat data items, directory items and
direct/indirect items, respectively. The indirect items consist of pointers
to unformatted nodes (logical blocks with no given format, used to store
file data), while the direct items consist of file data itself. These items
are of variable size and are stored within the leaf nodes of the tree,
sometimes together with other items when there is enough space within the
node. This is why we said before that file information is stored close to
file data: the file system always tries to put the stat data item and the
direct/indirect items of the same file together. Realise that, as opposed
to direct items, the file data pointed to by indirect items is not stored
within the tree. This special management of direct items exists to support
small files.
The direct items are intended
to keep small files' data and even the tails of larger files. Several
tails can therefore be kept within the same leaf node, producing an
important decrease in wasted space. The problem is that keeping
the files' tails together increases external fragmentation, since
the file's data is now further from its tail. Moreover, packing
tails is time-consuming and leads to a performance decrease, a
consequence of the memory shifts needed whenever someone appends data
to a file. Anyway, the tail-packing technique can be disabled if the
administrator wants, so it is once again an administrator's choice.
One major problem of "UFS-like" file systems is the use of a fixed number
of i-nodes. As we explained before, the i-nodes contain the information
related to every file system object; thus, a fixed number of i-nodes
constrains the maximum number of objects that can be maintained within the
file system. If we ever used up all the i-nodes of the file system, we would
have to back up the partition and then reformat it with a higher number of
i-nodes. The reason for this fixed number is that "UFS" uses fixed-size
structures to track i-node state, in the same manner as free blocks. Also,
"UFS" allocates i-nodes at well-known positions within the file system, so
no i-node-to-logical-block mapping is needed. The problem appears when
system administrators have to guess the maximum number of objects their file
systems will have to manage. Notice that it is not always a good policy to
create the biggest possible number of i-nodes, since the disk space needed
for them is reserved (it can't be used for other purposes), and this can
waste much space.
To overcome that problem, dynamic i-node allocation appeared. The dynamic
allocation of i-nodes avoids the need for system administrators to guess
the maximum number of objects at format time. But the use of dynamic
techniques leads to other problems: i-node-to-logical-block mapping
structures, i-node tracking structures, etc. The file systems reviewed use
B+Trees to organise the allocated i-nodes of the file system. Furthermore,
JFS uses "i-node extents" that form the leaf nodes of its B+Tree and keep
up to 32 i-nodes together. There are also structures that help allocate
i-nodes close to other file system objects. Consequently, dynamic i-node
allocation is complex and time-consuming, but it helps broaden old file
systems' limits.
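On a running system, the fixed i-node budget and its consumption are visible with df (column layout varies slightly between versions):

```shell
# -i switches df from block usage to i-node usage:
# Inodes (total, fixed at format time on a UFS-like fs), IUsed, IFree, IUse%
df -i /
```

If IFree ever reaches zero on an ext2 partition, the only cure is the backup-and-reformat cycle described above (mke2fs accepts -N to set the i-node count).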
*(4) As we explained in "the ReiserFS internal fragmentation solution"
section, ReiserFS makes use of stat_data items to store file-dependent
information. The number of hard links, the file owner id, the owner group
id, the file type, permissions, file size, etc., are all stored within the
stat_data item of the corresponding file. The stat_data item thus replaces
the i-node's usage, except for the pointers to file blocks. Furthermore,
ReiserFS items are created dynamically and organised within the main file
system B*tree, which leads us to dynamic i-node allocation. Finally, every
single file system item has a related key field, which serves to locate the
item within the B*tree. This key has a number of bits at the end dedicated
to item-type identification, letting us know whether the item is a
stat_data, direct, indirect, etc. Therefore, we could say that i-node
organisation is performed through the B*tree usage.
*(5) Currently, ReiserFS sparse files support is not as fast as
it was intended to be. This problem is scheduled to be fixed with ReiserFS
release 4.
The author would like to thank Stephen C. Tweedie, Dave Kleikamp,
Steve Best, Hans Reiser, the JFS and the ReiserFS mailing list guys for
the fruitful conversations and answers.
I'm a command line guy. I know that on a modern Linux system I can
point and click my way through the world as if I were illiterate, or a
Windows user, but I'm most comfortable in a Linux virtual console with
my Bash prompt. I was using Linux happily for two years before I ever
installed X (which I did only when the World Wide Web got to where it
was unusable without a graphical browser). I used to keep my mouse on
the floor.
But still, there are times when typing out commands is really
annoying, like to read my mail twenty times a day. Infamous
two-character Unix commands, aliases, and word completion can only go
so far to ease the keystroke burden. So I set up my F2 key to
bring up the mail in one touch. F1 edits a certain file to
which I refer throughout the day. Other keys type out option strings,
filenames, and directory names that I used to type a lot.
I can put any command or part of a command on any key on the keyboard,
and with the Alt and Control shifts, plus that pointless numeric
keypad, not to mention the F keys, there are plenty from which to
choose.
If you don't know how to do this, read on; it's not hard. But I'm going
to give a little background on keyboard stuff first.
I've only worked with the IBM standard keyboard attached to an IBM
(ISA) type computer, and some of the gritty details below may not
apply to your keyboard. But I know the basic techniques work on any
Linux keyboard.
Bash gets all of its commands (by "command," I mean your response to
its command prompt) via the GNU Readline library. Readline is a
subroutine library any program can use to get a line of input from the
keyboard. The advantage to a program of using Readline instead of
just doing an ordinary file read of the terminal is that the Readline
code lets the user do fancy editing of the line and perform a variety
of magic to build up the line the way he wants before Readline passes
it on to the program. All that command line editing that you do at a
Bash prompt, such as backspace, delete word, history recall, and
insert, are done not by Bash itself, but by the Readline subroutine
that Bash calls.
Bash (also a GNU product) is the premier user of Readline and tends to
get credit for all these fancy line editing functions (there are about
sixty of them), and in fact they are described in the Bash man page.
(And why not, if millions of users think amazon.com is a feature of
AOL?) But all Bash does is call routines in the Readline library, and
many other programs call the same routines and have the same line
editing capability. Gdb, for example, and Postgresql's SQL prompt
(Psql), and some Ftp clients.
Readline gets a stream of characters from the terminal (and it can
be any old terminal — not just a Linux virtual console) and
recognizes certain sequences and executes certain functions when it
sees them. For example, when it sees an ordinary character such as E,
it inserts an E into the line it is building.
You get to choose what Readline does when it sees some character
sequence via a Readline configuration file, which is normally called
.inputrc in your home directory.
The Readline function we will be using is the one to insert a string
into the line being built. To make the first example easy, we will do
something ridiculous: Assign the string ps -a --forest to the
character z. Once we do this, we will not be able to type
the letter z in any command, so it is truly ridiculous.
To do this, we add the following to our ~/.inputrc (if it
doesn't already exist, just make this the only line in a new file):
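In Readline's inputrc syntax, that assignment is a line like:

```
"z": "ps -a --forest"
```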
After doing this, you should find that when you hit the
z key, the characters ps -a --forest appear in
your command line buffer. Hit Enter and the ps
command executes. You will find that you never have to type the
ps command out in full again.
But let's be more reasonable and put this ps command on
the F1 key. That's more complicated because pressing the
F1 key does not cause a single typeable character to be sent to
Readline. Instead, it causes a terminal-type-dependent sequence of
characters to be sent. Let's concern ourselves with a Linux console
only, and one that's using the default Linux console configuration.
In that case, F1 sends the four characters Escape,
[, [, and A.
But don't take my word for it. You can prove it by using
Readline's quoted-insert function, which you should find
bound to Control-V. quoted-insert means put the
following character into the line instead of executing any function
that might be assigned to it. You need this to keep Readline from
trying to interpret that Escape character. So at a Bash
prompt, type Control-V followed by F1. As Readline
places the Escape and the three characters after it in the
input line, it naturally echoes them so you can see them. The Escape
character probably types out as ^[, which means
Control-[, which is another name for Escape. This trick
is the easiest way to find out the exact sequences generated by
essentially any key on your keyboard.
Knowing that F1 sends
Escape-[-[-A, we just need to
put that into ~/.inputrc. Putting an Escape character
into a file isn't pretty with any editor. Readline helps you out by
accepting \e in the configuration file to represent
Escape. So replace that z assignment above with the
following in ~/.inputrc:
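With \e standing in for Escape, the F1 binding is a line like:

```
"\e[[A": "ps -a --forest"
```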
Now if you're up for something more sophisticated than logging out
and in again, just hit Control-X Control-R. That should
reload the Readline configuration file. Now press F1 and you'll get
ps -a --forest.
But having to hit Enter after F1 ruins everything.
It's like having to get up to reach the TV remote.
Readline makes a special accommodation for this: end your inserted
string with a Carriage Return (that's Control-M) and
Readline submits the line to the caller (Bash) after it inserts the
rest of the string. If you don't think this is a special
accommodation, because Carriage Return is simply what one types
to submit a command, think again. It's Readline that interprets a
Carriage Return that you type and decides to submit the
command. Readline interprets character sequences you type, not
sequences that it inserts because you typed something else. You'll
find that no other control characters in your insert string do
anything other than insert a control character into the line.
It's too bad Readline doesn't provide a printable sequence like
\s to say "submit this," because you'll have to figure out
how to make your editor deal with a control character in your file.
(Hint: in Emacs, check out Control-Q).
Anyway, put the following in your ~/.inputrc, reload, and
you'll see that you have a one-touch ps command.
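The binding, with a literal Control-M character (shown here as ^M) at the end of the macro string, is a line like:

```
"\e[[A": "ps -a --forest^M"
```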
^M here means Control-M.
For Alt and Control shifted keys, use the syntax C-x and
M-x in ~/.inputrc (M is for Meta, a
forerunner of the Alt key).
See the Readline User's Guide, available wherever fine Info documents
are hyperlinked on your system, for all the details. The man page for
the Readline subroutine also works.
Now I should point out a few cases where things won't work as you
expect because your keystrokes are interpreted at a level below
Readline.
First of all, the tty device driver (that's a driver a level above
the actual keyboard device driver) recognizes a few special
characters, as controlled by the stty program. Readline
turns off most of this tty interference by placing the console in raw
tty mode, but Control-S, Control-Q, Control-C,
and Control-Z are likely never to make it to Readline, being
hijacked by the tty driver and acted on accordingly.
Then there's the keyboard driver. It lets you customize every key,
and I don't mean at the same level as Readline. You can make the left
shift key generate a q character if you're feeling a little
psychotic. More important, the keyboard driver assigns certain
console functions to certain keystrokes, which means those keystrokes
will not generate anything that gets sent up to the tty driver, and
then to Readline. For example, the driver normally associates
Alt-F1 with "switch to Virtual Console 1." So don't even try
to program Readline to insert the name of your Napster pirated music
directory when you press Alt-F1.
Under X (in, say, an xterm window), the Linux keyboard
device driver is mostly bypassed, with the X server substituting its
own driver. So the keys won't necessarily generate the same character
stream, to be seen by Readline, as you would see from a regular Linux
virtual console.
If you're interested in the wide world of keyboard mapping, start
with the
Keyboard-And-Console HOWTO and also read the Readline User's Guide
and of course documentation for X.
<Shrug> What the heck; I've already gone parasailing and scuba-diving
this month (and will shortly be taking off on a 500-mile sail up the
Gulf Stream); let's keep living La Vida Loca!
The built-in parsing capabilities of bash are considerable. As an example, let's say that you need to differentiate between
lowercase and capitalized filenames in processing a directory - I ended
up doing that with my backgrounds for X, since some of them look best
tiled, and others stretched to full-screen size (file size wasn't quite
a good-enough guide). I "capped" all the names of the full-sized pics,
and "decapped" all the tiles. Then, as part of my random background
selector, "bkgr", I wrote the following:
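Here is a sketch of the kind of test involved - the filename variable, the sample name and the echo actions are my assumptions, not the actual "bkgr" code:

```bash
# Hypothetical filename; the real ones were X background images
fn="Sunset.jpg"

# ${fn%%[A-Z]*} cuts the longest match of [A-Z]* from the end; if the
# name starts with a capital, that match swallows the whole string.
if [ -z "${fn%%[A-Z]*}" ]
then
    echo "full-screen: $fn"     # stretch this one
else
    echo "tiled: $fn"           # tile this one
fi
```

The operators doing the work here come from bash's parameter expansion family: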
${#parameter} - return length of the parameter value.
${parameter#word} - cut shortest match from start of parameter.
${parameter##word} - cut longest match from start of parameter.
${parameter%word} - cut shortest match from end of parameter.
${parameter%%word} - cut longest match from end of parameter.
${parameter:offset} - return parameter starting at 'offset'.
${parameter:offset:length} - return 'length' characters of parameter.
${parameter/pattern/string} - replace single match.
${parameter//pattern/string} - replace all matches.
There's actually a bit more to it - things like variable indirection,
and parsing arrays - but, gee, I guess you'll just have to study that
man page yourself. Just consider this as motivational material.
So, now that we've looked at the tools, let's look back at the code -
Note that, since we're matching the entire string, ${fn%%[A-Z]*}
would work just as well. If that seems confusing - if _all_ of the
above seems confusing - I suggest lots and lots of experimentation to
familiarize yourself with it. It's easy: set a parameter value, and
experiment, like so -
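For instance, with an arbitrary value:

```bash
fn="Backdrop.jpg"      # an arbitrary value to play with
echo "${#fn}"          # length of the value: 12
echo "${fn%.jpg}"      # shortest match cut from the end: Backdrop
echo "${fn##*.}"       # longest match cut from the start: jpg
echo "${fn/jpg/png}"   # single replacement: Backdrop.png
echo "${fn:0:4}"       # four characters starting at offset 0: Back
```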
There are times - say, in testing for a range of error conditions
that set different variables - when we need to know whether a
specific variable is set (has been assigned a value) or not. True, we
could test it for length, as I did above, but the utilities provided
by bash are made for the job:
${parameter:-word} - If parameter is unset, "word" is substituted.
${parameter:=word} - If parameter is unset, set it to "word" and
substitute the new value.
${parameter:?word} - Display "word" as an error message if parameter is unset.
${parameter:+word} - Substitute "word" only if parameter is set.
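A quick demonstration of the first, second and fourth forms (the variable name is arbitrary):

```bash
unset dir
echo "${dir:-/tmp}"   # dir is unset, so /tmp is substituted; dir stays unset
echo "${dir:=/tmp}"   # same substitution, but this time dir is also assigned
echo "$dir"           # proof: now prints /tmp
echo "${dir:+set}"    # dir is set, so 'set' is substituted
```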
Another built-in capability of bash is array handling. Let's look at what this might involve. Here's a clip of a notional
phonebook to be used for the job:
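The clip and the loop below are my own invention - the names, numbers and the colon-delimited format are all assumptions - but they show the general array-based approach:

```bash
#!/bin/bash
# A notional phone list: last name, first name, number (all invented)
phonebook="Smith:John:555-0123
Jones:Mary:555-0456"

# Split each line on ':' into an array, then act on the fields
echo "$phonebook" | while IFS=: read -r last first phone
do
    record=("$last" "$first" "$phone")
    echo "Calling ${record[1]} ${record[0]} at ${record[2]}"
done
```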
The array processing capabilities of bash make short work of this. Note that the above script can be easily generalized - as an example,
you could add the ability to specify different phone-lists, criteria, or
actions, right from the command line. Once the data is broken up into an
easily-addressable format, the possibilities are endless...
You owe the Oracle a twelve-step program.
References: the "man" pages for bash, and "Introduction to Shell Scripting - The Basics" by Ben Okopnik, LG #52.
A few months ago, I wrote a
review of an HTML-editor ported from
Windows: CoffeeCup. I found quite a few bugs in that version, and don't
actually know if a new one has been made available for Linux. I know that new
Windows versions have been popping up regularly. So the search has been going on
for a reliable and powerful HTML editor in which I can do all my HTML page
construction. I can say I've found a good candidate for such a beast. It isn't
Homesite--but then, it isn't finished yet.
Bluefish is in version 0.3.4 as
of this writing (early June 2000). So according to the version number it is very
early yet, but it is still quite a powerful editor.
As can be seen from the picture, it has all the button rows, tabs and menus
one could ask for. The standard button row is for creating a new document,
opening, saving, undo/redo, etc. All standard stuff. Then we've got a series of
tabs which are generally well laid out but unfortunately not configurable at
this time. Unfortunate, because the quick bar is supposed to contain the most
used tags - but lacks the Heading tags, which are on the Fonts tab. While
structurally logical, it is not user-friendly. While I'm on the subject of weird
things about the tabs, I'll mention an irritating bug that's been around for
some versions: the Form wizard. It's long, very, very long. I
have to scroll horizontally two screenfuls to find the close button. And
while the other Wizards are pretty useful (New Page, Font,
Tables, Frames, Lists, CSS, PHP, etc.), the Forms wizard is broken and
should be avoided. Even if it were working, it creates only the opening and
closing tags. While the button rows and wizards are good, you can also add tags
using the menus. These are well laid out, and I believe they contain tags
not available through the buttons and wizards. Another bug which has surfaced
in this version is that the cursor disappears from time to time, making it
hard to know exactly where you are in the text.
Speaking of menus, I can't say much about the help system. There isn't one,
at least not in the version I am running (there was one earlier?).
Even though that may be a disappointment, I didn't miss it much. One thing that
is missing is Imlib support, which is necessary for the Image wizard.
Imlib isn't installed by default on Red Hat. Without it you
have to write your own image tags, which is a pain (not as bad as writing your
own table tags, but still a pain). Fortunately, you can just install it off the CD
and then happily add images to your HTML documents in no time.
To summarize, Bluefish is a very strong HTML editor, with nice looks as
well. It will be interesting to see where this one is going. It is not Homesite
(the well known Allaire editor for Windows) but it might get there.
In the
last article, we installed Linux with
only those packages we absolutely needed. (If you have not read my previous
article, you should do so now, as it is the base on which this one builds.)
Now comes the detail work, turning your gateway into a fortress. The first thing
to understand is that there is no way to be completely secure. There is just not
enough time to do it all. Corporations employ huge IT departments whose sole
purpose in life is to secure their networks, and still they get cracked. Just
accept it and get on with your life. Our real goal here is to keep honest
people honest, keep the Script Kiddies out and slow the rest down, giving you
the opportunity to discover them. Ideally, this should be done right after the
clean install, before the system ever gets put on the Internet. This article
assumes you know something about Linux, how to install it, how to edit
various configuration files, and that you can log in as root.
I also assume you are setting up a firewall system and have
no intention of running DNS, DHCP, web, ftp or telnet server.
If you intend to run any of these services,
I recommend setting up separate machines. Set up a DMZ on your
network: a system which is secured but allows connections
from systems outside your network. This way if an intruder
does penetrate your server, he will have to start all over
to penetrate your firewall system, and you will hopefully have
discovered his break-in before he is able to get access to
your internal network.
In the world of Computer Security, Knowledge is Power. Frankly,
the Security Experts are always one step behind the Crackers:
most security issues are not discovered by the Experts
but by the Crackers, and are plugged only after they have been
exploited. You need to keep up to date on new problems; at the
very least you should be updating the packages as they come out.
Type "rpm -qa > packages.txt"; this gives you a list of the
packages and version numbers installed on your system. Then go to
Red Hat's web site and download the updated packages. While you
are there you should read the security advisories and implement
any changes they suggest. If you are really proactive, subscribe
to both the BugTraq and CERT mailing lists.
Since this article is aimed at the home cable modem user, I
will assume physical security is not a problem. If you have
children or a nosy babysitter, consider using the BIOS password
protection built into most computers.
Besides the root account and the special accounts, which I'll
go into in a moment, there should be only one user account. The
user and the root accounts should have good passwords. A good
password is one that is at least 8 characters long, has a mix of
small letters, capital letters and numbers, and is not a
dictionary word. It is also a good idea to change these passwords
from time to time, and never to write them on a sticky note
stuck to the monitor where everyone can see it. Use different
passwords on each computer on your network; that way, if one system
is cracked an intruder will still not have access to the other
systems on the network. Again, because password cracking takes
time, you will hopefully discover the cracker before he gets too
far.
Along the same lines, there are several special-purpose
accounts which are installed by default with most Linux
distributions. For our purposes these accounts are useless and
pose a security risk, so we will remove them using the userdel
command. The syntax for this command is "userdel username",
substituting username with the appropriate account name. The
accounts we want to remove are: adm, lp, sync, shutdown, halt,
news, uucp, operator, games, gopher, and ftp. We also want to
remove the associated groups with groupdel, the syntax is the
same. Groups to delete are: adm, lp, news, uucp, games, dip,
pppusers, popusers, and slipusers.
This is without a doubt the most important section.
Poorly-maintained configuration files are the highest risk factor on any
system. In this section you will be typing many of the same
commands over and over again, so this is a good opportunity to write
a shell script to make things easier. What we want to do, after
we are finished with each file, is to first make sure it's
owned by root; second, that the only account which can read and write to
it is root; and third, that it's unalterable even by root. This keeps the
files from being accidentally deleted or changed and also
prevents the file from being linked to, which could be a security
risk. Type "touch secure-it", then "chmod +x secure-it". Now
open the file in your text editor of choice and put these lines
in: (text version)
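A minimal sketch of what such a script could contain (chown and chattr only succeed when run as root, and chattr +i is the step that makes a file unalterable even by root):

```shell
#!/bin/sh
# secure-it - lock down a configuration file (a sketch)
# usage: secure-it filename
file="$1"
[ -n "$file" ] || exit 0          # no filename given; nothing to do
chown root:root "$file"           # first: owned by root
chmod 600 "$file"                 # second: only root may read or write it
chattr +i "$file"                 # third: unalterable, even by root (chattr -i undoes it)
```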
Now save the file and copy it to /usr/sbin by typing "cp
secure-it /usr/sbin". Now when we are finished with a file we
can lock it down simply by typing "secure-it filename".
The last step is to set up your system to warn you
of any changes. If any intruders do get in and
plant a Trojan or create a new account, we want the system to be
able to tell us what was altered. There are several good programs
available for this, the easiest to implement that I've found is
fcheck, which can be downloaded from
http://sites.netscape.net/fcheck/fcheck.html.
Follow the instructions for installing and configuring the
software; it is very straightforward. Once this is done, you
will want it to run at least once a day and redirect the results to
a file in the root directory. This can be done through crond;
to set up a cron job, type "crontab -e". This will open vi; now
type the following line:
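The entry might look something like this - the 4 a.m. run time, the install path and the report location are my assumptions; adjust them to match your own fcheck installation:

```
# run fcheck every morning at 4:00 and keep the report in root's home
0 4 * * * /usr/local/bin/fcheck > /root/fcheck-report 2>&1
```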
It is relatively safe to put the system on the internet. Once
this is done you will want to test your security.
Gibson Research Corporation
provides a port scanning service.
In a perfect world, all the ports should be in stealth mode,
meaning the ports do not respond to requests at all and will
appear as though there is no system at your IP address. In a
pinch the ports should be closed, meaning the port responds,
but will not take requests; closed ports are still vulnerable
to some types of attacks. Open ports are vulnerable ports;
if any of your ports are open, go back to the inetd.conf file,
make sure it is empty, check that apache, wu-ftpd or
similar are not installed, and review your ipchains settings
to ensure they are denying packets properly. It is a good idea to
do this regularly to ensure an intruder has not opened a port
for his personal use.
Again, as with my last article, I'd like to point out that this is
not the be-all and end-all of Linux security; it is only a
starting point. I have simplified things down to the very basics;
there are many more things which could be done. Whether or not
you should seek out these solutions depends on what you are
protecting. For a home user this will probably do just fine;
however, for even a small business with more machines and data to
protect, you should do more research and implement as much
security as possible. Better yet, hire a Network Security
Consultant to implement it for you.
In this series of articles I intend to explore the varying
implementations of strings in languages that are common on the Linux
platform. The first article will explore the regular expression
library provided with GNU libc. In future articles I hope to look at
other common libraries and languages - hashing functions in Java and
strings in KDE versus strings in Gnome.
Each language has its strengths and its weaknesses. I hope that by
doing a little grunt work on your behalf, I'll be able to give you a
brief overview of the abilities and weaknesses of the common languages
and their libraries with respect to string handling.
I won't be talking about internationalisation and localisation in this
series of articles, since those subjects are worthy of volumes of
study - not a short summary.
The GNU C library is the most basic system element on any Linux
installation from a programmer's perspective. Most higher level
libraries are based on libc, and most of what we think of as the "C
language" is really functions in libc.
Strings in C are just null terminated arrays of chars or wide
chars. This is the simplest and most efficient implementation of
strings in terms of computer resources, but probably the trickiest and
least efficient implementation in terms of programmer resources. Since
strings are either constants (i.e., literals) or pointers, the programmer
has the power to manipulate the strings down to the bit level and has
all kinds of opportunities to optimise their code (for example
this snippet). On the other hand, null termination of strings and
the absence of in-built length checking mean that problems such as
infinite loops and buffer-overflows are inevitably going to appear in
code.
The GNU C library is rich in string manipulation functions. There are
standard calls to copy, move, concatenate, compare and find the length
of a string (or a section of memory). In addition to these, libc also
supports tokenization and regular expression searches.
Regular expressions are a powerful method for searching for text that
matches a particular pattern. Most users will have first encountered
the idea of regular expressions while using the command line, where
characters such as '*' have a special meaning (in this case, matching
zero or more characters). To illustrate the power of regular
expressions and how they are used, we will implement a simple form of
grep.
Mygrep.c uses the powerful regex.h library for the task of searching
through a text file for a line that matches the given pattern.
Libc makes the use of regular expressions comparatively easy. Of
course, it would be much easier to use a language with regular
expression matching as part of its core definition (such as perl) for
this example, but the C library does have the advantage of easy
integration with existing code and maybe speed (although in languages
such as perl the regular expression matching is highly optimised).
If you examine the program listing, you will see that mygrep.c
consists of a main function that handles the user options and two
functions that perform the actual regular expression matching. The
first of these functions, logically, is the function do_regex(). This
function takes in as its parameters a pointer to a regular expression
structure, a string holding the pattern to search for and a string
holding the filename. The first task that do_regex() performs is to
"compile" the regular expression pattern into the format native to the
GNU library by calling regcomp(). This format is a data structure
optimised for pattern matching, the details of which are hidden from
the user. Next, the file to be scanned is opened, then the file handle
and the compiled regular expression are passed to match_patterns() to
execute the search and output the results.
Match_patterns() scans through each line of the file, looking for
patterns that match the regular expression. We begin scanning the
lines one by one - note that we have assumed that the lines are less
than 1023 bytes long (the array called "line" is 1024 bytes long and
we need one byte for the null termination). If the input is more than
1023 bytes long, then the line is wrapped over and interpreted as a
new line until the '\n' character is met. The function regexec() scans
the line for a set of characters that match the user specified
pattern. Every set of characters that matches the regular expression
forces regexec() to return 0, at which point we print out the line and
the line number that match. If a regular expression matches more than
once, then the line is printed out more than once. The offset from the
beginning of the line is updated so that we do not match on the same
pattern again.
This example, while fairly trivial, illustrates how powerful the GNU C
library can be. Some of the more salient features of the library that
we have used include the regular expression calls regcomp() and regexec().
Snazzy title, eh? Well, not snazzy, but informative. As it
suggests, this article (or tutorial, if you will) is all about
OOP in the computer language C++. Okay, let's get to the intros.
The name's Williams, Mike Williams. My mission? To teach novice
programmers, such as yourselves, about the art of programming.
Through the months, I'm hoping to take you through a variety of
programming techniques, starting right here, right now with C++.
Are you sitting comfortably? Then I shall begin.... OOP is undoubtedly one of the most complex programming
techniques to explain. In fact, it's not so much a 'technique'
as a whole new way of looking at programming itself.
There are entire books on the subject, and it's well beyond the
scope of this article to introduce you to every philosophy and
implication of OOP. To understand OOP, you must first understand
what programming was like before OOP. Back then, the basic definition of programming was this: a
program is a sequence of logical instructions followed by the
computer. And that's it. All well and good, but let's face it,
it's hardly inspiring. Until now, that is. It's been hiding in
the background for quite some time now, but OOP has finally taken
off. In an OO programming language, the emphasis is placed far
more on the data, or the 'objects' used and how the programmer
manipulates them. Before OOP, numbers were simply an address in
memory; a sequence of bytes that meant nothing. Now, however,
through OOP they have become far more than that. The program is
now a solution to whatever problem it is you have, but now it is
done in the terms of the objects that define that problem, and
using functions that work with those objects. Confused? Don't
worry, you won't need to understand OOP to use it within your
programs. Indeed, the best way to learn what OOP is all about is
through using it in your programming. All the examples within this article can be compiled in the
GNU C++ compiler. To invoke it, type "g++ <filename>" at the bash prompt. I'm assuming that you have a reasonably
up-to-date compiler, although it shouldn't make too much of a
difference if you don't. Oh, and by the way, you can't use the
GNU C compiler - it won't work (just thought I'd mention that.)
You will, of course, need a text editor. 'Emacs' is a very
powerful editor, and I suggest you use that. This article is aimed at people who already have a reasonable
understanding of the C++ language, but want to further that
understanding by learning about OOP in C++. If you're a complete
beginner, I suggest you read one of the hundreds of C++ tutorials
lying around on the internet. A good place to start would be http://www.programmingtutorials.com/. Good
luck. Hundreds of years ago, in Britain (specifically England),
there was civil unrest. People were angry - the poor people to be
more specific. They noticed that some people were richer than
they were, and they did not like it. What to do about this problem? How to
keep the people happy? Religion had already gone some of the way,
but even the promise of eternal utopia if the poor behaved
themselves in life didn't seem to work. Capitalism already had
sunk its powerful jaws into the world, and a new idea was needed
to keep the masses happy. That idea became known as 'class'. The
basis was that if everyone understood their place and role in
society, they would feel secure and happy, and would not
challenge the authority. It worked. There was the upper class
(who were rich), the middle class (who were not so rich), and the
poor sods class (who could barely afford to live). Quite unfair,
but nevertheless it became reality. What has this got to do with
C++ you ask? Well in C++, all Object Orientation comes in the
form of classes. But enough of that; we're programmers, not
social scientists. Up to this point in your use of C++, you've used only the
basic types of variables: int, float, bool, double, and so
forth. These are called simple data types. However, they are very
linear in what we can 'model' with them. Let's take an example.
Let's say we wanted to represent a real life object, say a house.
Obviously, we would have to examine the various attributes of a
house: the number of rooms it has, its street number and whether
or not it has a garden (okay, so there are more attributes, but I
won't go into them now). In C++, we could show the house like
this: And it would work fine for this particular example. But
suppose we wanted many houses? Suppose we wanted to make the
program more complicated than this? Suppose we wanted to define
our own data type to represent the house. C++ allows us
to do this through the use of classes. Continuing with our example of the house, let's have a look at
how we could 'model' a house using a C++ class: Let's take a look at what each line does. The second line
declares a new class and calls it 'house'. We then open the class
definition with the curly brace '{'. The next line declares that
all the 'members' (any data type that belongs to the class) that
follow it to be 'public' (I'll explain what this means later). We
then go on to declare two variables of the basic type 'int'
(integer). The next statement declares the garden member to be of
type bool (boolean - either a 1 or a 0). Finally, we end the class
with closing curly brace '}' and a ;. We have now declared a new
data type or class called 'house', which we can use within our
program. To use it, we start the main() function, which is where
the execution of the program begins and ends. The first thing we
do in the function is to declare the variable my_house to be of
type house, which is the class we defined at the beginning of the
program. Now this variable gains new dimensions; it has
many more attributes than a simple int or float type. From our
class definition, we gave the house class three variables:
number, rooms and garden. The variable we just declared,
my_house, has all of these attributes. In the second line of our
main function, we define the number member of the object my_house
to be of value 40. We then go on to define the values for the
other two data members of my_house, before ending the function
with the return value 0. At this point, you're sitting there wondering what the big
fuss is about these classes. After all, wouldn't it be simpler to
use the non-OO method? Well, it would in this particular
instance, since we're only talking about a very small program
that does very little. However, once you start to write more and
more complicated programs, you will find not only that classes
are useful, but that they are essential. It's all well and good being able to declare some variables,
but how do we make use of them? The answer comes of course in
functions. In C++, classes can have member functions. These are
declared in a similar fashion to member variables. To illustrate
how they work, let's take an example. A square perhaps. First we
must model the data based on the attributes of a square. It has a
length, a width, and of course an area. Of course, you find the
area of a square by multiplying the length by the width. To do
this, we could use a member function: This example should output the number 10. The square class is
very similar to the house class we saw earlier. Firstly, we
declare two member variables of type int: length and width. We
then go on to declare a function, area(), which will return an int
value. You declare the function exactly as you would outside a
class. In this case, we make area() return the value of the
member variables length and width when multiplied. We then end
the class, and start with the main function, which should pretty
much explain itself. Of course, if you had a lot of functions to put in the class,
they would all become rather messy. To overcome this, we use
something called the scope resolution operator. Let's say we
wanted to declare the area() function outside of our original
function definition. Firstly, we would declare the class square,
and in it the function area, as shown above. However, we would
not insert the function code in at this point, so the class
definition would look like this: To define the member function area() outside of the class
defintion, we would write this: This would produce the same output. While we're on the subject of member function definitions, you
should learn the difference between public and private members of
a class. Members that are declared to be public can be accessed
from any function within the entire program. The simplest way to
explain is with an example. Suppose we declared the class square
just like it was above, and tried to access the length variable
from within the function main, which is not a member function of
the class: The compiler would have no problem with this, and would output
the value 2. However, let's say we change the square class so it
looked like this, and all the members were private: If we tried to run the function main() shown above, the
compiler would generate an error. Private members can only be
accessed through member functions. It gets a bit tedious declaring the value of each member
variable of a class using the method shown below: For each member of mysquare, we have to seperately declare and
initialize its value. Of course, not only is this tedious, but
it's also easy to overlook the initialization of each member,
particularly when your classes become more complex. One way
around this is to use a class constructor. A class constructor is
a function that is executed whenever an object of the class is created: This would produce the output 10. Firstly, we declare the
class constructor by giving it the same name as the class itself.
From now on, this function will execute itself whenever the
class is used. We declare it so that it takes two values, both of
type int. The next change comes in the function main(). Whenever
we declare an object to be of type square, we add a function
definition. In this case, we gave the variables length1 and
length2 the values 5 and 2. The constructor then takes these two
variables, and assigns their values to the member variables
length and width, and, as they say, the rest writes itself. It goes without saying that you can use arrays with classes.
Obviously, this opens up scope for declaring far more
variables in a shorter time. There isn't a huge amount to go
through on this subject, so let's take a simple example: There's nothing really complicated within this example, so I
won't go through it. Obviously, you can do a lot more with
arrays, and it doesn't take a genius to work out other ways to
use them. Well that's all for this month. If I get a chance, next week
I'll continue this very topic, and go further into OOP
programming with C++ to look at ideas such as pointers, class
destructors, inheritance and organizing your program code into
files. Happy programming! Oh, and one more thang... If you have any comments/criticisms/flames
about this article [or life in general], please send them to me. I'll be more than happy to read them and respond,
perhaps with childish name calling. Who knows.
And now for something that's completely non-Linux.
My Ireland/UK trip was a blast. Dublin's bridges remind me of
St Petersburg, and the streets are bustling at all hours. I saw The Business
(an oi band), and then went to a 3-day scooter rally in Carlow (a hundred miles
SW). Neither of these events were planned--I just happened to be at the right
place at the right time. In Belfast I took the black taxi tour of the
political murals, bought a book about the history of the Troubles, and hooked
up with the singer of another oi band, Runnin' Riot. Then I skipped across the
water to Scotland.
Edinburgh is amazing! I've been in six countries in
Europe, but have seen no city that matches Edinburgh. There's a drained loch
in the center of town, and next to it, high up on a hill, is the castle. I
connected with an old friend and met several new ones. The four days I
was in Edinburgh were not nearly enough.
Edinburgh also has a cybercafe that's open 24 hours, costs only 1 pound
for 2-4 hours, and any minutes you don't use can be applied later using their
anonymous logins. Plus it has five hundred PCs with LCD screens. I wouldn't
mind seeing more cybercafes like that. But it must've cost the owner a bundle.
Then I spent a night each in Manchester and Cambridge, and five days in
London. I'd been in London once before, so I knew what to expect. Still, the
city was a bit big for me, and hard to get a handle on. I spent some time in
Camden Town, took a picture of the Elephant & Castle statue,
bought lots of clothes at The Merc and Lonsdale stores (cause you can't get
that stuff in the States, at least not in the Northwest), and my two friends
from Edinburgh and Cambridge came down one evening. Then I flew back to
Vancouver and caught the train to Seattle. Just in time to begin the July
Linux Gazette!
Michael Orr
Intel i810
Mon, 26 Jun 2000 21:32:56 -0700 (PDT)
From: GregV <Kvgov@aol.com>
[Forwarded from The Answer Guy column. -Ed.]
Disk-space usage of Red Hat 6.0 & 6.2
Mon, 26 Jun 2000 21:32:56 -0700 (PDT)
From: Edward Livingston-Blade <sb:sbcs@bigfoot.com>
Directory      Disk Usage (recursive)   % of Installation
/lost+found    12 KB          0.00%
/dev           111 KB         0.01%
/etc           2621 KB        0.25%
/tmp           17 KB          0.00%
/var           22,858 KB      2.21%
/proc          1 KB           0.00%
/bin           5448 KB        0.53%
/boot          6581 KB        0.64%
/home          3131 KB        0.30%
/lib           30,454 KB      2.95%
/mnt           3 KB           0.00%
/root          8 KB           0.00%
/sbin          4364 KB        0.42%
/usr           957,156 KB     92.68%
/.automount    1 KB           0.00%
/misc          1 KB           0.00%
/              1,032,768 KB   100.00%(1)

Directory                  Disk Usage (recursive)   % of Installation   % of /usr
/usr/X11R6                 72,925 KB      7.06%     7.62%
/usr/bin                   92,961 KB      9.00%     9.71%
/usr/dict                  404 KB         0.04%     0.04%
/usr/doc                   156,448 KB     15.15%    16.35%
/usr/etc                   1 KB           0.00%     0.00%
/usr/games                 48 KB          0.00%     0.01%
/usr/include               11,129 KB      1.08%     1.16%
/usr/info                  6357 KB        0.62%     0.66%
/usr/lib                   266,192 KB     25.77%    27.81%
/usr/local                 20 KB          0.00%     0.00%
/usr/man                   25,169 KB      2.44%     2.63%
/usr/sbin                  11,142 KB      1.08%     1.16%
/usr/share                 243,235 KB     23.55%    25.41%
/usr/src                   51,959 KB      5.03%     5.43%
/usr/libexec               1141 KB        0.11%     0.12%
/usr/i386-glibc20-linux    12,319 KB      1.19%     1.29%
/usr/i386-redhat-linux     237 KB         0.02%     0.02%
/usr/cgi-bin               39 KB          0.00%     0.00%
/usr/i486-linux-libc5      5429 KB        0.53%     0.57%
/usr                       957,156 KB     92.68%(1) 100.00%(1)
Directory      Disk Usage (recursive)   % of Installation
/lost+found    16 KB          0.00%
/proc          4 KB           0.00%
/var           15,256 KB      1.10%
/tmp           32 KB          0.00%
/dev           124 KB         0.01%
/etc           5896 KB        0.43%
/bin           5760 KB        0.42%
/boot          2452 KB        0.18%
/home          10,388 KB      0.75%
/lib           21,232 KB      1.54%
/mnt           12 KB          0.00%
/opt           4 KB           0.00%
/root          28 KB          0.00%
/sbin          5184 KB        0.37%
/usr           1,316,324 KB   95.19%
/.automount    4 KB           0.00%
/misc          4 KB           0.00%
/tftpboot      48 KB          0.00%
/              1,382,772 KB   100.00%(1)

Directory                  Disk Usage (recursive)   % of Installation   % of /usr
/usr/X11R6                 85,540 KB      6.19%     6.50%
/usr/bin                   108,556 KB     7.85%     8.25%
/usr/dict                  408 KB         0.03%     0.03%
/usr/doc                   164,712 KB     11.91%    12.51%
/usr/etc                   4 KB           0.00%     0.00%
/usr/games                 52 KB          0.00%     0.00%
/usr/include               18,200 KB      1.32%     1.38%
/usr/info                  6776 KB        0.49%     0.51%
/usr/lib                   412,912 KB     29.86%    31.37%
/usr/local                 80 KB          0.01%     0.01%
/usr/man                   20,280 KB      1.47%     1.54%
/usr/sbin                  16,860 KB      1.22%     1.28%
/usr/share                 384,752 KB     27.82%    29.23%
/usr/src                   70,132 KB      5.07%     5.33%
/usr/libexec               2100 KB        0.15%     0.16%
/usr/i386-glibc20-linux    14,052 KB      1.02%     1.07%
/usr/i386-redhat-linux     252 KB         0.02%     0.02%
/usr/kerberos              5152 KB        0.37%     0.39%
/usr/boot                  8 KB           0.00%     0.00%
/usr/i486-linux-libc5      5492 KB        0.40%     0.42%
/usr                       1,316,324 KB   95.19%(1) 100.00%
Tips in the following section are answers to questions printed in the Mail
Bag column of previous issues.
ANSWER: Missing root password
Tue, 30 May 2000 21:47:43 -0400
From: Sean <snmjohnson@iclub.org>
Pierre Abbat <phma@oltronics.net> writes:
ANSWER: Limiting "Public Interfaces" on Shared Libraries
Tue, 30 May 2000 23:56:27 -0400
From: Steven G. Johnson <stevenj@alum.mit.edu>
ANSWER: calculate cpu load
Wed, 31 May 2000 16:35:02 +0200
From: Ernst-Udo Wallenborn <wall@phys.chem.ethz.ch>
I would like to know how one can calculate cpu load and memory used by
processes as shown by 'top' command. It would be nice if anyone can
explain me how you could do these by writing your own programs, or by
any other means.
ANSWER: Linux and the love bug
Wed, 31 May 2000 16:39:48 +0200
From: Vic Hartog <hartog@best.ms.philips.com>
ANSWER: resolution
Wed, 31 May 2000 12:06:01 -0400
From: Steven W. Orr <steveo@world.std.com>
My comuter under linux redhat xwindow will only run 300x200 graphics.
Even if I hit CTRL ALT + , it wont change. I have a SiS620 Card with
8mb. Can you please help. I have spent a lot of time on the internet, It
seems other people have the same problem but no one can help.
ANSWER: Getting the most from multiple X servers - using startx script
Sat, 03 Jun 2000 14:37:14 -0400
From: James Dahlgren <jdahlgren@netreach.net>
display=:0
let DISP=0
while ls -a /tmp/ | grep -q "\.X$DISP-lock" ; do
let DISP+=1
done
echo "Using Display:$DISP.0"
display=:$DISP
ANSWER: 300x200 graphics
05 Jun 2000 21:23:16 +0200
From: Guy "Iggy" Geens <ggeens@iname.com>
Option "DontZoom"
ANSWER: Energy star support
Thu, 8 Jun 2000 14:32:47 -0400
From: Pierre Abbat <phma@oltronics.net>
ANSWER: How to properly configure mail on home machines
Fri, 16 Jun 2000 13:59:39 -0400
From: Mark J Solomon <msolomon@nuomedia.com>
ANSWER: TOP Calculations
Sun, 25 Jun 2000 07:16:03 -0700
From: Mark Davis <chaos@glod.net>
ANSWER: Success story (Was: DNS for home mail not working)
Mon, 26 Jun 2000 16:05:45 +0300
From: Alexandr Redko <redial@tsinet.ru>
This page written and maintained by the Editor of the Linux Gazette.
Copyright © 2000, gazette@ssc.com
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
0800-LINUX: Creating a Free Linux-only ISP, Part II
By Carlos Betancourt
News and Report from Korea
subscribe open-isp
The web page is the place to go for news related to the project, announcements, statistics about
dial-up growth in different countries, and core documents of the project. We are setting up a
web discussion forum, so you can leave your comments on the web site.
Well, we didn't have time for a honeymoon, as I flew to Korea the next day. I came back from Seoul
on June 17th with a killing case of jet-lag, had the wedding party all day long on the 18th, and then
left the next day for a one-week honeymoon in Barcelona, Spain. A very intense month. Believe me.
I was invited by the LinuxGreenhouse to join them at
Global Linux 2000, during June 14-17.
There, I had the opportunity to meet people from the FSF, Linux start-ups, hardware manufacturers,
Gnome Hackers, Gimp Artists, Korean Linux companies, well-known Linux distributors and integrators,
as well as Free Software personalities, to name just a few. There was a lot of excitement and
enthusiasm. I had meetings with several companies' representatives, presenting the project and
answering all their questions about how to profit by supporting the project.
For instance, it would be a great opportunity for Korean companies to expose their hardware solutions
to the European and American markets. There are lots of small and medium hardware integrators/manufacturers
who focus only on the Asian market, due to the high costs of setting up offices in Europe or North America,
not to mention the huge publicity and public-relations costs they would have to pay to compete in such
markets.
By supporting the Open ISP project with their hardware (other kinds of contribution are welcome too),
they would receive free publicity from the success of the project, becoming one of our Sponsors.
They are building all kinds of smart hardware solutions, with Linux of course, and investing in this project
would give them an opportunity to demonstrate, against our networking requirements, how versatile
their hardware is. Instead of spending their money on advertisements and offices overseas, they could invest
it in the Open ISP, getting the market exposure they need to win new clients.
And this is just a subset of the benefits they would get by sponsoring the project.
One of the most frequently asked questions, in general, was: how are we going to generate money
to pay for the users' phone calls? There are several ways to achieve that goal, some of them
already outlined in the Services section
of the web site. In the early stages of the project I was thinking only of Belgium, the country
where I live, as the place to create the Open ISP. This is a small country, but one with a
high rate of growth in dial-up access, and with expensive phone bills for Internet users. However, after receiving mail
from all around the world, I can see that not only is there a lot of interest in other countries
in making this project happen, but there are also new opportunities to make the project a success
if it's done in different places.
In the case of the European Union, we can receive funding
from the government itself. The EU has launched the
eEurope Initiative, which "proposes ambitious targets to bring the benefits of
the Information Society within reach of all Europeans." As you can read in the
Draft Action Plan, the
Open ISP's project vision fits perfectly well within their goals. The EU, through the
IST programme, has a
budget of 3.6 billion Euros to "Promote a user-friendly information society".
New Service Ideas
One of the most exciting ideas developed during the Global Linux 2000 week was another new way
to generate the necessary money to pay for the phone calls. Maybe some of you have already read
about the i-opener and/or the
Virgin Webplayer.
For those of you who don't know about them, these are flat screen Internet
appliances with built-in 56Kbps modems that let you surf the web and check your e-mail; no hard disk, no floppy.
In the case of the i-opener, it costs $99 and you have to use Netpliance's own ISP.
You sign an agreement to use their service when you purchase it, so you
cannot use it with another ISP, and if you want to use it for another purpose you still
have to pay for the monthly service.
In the case of the Virgin Webplayer, the ISP is Prodigy, and you pay a yearly fee and agree to three years
of service. If you are one of the first 10,000 people to sign up, you get the first year for free,
but in order to qualify you must match an undisclosed consumer profile which
is uncovered through a series of personal questions that "coincidentally" include inquiries
about your musical taste and travel habits. Other conditions: you have to use the appliance 10 hours per month,
which implies that YOU WILL HAVE TO EAT A LOT OF ADS. BTW, if you cancel at any time before
the 3-year period is up, there is a hefty penalty.
The good news is that there's a Korean company willing to donate the hardware for our own
Internet Appliance. The people at Henzai are working on an
embedded version of Gnome with a small footprint for embedded devices, and I have already consulted one
of Henzai's officers about the possibility and requirements of using it in our IA.
So, for a low monthly fee, we can provide a Web device that brings people into the Internet
and Linux revolution! With the prohibitive prices of Internet access nowadays, a lot of people are
missing the Internet revolution. People unwilling to buy a computer, or without the money to do so,
could benefit from free Internet access and a new computer, in one shot. And Linux, our beloved
Free Operating System, will be in the middle of it. Imagine all the implications of this.
The above-mentioned Internet Appliance companies are shooting themselves in the foot. They have a
great idea, and they could have a profitable business model, but they will only reach a small fraction
of the market. The privacy implications of their business model are going to drive away potential
clients. Also, the hacker community can't benefit from this hardware, as it is tied up in their long-term
agreements. When the i-opener was launched, lots of
hackers bought it without signing any service agreement, and
installed Linux on these machines. As Netpliance was losing money on the hardware, they stopped
selling it without the service agreement. This led to a long love/hate history
between the hacker community and them.
After that, Netpliance rectified its position and embraced the hacker community, mainly motivated by
Kalin R. Harvey's article analyzing
the situation. In the case of the Webplayer, hackers cannot modify it, because they would be
breaking the 10-hours-per-month-online condition.
Fortunately we are not a company, but a community service. We won't be losing money on hardware,
and all the money we get from the service can be dedicated to paying for the phone calls. We can provide
this Internet Appliance with a better set of hardware and software (e.g. an integrated ethernet port), offering more flexibility to fit users' needs;
and if someone wants to buy it
for further modification, we can sell it at the nominal price, giving the money back to our sponsor
and keeping a small fraction for the project.
We won't need to sell advertisements, so people won't be invaded, and users' personal information won't
end up in the hands of third parties.
This way, people really are going to join the Internet revolution, thanks to Linux and Free Software.
Of course, if you happen to already have a computer, you will receive the service 100% free of charge, if you
use Linux on it. It's up to new users to decide.
Families with no computers at home, or on a low budget, will now have an option. Imagine all
the new kids getting involved in the Linux community!
With projects like OFSET and their
educational software this machine could become a low-cost learning/homework machine.
The background philosophy of Free Software will be unveiled to a broader population.
No Internet, No Linux: The Phantom Menace
During this month I have discovered new dangers to our community. As the old slogan says:
"The network is the computer". No one denies the importance of using the Internet. It is a basic need
nowadays, like the telephone or public transportation. But who provides us with Internet access?
Mainly for-profit ISPs. I'm not against them, but lately the market has become a little overpopulated with
ISPs, and in order to stay in business they are developing very interesting, even clever, ways to
be more attractive to new users and stay profitable. And prices are going down, which is a very good thing.
However, they are doing their business to the detriment of my community, our Linux community. They are
not only ignoring us, they are cutting off the options for new people to join Linux.
Governments are making efforts to stimulate competition by opening up the communications market. They
are doing so in order to bring prices down through market forces; then more people can join the Internet.
And it's working very well.
Lots of ISPs are emerging, others are merging into big world-wide corporations, and some are disappearing.
All of their strategies are based on MS-Windows support. Of course, we are smart enough to still be able to
connect to their services; there's no doubt about it. But what about new computer users, those who
still don't know about the existence of Linux and Free Software? If they want Internet access, all
they get are Windows-only CD-ROMs to set up their accounts in a friendly, easy way. Well, first they must be
able to afford the phone bills generated by Internet access.
Most of them have no idea until
the next month, when the evil and huge bill arrives. It-is-painful. I know a good number of people
in this situation. But wait, don't forget that all those ISPs and CD-ROMs claim "Free Internet Access".
It has become a trend not to charge for the ISP service, and a lot of companies are doing so, thus
claiming "Free Internet Access".
Instead of charging for their service, they get their revenue by forcing the user to see advertisements, or by generating
phone calls: they get a piece of the cake from the monthly phone bill the customer pays. This happens in countries
where local phone calls are not free, as is the case in most European and Latin-American countries.
In the US, for instance, strategies vary, since local phone calls are free. That's why services like
the Virgin Webplayer and the i-opener
are only available in the US. And, sure, you don't pay for the phone calls, but then you have to pay
for the ISP service.
Ad-sponsored connections are more common every day. What you have to do to get your "Free"
Internet access is load a dialing application which displays ads while you surf. They have to
request personal data from you, of course, in order to provide "personalized" ads. Even if you feel OK with
the privacy implications of this, and with the ads constantly displayed before you, what Operating
System do you think you have to use? Yeah, MS-Windows. They don't provide a Linux version, so if you
want to use their service, you need to use MS-Windows. You just don't have an option.
This is a big elephant right in front of our noses, and we have to acknowledge it to see the great
danger it poses to our movement, and to everyone involved in it, including companies.
Ad-sponsored ISPs are growing very fast. Dangerously fast.
"Fortunately", people still have to pay for
phone calls, so they must think twice before using those services, or just choose another kind of "Free" ISP.
However, as part of an aggressive move, some are now even providing
FREE PHONE CALLS. It's happening in the UK right now, as you read this article, and the
number of such ISPs is growing. With regular ISPs, even though they offer Windows-only CD-ROMs (BTW, distributed
freely in large quantities in all kinds of places: magazines, shops, mail, etc.), we can still use
their service with Linux. But with no ads client available for Linux, how do you think we are going
to use those services? The ISP world is changing very fast, and the rules are changing as well.
If we don't take action now, very soon our options will be virtually eliminated. We will still be able
to keep using Linux, but without Internet access.
[Of course, dial-up access is not the only way to connect to the 'Net, but until other technologies (such as cable)
are widespread, phone links are the most common and universal medium for connecting to the Internet from home.]
Course of Action
- Ease of replication around the world.
- High Quality service.
- Flexibility: utilization of all different kinds and brands of donated hardware.
In order to achieve this, we must create an Engineering Task Force to discuss and develop
the technical issues.
Provide more information about different countries' market conditions and ISP offers available.
A good starting point is the EuroISPA
(Internet Service Providers Association of Europe) and the
Internet Associations and Organizations directory.
Please, don't hesitate to send more pointers to relevant information.
Final Thoughts
The discussion has not ended here. New issues will arise while others are being resolved. There is a
lot of food for thought. And it is time to take action ourselves. We cannot leave the way
we use our bandwidth in the hands of chance. We must also sow the seeds for the future generations
of our community.
For Linux-related companies, it's a great opportunity to invest in the growth of their consumer base.
Reaching more people, penetrating new markets, and guaranteeing the future existence and success of
the community they do business within will assure their own future presence in the market.
This is a subject concerning all of us in the Free Software world: individuals and companies. The
freedom we care about should not only be achieved within our personal computers, but also in the
way we interconnect these computers. Linux is an achievement of the communications era,
a child of the Internet, and bandwidth is our oxygen supply.
Let's just make sure that supply keeps feeding us, and the new souls to come.
Copyright © 2000, Carlos Betancourt
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
HelpDex
By Shane Collinge
Copyright © 2000, Shane Collinge
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
Interview with Ronnie Ko of 32BitsOnline
By Fernando Ribeiro Corrêa
This and other interviews are on the
OLinux site
Copyright © 2000, Fernando Ribeiro Corrêa
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
Journal File Systems
By Juan I. Santos Florido
INTRODUCTION
As Linux grows up, it aims to satisfy the needs of different users and potential
situations. During recent years, we have seen Linux acquire different
capabilities and be used in many heterogeneous situations. We have Linux
inside micro-controllers, Linux router projects, single-floppy Linux distributions,
partial 3-D hardware speedup support, multi-head XFree86 support, Linux
games and a bunch of new window managers as well. Those are important features
for end users. There has also been a huge step forward for Linux
server needs, mainly as a result of the 2.2.x Linux kernel switch. Furthermore,
sometimes as a consequence of industry support, and at other times leveraged by
Open Source community efforts, Linux is acquiring the most important
features of commercial UNIX and large servers. One of these features is the
support of new file systems able to deal with large hard-disk partitions,
scale up easily to thousands of files, recover quickly from crashes, increase
I/O performance, behave well with both small and large files, decrease
internal and external fragmentation, and even implement new file system
abilities not yet supported by the older ones.
GLOSSARY
Internal fragmentation
External fragmentation
Extents
B+Trees
B+Tree diagram: the leaf node's keys are
ordered within the tree improving scan times, since the scan is no longer
sequential. Leaf nodes are chained using pointers to each other.
[In the diagram, the keys are file names. The bottom row above the
red boxes contains a key for every file in the directory: these are the leaf
nodes. Above these are the internal nodes, keys that have been chosen by the
system to make finding other keys faster. -Ed.]
UNIX File System (UFS)
Virtual File System (VFS)
THE JOURNAL
What is a Journal File System?
How does it work?
KNOWN PROBLEMS--SATISFYING THE SCALABILITY NEEDS
The UNIX File System (UFS) and ext2fs were designed when hard disks
and other storage media weren't as big in capacity. Growth in storage-media capacity
led to bigger files, directories and partitions, causing several
file-system-related problems. These problems are a consequence of the internal
structures those file systems are built on. Although those structures were adequate
for the old average file and directory sizes, they have proven inefficient
for the new ones.
New-generation file systems have been designed to overcome those problems,
with scalability in mind. Several new structures and techniques have
been included in these file systems. Therefore, we are going to explain in more
depth the problems described above and the file-system techniques used to
overcome them.
Solving the inability
Currently fixed 4KB
Avoiding inadequate use
The free blocks structure
Large number of directory entries
Large files
(small files)
Ext3fs
isn't a file system designed from scratch; it is layered over ext2fs, so it doesn't
support any of the techniques above. The point is that Ext3fs provides
ext2fs with journaling support, while preserving backwards compatibility.
OTHER IMPROVEMENTS
There are other limitations on "UFS-like" file systems. Amongst
these are the inability to manage sparse files as a special
case, and the fixed number of i-nodes problem.
Sparse files support
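What sparse-file support means is easy to see from the shell (this illustration is mine, not from the article): a file written with a "hole" in it has a large apparent size, but the file system only allocates blocks for the data that was actually written.

```shell
# Write a single byte at offset 10 MB - 1; everything before it is a
# "hole" the file system does not need to store.
dd if=/dev/zero of=/tmp/sparse.img bs=1 count=1 seek=10485759 2>/dev/null

ls -l /tmp/sparse.img    # apparent size: 10485760 bytes
du -k /tmp/sparse.img    # blocks actually allocated: a few KB at most
```

On a file system without sparse-file support, the same command would force all 10 MB to be allocated.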
The ReiserFS internal fragmentation solution
Dynamic i-node allocation
REFERENCES
File system home pages
Bibliography
Copyright © 2000, Juan I. Santos Florido
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
How To Make A Hotkey On The Linux Console
--or--
Why Bill Gates can have my keyboard when he pries it from my cold, dead hands
By Bryan Henderson
Introduction
Background - How keystrokes become a command
Making a hotkey
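On the console, a hotkey is typically made with loadkeys(1): bind a keycode to an unused function-key symbol, then give that symbol a "string" definition. A minimal sketch (the keycode and the command are my own illustration; keycode 88 is F12 on many PC keyboards, and showkey(1) will tell you yours):

```
! hotkey.map -- keymap comments start with '!' or '#'
keycode 88 = F70
string F70 = "ls -l\n"
```

Loading it on a virtual console with `loadkeys hotkey.map` makes the bound key type "ls -l" followed by Enter, as if you had typed it yourself.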
Things That Don't Work
More information
Copyright © 2000, Bryan Henderson
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
The Deep, Dark Secrets of Bash
By Ben Okopnik
"There are two major products that come out of Berkeley: LSD and UNIX. We
don't believe this to be a coincidence."
-- Jeremy Anderson
In the bash man page lurk terrible things, not to be
approached by the timid or the inexperienced... Beware, Pilgrim: the
last incautious spelunker into these mysterious regions was found, weeks
later, muttering some sort of strange incantations that sounded like
"nullglob", "dotglob", and "MAILPATH='/usr/spool/mail/bfox?"You have
mail":~/shell-mail?"$_ has mail!"'"
(He was immediately hired by an Unnamed Company in Silicon Valley for
an unstated (but huge) salary... but that's beside the point.)
Parameter Expansion
The parameter-expansion tools in bash are rather minimal
as compared to, say, those in perl or awk: in my best
estimate, they're not intended for serious processing, just "quick and dirty"
minor-task handling. Nevertheless, they can be very handy for that purpose.
fn=$(basename $fnm) # We need _just_ the filename
[ -z ${fn##[A-Z]*} ] && MAX="-max" # Set the "-max" switch if true
xv -root -quit $MAX $fnm & # Run "xv" with|without "-max"
# based on the test result
Confusing-looking stuff, isn't it? Well, part of it we already know:
the [ -z ... ] is a test for a zero-length string. What about the
other part, though?
In order to 'protect' our parameter expansion result from the cold,
cruel world (e.g., if you wanted to use the result as part of a
filename, you'd need the 'protection' to keep it separate from the
other characters), we use curly brackets to surround the whole
enchilada.
$d is the same as ${d}
except that the second variety can be combined with other things
without losing its identity - like so:
d=Digit
echo ${d}ize # "Digitize"
echo ${d}al # "Digital"
echo ${d}s # "Digits"
echo ${d}alis # "Digitalis"
Now that we have it isolated from the world, friendless and all
alone... oops, sorry - that's "_shell_ script", not "horror movie
script" - I lose track once in a while... Anyway, now that we've
separated the variable out via the curly braces, we can apply a few
tools incorporated in bash
(capable little bugger, isn't it?) to
perform some basic parsing of its value. Here is the list:
(For this exercise, let's assume that $parameter="amanuensis".)
EXAMPLE: ${#parameter} = 10
EXAMPLE: ${parameter#*n} = uensis
EXAMPLE: ${parameter##*n} = sis
EXAMPLE: ${parameter%n*} = amanue
EXAMPLE: ${parameter%%n*} = ama
EXAMPLE: ${parameter:7} = sis (the substring starting at 'offset' 7)
EXAMPLE: ${parameter:1:3} = man
EXAMPLE: ${parameter/amanuen/paralip}
= paralipsis
EXAMPLE: ${parameter//a/A} = AmAnuensis
(For the last two operations, if the pattern begins with #, it will
match at the beginning of the string; if it begins with %, it will match
at the end. If the string is empty, matches will be deleted.)
[ -z ${fn##[A-Z]*} ]
Not all that difficult anymore, is it? Or maybe it is; my thought
process, in dealing with searches and matches, tends to resemble
pretzel-bending. What I did here - and it could be done in a number
of other ways, given the above tools - is to match for a max-length
string (i.e., the entire filename) that begins with an uppercase
character. The [ -z ... ] returns 'true' if the resulting string is
zero-length (i.e., matched the [A-Z]* pattern), and $MAX is set to
"-max".
Odin:~$ experiment=supercallifragilisticexpialadocious
Odin:~$ echo ${experiment%l*}
supercallifragilisticexpia
Odin:~$ echo ${experiment%%l*}
superca
Odin:~$ echo ${experiment#*l}
lifragilisticexpialadocious
Odin:~$ echo ${experiment##*l}
adocious
...and so on. It's the best way to get a feel for what a certain tool
does; pick it up, plug it in, put on your safety glasses and gently
squuueeeze the trigger. Observe all safety precautions as random
deletion of valuable data may occur. Actual results may vary and
*will* often surprise you.
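The experiments above can be collected into one runnable script; a consolidated version of mine, using the same test word as the article:

```shell
#!/bin/bash
# The expansion operators described above, applied to one string.
parameter="amanuensis"

echo "${#parameter}"                  # length: 10
echo "${parameter#*n}"                # shortest prefix match removed: uensis
echo "${parameter##*n}"               # longest prefix match removed: sis
echo "${parameter%n*}"                # shortest suffix match removed: amanue
echo "${parameter%%n*}"               # longest suffix match removed: ama
echo "${parameter:7}"                 # substring from offset 7: sis
echo "${parameter:1:3}"               # offset 1, length 3: man
echo "${parameter/amanuen/paralip}"   # first-match replace: paralipsis
echo "${parameter//a/A}"              # global replace: AmAnuensis
```

Running it prints each result on its own line, in the same order as the EXAMPLE list.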
Parameter State
bash provides convenient shortcuts for such
occasions:
(Here, we'll assume that our variable - $joe - is unset or null.)
EXAMPLE: ${joe:-mary} = mary ($joe
remains unset.)
EXAMPLE: ${joe:=mary} = mary ($joe="mary".)
EXAMPLE:
Odin:~$ echo ${joe:?"Not set"}
bash: joe: Not set
Odin:~$ echo ${joe:?}
bash: joe: parameter null or not set
EXAMPLE:
Odin:~$ joe=blahblah
Odin:~$ echo ${joe:+mary}
mary
Odin:~$ echo $joe
blahblah
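Pulling those together (a small script of mine, not from the article; ${joe:?} is left out because it aborts the script when it fires):

```shell
#!/bin/bash
# The ":-", ":=", and ":+" operators from the list above.
unset joe

echo "${joe:-mary}"    # substitute a default: mary ($joe stays unset)
echo "${joe:=mary}"    # assign the default:   mary ($joe is now "mary")
echo "$joe"            # mary

joe=blahblah
echo "${joe:+mary}"    # alternate value: mary ($joe keeps its own value)
echo "$joe"            # blahblah
```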
Array Handling
bash has a basic mechanism for
handling arrays, which allows us to process data that needs to be indexed, or at
least kept in a structure that allows individual addressing of each of its
members. Consider the following scenario: if I have a phonebook/address list,
and want to send my latest "Sailor's Newsletter" to everyone in the "Friends"
category, how do I do it? Furthermore, say that I also want to create a list of
the names of the people I sent it to, or do some other processing; as soon as it
becomes necessary to split the data up into fields by length, arrays become one of the
very few viable options.
Name Category Address e-mail
Jim & Fanny Friends Business 101101 Digital Dr. LA CA fr@gnarly.com
Fred & Wilma Rocks friends 12 Cave St. Granite, CT shale@hill.com
Joe 'Da Fingers' Lucci Business 45 Caliber Av. B-klyn NY tuff@ny.org
Yoda Leahy-Hu Friend 1 Peak Fribourg Switz. warble@sing.ch
Cyndi, Wendi, & Myndi Friends 5-X Rated St. Holiday FL 3cuties@fl.net
Whew. This stuff obviously needs to be read in by fields - word
counting won't do; neither will a text search. Arrays to the rescue!
#!/bin/bash
# 'nlmail' sends the monthly newsletter to friends listed
# in the phonebook
#
At this point, we have the "phonelist" file loaded into the four arrays
that we've created, ready for further processing. Each of the fields is
easily addressable, thus making the stated problem - that of e-mailing
a given file to all my friends - a trivial one (this snippet is a
continuation of the previous script):
# Note: bash would create the arrays automatically, since we'll
# use the 'name[subscript]' syntax to load the variables -
# but I happen to like explicit declarations.
declare -a name category address email
# Count the number of lines in "phonelist" and loop that
# number of times
for x in $(seq $(grep -c $ phonelist))
do
x=$(($x)) # Turns '$x' into a number
line="$(sed -n ${x}p phonelist)" # Prints line number "$x"
name[$x]="${line:0:25}" # Load up the 'name' variable
category[$x]="${line:25:10}" # Etc.,
address[$x]="${line:35:25}" # etc.,
email[$x]="${line:60:20}" # etc.
done
# Continued below ...
# Continued from above ...
for y in $(seq $x)
do
# We'll match for the word "friend" in the 'category' field,
# make it "case-blind", and clip any trailing characters.
if [ -z $(echo ${category[$y]##[Ff]riend*}) ]
then
mutt -a Newsletter.pdf -s 'S/V Ulysses News, 6/2000' ${email[$y]}
echo "Mail sent to ${name[$y]}" >> sent_list.txt
fi
done
That should do it, as well as pasting the recipients' names into a file
called "sent_list.txt" - a nice double-check feature that lets me see
if I missed anyone.
The array-handling capabilities of bash extend a bit beyond
this simple example. Suffice it to say that for simple cases of this sort, with
files under, say, a couple of hundred kB, bash
arrays are the way
to go. For my own curiosity, I created a list of names that was just over
100kB, using the "phonelist" from the above example -
for n in $(seq 300); do cat phonelist >> ph_list; done
- and ran it on my aging Pentium 233/64MB. 24 seconds; not bad for
1500 records and a "quick and dirty" tool.
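As a side note (a variation of mine, not from the article): the loop above re-runs sed for every record, which is what makes large files slow. A single-pass read loop does the same fixed-width field-splitting in one sweep; the sample record here is my own illustration.

```shell
#!/bin/bash
# Build a one-record sample file in the article's fixed-width layout:
# name (25 chars), category (10), address (25), e-mail (20).
printf '%-25s%-10s%-25s%-20s\n' \
    "Yoda Leahy-Hu" "Friend" "1 Peak Fribourg Switz." "warble@sing.ch" \
    > /tmp/phonelist

declare -a name category address email
x=0
# Read each line once instead of re-running sed per line number.
while IFS= read -r line; do
    x=$(($x + 1))
    name[$x]="${line:0:25}"
    category[$x]="${line:25:10}"
    address[$x]="${line:35:25}"
    email[$x]="${line:60:20}"
done < /tmp/phonelist

echo "Read $x record(s)"
```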
Wrapping It Up
bash, besides being very capable in its role as a command-line
interpreter/shell, boasts a large number of rather sophisticated tools
available to anyone that needs to create custom programs. In my opinion,
shell scripting suits its niche - that of a simple yet powerful
programming language - perfectly, fitting between command-line utility
usage and full-blown (C, Tcl/Tk, Python) programming, and should be part
of every *nix user's arsenal. Linux, specifically, seems to encourage
the "do it yourself" attitude among its users, by giving them access to
powerful tools and the means to automate their usage: something that I
consider a tighter integration (and that much higher a "usability
quotient") between the underlying power of the OS and the user
environment. "Power to the People!"
Until next month -
Happy Linuxing!
Quote of the Month
"...Yet terrible as UNIX addiction is, there are worse fates. If
UNIX is the heroin of operating systems, then VMS is barbiturate
addiction, the Mac is MDMA, and MS-DOS is sniffing glue. (Windows
is filling your sinuses with lucite and letting it set.)
--The Usenet Oracle
References
man pages: bash, builtins
"Introduction to Shell Scripting" by Ben Okopnik, LG #53
"Introduction to Shell Scripting" by Ben Okopnik, LG #54
Copyright © 2000, Ben Okopnik
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
Bluefish HTML Editor
By Martin Skjøldenrand
Copyright © 2000, Martin Skjøldenrand
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
Building a Secure Gateway, part II
By Chris Stoddard
Introduction
System Updates and Security Advisories
Physical Security
User Accounts and Passwords
Configuration files
#!/bin/sh
# Change owner to root
chown root.root $1
# Change permissions so only root has access
chmod 600 $1
# Make the file unalterable
chattr +i $1
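The script might be invoked as, say, `./secure-it /etc/hosts.deny` (my example target). Since the chown and chattr steps need root and an ext2/ext3 file system, this sketch exercises only the chmod step, on a scratch copy:

```shell
# Scratch copy, so we don't touch the real file.
cp /etc/hosts /tmp/hosts.copy
# The same permission change the secure-it script applies.
chmod 600 /tmp/hosts.copy
stat -c '%a' /tmp/hosts.copy    # prints: 600
```

After the real script runs, `lsattr` should show the `i` (immutable) flag set on the target file.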
ALL: ALL
echo "" > /etc/issue
echo "$R" >> /etc/issue
echo "Kernel $(uname -r) on $a $SMP$(uname -m)" >> /etc/issue
cp -f /etc/issue /etc/issue.net
echo >> /etc/issue
Before you save and close the /etc/rc.d/rc.local file, we want
to keep the system from responding to ICMP requests, such as ping
and traceroute, so we add the following lines right after the
#!/bin/sh line:
echo 1 > /proc/sys/net/ipv4/icmp_echo_ignore_all
echo 1 > /proc/sys/net/ipv4/tcp_syncookies
This will make your system all but invisible to the outside
world; the Script Kiddies can't crack what they can't find. The
second line helps protect your system from SYN Denial of Service
Attacks. Go ahead and save the file and exit. Please note, this
will also keep you from pinging the system, but should not
interfere with other functions, such as ssh or IP forwarding.
Finally, lock it down.
nospoof on
This will cause the system to reject any request coming from a
source outside your network that claims to be a system on the
inside of your LAN; this type of attack is called IP spoofing.
Go ahead and lock the file down with secure-it.
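For reference, after the change /etc/host.conf would contain something like this (a sketch; the order and multi lines are Red Hat's defaults and may differ on your system):

```
# /etc/host.conf
order hosts,bind     # consult /etc/hosts before DNS
multi on
nospoof on           # reject answers that fail a reverse-lookup check
```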
Other files we don't need to alter, but which should be locked
down, are /etc/services, /etc/passwd, /etc/shadow, /etc/group and
/etc/gshadow. If you plan to change your password or add a user,
you will first have to run "chattr -i filename" on /etc/passwd,
/etc/shadow, /etc/group and /etc/gshadow, or you will get an error
message.
/dev/hda1 / ext2 defaults 1 1
/dev/hda1 /boot ext2 defaults 1 2
/dev/cdrom /mnt/cdrom iso9660 noauto,owner,ro 0 0
/dev/hda5 /home ext2 defaults 1 2
/dev/hda6 /tmp ext2 defaults 1 2
/dev/sda1 /usr ext2 defaults 1 2
/dev/hda7 /var ext2 defaults 1 2
/dev/hda8 swap swap defaults 0 0
/dev/fd0 /mnt/floppy msdos noauto,owner 0 0
none /proc proc defaults 0 0
none /dev/pts devpts gid=5,mode=620 0 0
We want to change the /home and /tmp lines to read as follows:
/dev/hda5 /home ext2 rw,nosuid,nodev 1 2
/dev/hda6 /tmp ext2 rw,nosuid,nodev,noexec 1 2
/dev/hda1 / ext2 defaults 1 1
Change it to this:
/dev/hda9 / ext2 defaults,usrquota 1 1
Add the following lines to /etc/rc.d/rc.local
/sbin/quotacheck -avug
/sbin/quotaon -avug
Now type "touch /quota.user" and then "chmod 700 /quota.user",
and reboot the system. There may be some error messages about
quota; ignore them. Once the system is back up, you will need
to set the quota for what should be the only user account. Type
"edquota -u username", replacing "username" with the name of your
user account. This should bring up the vi text editor showing
something similar to this:
Quotas for user username:
/dev/hda1: blocks in use: 7, limits (soft = 0, hard = 0)
inodes in use: 6, limits (soft = 0, hard = 0)
By setting a block limit, you are limiting how much drive
space the user can consume in KB; by setting the inode limit,
you are limiting the number of files the user can have.
Soft limits, when exceeded, will warn the user; hard limits
are absolute. Unless you have a very good reason to set
them higher, such as planning to transfer MP3s to this
machine, I suggest setting the limits fairly low, something
like 10 MB of disk space and 100 files. Edit the lines so
they look like this, then save the file and exit.
Quotas for user username:
/dev/hda1: blocks in use: 7, limits (soft = 5120, hard = 10240)
inodes in use: 6, limits (soft = 50, hard = 100)
This will set a soft limit of 50 files taking up 5 MB and an
absolute limit of 100 files consuming 10 MB of drive space.
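As a sanity check on those numbers, quota blocks on ext2 are 1 KB each, so the MB-to-blocks conversion can be sketched as follows (mb_to_blocks is a hypothetical helper of mine, not part of the quota tools):

```shell
# Hypothetical helper: convert a size in MB to 1 KB quota blocks
# (assumption: 1 block = 1 KB, as the quota tools use on ext2).
mb_to_blocks() {
    echo $(( $1 * 1024 ))
}

mb_to_blocks 5     # soft limit: 5120 blocks
mb_to_blocks 10    # hard limit: 10240 blocks
```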
-rwsr-xr-x 1 root root 35168 Sep 22 23:35 /usr/bin/chage
-rwsr-xr-x 1 root root 36756 Sep 22 23:35 /usr/bin/gpasswd
-r-xr-sr-x 1 root tty 6788 Sep 6 18:17 /usr/bin/wall
-rwsr-xr-x 1 root root 33152 Aug 16 16:35 /usr/bin/at
-rwxr-sr-x 1 root man 34656 Sep 13 20:26 /usr/bin/man
-r-s--x--x 1 root root 22312 Sep 25 11:52 /usr/bin/passwd
-rws--x--x 2 root root 518140 Aug 30 23:12 /usr/bin/suidperl
-rws--x--x 2 root root 518140 Aug 30 23:12 /usr/bin/sperl5.00503
-rwxr-sr-x 1 root slocate 24744 Sep 20 10:29 /usr/bin/slocate
-rws--x--x 1 root root 14024 Sep 9 01:01 /usr/bin/chfn
-rws--x--x 1 root root 13768 Sep 9 01:01 /usr/bin/chsh
-rws--x--x 1 root root 5576 Sep 9 01:01 /usr/bin/newgrp
-rwxr-sr-x 1 root tty 8328 Sep 9 01:01 /usr/bin/write
-rwsr-xr-x 1 root root 21816 Sep 10 16:03 /usr/bin/crontab
-rwsr-xr-x 1 root root 5896 Nov 23 21:59 /usr/sbin/usernetctl
-rwsr-xr-x 1 root bin 16488 Jul 2 10:21 /usr/sbin/traceroute
-rwxr-sr-x 1 root utmp 6096 Sep 13 20:11 /usr/sbin/utempter
-rwsr-xr-x 1 root root 14124 Aug 17 22:31 /bin/su
-rwsr-xr-x 1 root root 53620 Sep 13 20:26 /bin/mount
-rwsr-xr-x 1 root root 26700 Sep 13 20:26 /bin/umount
-rwsr-xr-x 1 root root 18228 Sep 10 16:04 /bin/ping
-rwxr-sr-x 1 root root 3860 Nov 23 21:59 /sbin/netreport
-r-sr-xr-x 1 root root 26309 Oct 11 20:48 /sbin/pwdb_chkpwd
As you can see, the left-hand column shows the permissions of each
file: anything with an "s" in it has its SUID (or SGID) bit set.
With the SUID bit removed, the program no longer runs with root's
privileges, so in practice only root will be able to use it.
What needs to be done now is to decide which ones can have the SUID
bit safely turned off--many of these programs require it for normal
operation, but many should be run only by root anyway. You turn the
SUID bit off with the following command: "chmod a-s filename". My
suggestions for this step are /usr/bin/chage, /usr/bin/gpasswd,
/usr/bin/wall, /usr/bin/chfn, /usr/bin/chsh, /usr/bin/newgrp,
/usr/bin/write, /usr/sbin/usernetctl, /usr/sbin/traceroute,
/bin/mount, /bin/umount, /bin/ping, and /sbin/netreport.
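Those suggestions can be applied in one pass with a small loop; this is just a sketch (run it as root on the real paths), wrapped in a function so it can be tried on a scratch file first:

```shell
#!/bin/sh
# Strip the SUID/SGID bits from each file given; skip missing paths.
strip_suid() {
    for f in "$@"; do
        [ -e "$f" ] && chmod a-s "$f"
    done
    return 0
}

# On the gateway (as root) you would run it on the list above, e.g.:
#   strip_suid /usr/bin/chage /usr/bin/gpasswd /bin/mount /bin/ping ...
# Demonstrated here on a scratch file instead of a system binary:
demo=$(mktemp)
chmod 4755 "$demo"               # give it a SUID bit: -rwsr-xr-x
strip_suid "$demo"
ls -l "$demo" | cut -c1-10       # the "s" is gone: -rwxr-xr-x
rm -f "$demo"
```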
Checking system integrity
1 0 * * * /usr/local/fcheck/fcheck -a > /root/fcheck.txt
Replace the path to check with your own path, save and exit.
Now at 12:01 every night, fcheck will run and the output will be
placed in /root/fcheck.txt. If at any time fcheck detects altered
files which you cannot account for, immediately remove the affected
package from the system and reinstall it from the RedHat CD. Any time
you make a change to a file, you will need to rerun "fcheck -ca" and
build another baseline.
Finished
Copyright © 2000, Chris Stoddard
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
Regular Expressions in C
By Ben Tindale
Scope
The Gnu C Library and Regular Expressions
Mygrep.c
bash> ./mygrep -f mygrep.c -p int
Line 17: int match_patterns(regex_t *r, FILE *FH)
Line 36: printf("Line %d: %s", line_no, line);
Line 52: printf("In error\n");
bash>
In particular, we explored the capable GNU
regular expression library, regex.h, which simplifies the inclusion of
regular expression matching into your program, and
provides a safe and simple interface to these capabilities.
Copyright © 2000, Ben Tindale
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
An Introduction to Object-Oriented Programming in C++
By Michael Williams
So what is OOP?
What you'll need and who this is for
A Historical Interlude
Data types
int number, rooms;
bool garden;
Classy!
class house
{
public:
int number, rooms;
bool garden;
};
int main()
{
house my_house;
my_house.number=40;
my_house.rooms=8;
my_house.garden=true;
return 0;
}
Member Functions
class square
{
public:
int length, width;
int area()
{
return length*width;
}
};
int main()
{
square my_square;
my_square.length=5;
my_square.width=2;
cout<<my_square.area();
return 0;
}
Function Definitions Outside the Class Definition
class square
{
public:
int length, width;
int area();
};
int square::area()
{
return length*width;
}
Public or Private?
int main()
{
square my_square;
my_square.length=2;
cout<<my_square.length;
return 0;
}
class square
{
private:
int length, width;
int area();
};
Class Constructors
int main()
{
square my_square;
my_square.length=2;
my_square.width=3;
}
class square
{
public:
int length, width;
square(int length1, int width1)
{
length=length1;
width=width1;
}
int area()
{
return length*width;
}
};
int main()
{
square my_square(5, 2);
cout<<my_square.area();
return 0;
}
Arrays and Classes
class person
{
public:
int age, house_number;
};
int main()
{
person alex[5];
for(int x(0); x<5; x++)
{
alex[x].age=x;
alex[x].house_number=x;
cout<<"Age is "<<alex[x].age<<endl
<<"House number is "<<alex[x].house_number<<endl;
}
return 0;
}
In closing
Copyright © 2000, Michael Williams
Published in Issue 55 of Linux Gazette, July 2000
"Linux Gazette...making Linux just a little more fun!"
The Back Page
About This Month's Authors
Fernando Correa
Fernando is a computer analyst just about to finish his degree at the
Federal University of Rio de Janeiro. He and his staff have built the best
Linux portal in Brazil, and have further
plans to improve services and content for their Internet users.
Juan Ignacio Santos Florido
I am a computer engineering student at the E.T.S.I.Inf in Malaga, Spain.
I've used Linux since kernel 1.2 and enjoy turning the Linux internals
upside down.
Bryan Henderson
Bryan Henderson is an operating systems programmer from way back,
working mostly on large scale computing systems. Bryan's love of
computers began with a 110 baud connection to a local college for a
high school class, but Bryan had little interest in home computers
until Linux came out.
Ben Okopnik
A cyberjack-of-all-trades, Ben wanders the world in his 38' sailboat, building
networks and hacking on hardware and software whenever he runs out of cruising
money. He's been playing and working with computers since the Elder Days
(anybody remember the Elf II?), and isn't about to stop any time soon.
Martin Skjöldebrand
Martin is a former archaeologist who now does system
administration for a 3rd world aid organisation. He also does web
design and has been playing with computers since 1982 and Linux since
1997.
Chris Stoddard
I work for Dell Computer Corporation doing "not Linux" stuff. I have been
using computers since 1979 and I started using Linux sometime in 1994,
exclusively since 1997. My main interest is in networking implementations,
servers, security, Beowulf clusters, etc. I hope someday to quit my
day job and become the Shepherd of a Linux Farm.
Ben Tindale
I'm working full time for Alcatel Australia on various xDSL technologies
and writing Java based web apps. I've currently taken a year off from
studying to work, and have just sold my share in an internet cafe I
helped to found. So much to learn, so little time :)
Michael Williams
Currently studying for society's latest waste of his valuable time--GCSE
examinations, Mike's attention has recently turned towards Linux and
open-source software. Mike has been programming since the tender age of
eight, when he got his first Commodore 64. Mike loves C++, hates Micro$oft,
Sony, and anything else that represents any form of establishment. He would
like to say hi to his mom, Alan, Dai and, yes, even RK. Recently, RK claimed
to have blown up a cow using but a strip of magnesium and a match (don't
worry RK, the men in white suits will be along verrryyy soon....)
Not Linux
Editor, Linux Gazette, gazette@ssc.com
This page written and maintained by the Editor of the Linux Gazette.
Copyright © 2000, gazette@ssc.com
Published in Issue 55 of Linux Gazette, July 2000