
           Linux Gazette... making Linux just a little more fun!
                                      
         Copyright © 1996-98 Specialized Systems Consultants, Inc.
     _________________________________________________________________
   
                       Welcome to Linux Gazette! (tm)
     _________________________________________________________________
   
                                 Published by:
                                       
                               Linux Journal
     _________________________________________________________________
   
                                 Sponsored by:
                                       
                                   InfoMagic
                                       
                                   S.u.S.E.
                                       
                                    Red Hat
                                       
                                   LinuxMall
                                       
                                Linux Resources
                                       
                                    Mozilla
                                       
                                   cyclades
                                       
   Our sponsors make financial contributions toward the costs of
   publishing Linux Gazette. If you would like to become a sponsor of LG,
   e-mail us at sponsor@ssc.com.
   
   Linux Gazette is a non-commercial, freely available publication and
   will remain that way. Show your support by using the products of our
   sponsors and publisher.
     _________________________________________________________________
   
                             Table of Contents
                          December 1998 Issue #35
     _________________________________________________________________
   
     * The Front Page
     * The MailBag
          + Help Wanted
          + General Mail
     * More 2 Cent Tips
     * News Bytes
          + News in General
          + Software Announcements
     * The Answer Guy, by James T. Dennis
     * Basic Emacs, by Paul Anderson
     * Creating A Linux Certification, Part 3, by Dan York
     * 1998 Editor's Choice Awards, by Marjorie Richardson
     * Happy Hacking Keyboard, by Jeremy Dinsel
     * Getting Started with Linux, by Prakash Advani
     * The GNOME Project, by Miguel de Icaza
     * IMAP on Linux: A Practical Guide, by David Jao
     * Linux Installation Primer, Part 4, by Ron Jenkins
     * Linux - the Darling of COMDEX 1998?, by Norman M. Jacobowitz
     * My Hard Disk's Resurrection, by Ren Tavera
     * New Release Reviews, by Larry Ayers
          + Two Small Personal Databases for Linux
     * Product Review: Partition Magic 4.0, by Ray Marshall
      * Quick and Dirty RAID Information, by Eugene Blanchard
     * The Back Page
          + About This Month's Authors
          + Not Linux
       
   The Answer Guy
   The Muse will return next month.
     _________________________________________________________________
   
   TWDT 1 (text)
   TWDT 2 (HTML)
   are files containing the entire issue: one in text format, one in
   HTML. They are provided strictly as a way to save the contents as one
   file for later printing in the format of your choice; there is no
   guarantee of working links in the HTML version.
     _________________________________________________________________
   
   Got any great ideas for improvements? Send your comments, criticisms,
   suggestions and ideas.
     _________________________________________________________________
   
   This page written and maintained by the Editor of Linux Gazette,
   gazette@ssc.com
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                                The Mailbag!
                                      
                    Write the Gazette at gazette@ssc.com
                                      
                                 Contents:
                                      
     * Help Wanted -- Article Ideas
     * General Mail
     _________________________________________________________________
   
                        Help Wanted -- Article Ideas
     _________________________________________________________________
   
   Date: Sun, 01 Nov 1998 21:08:28 -0800
   From: Ted, brwood@worldstar.com
   Subject: printing issues as users
   
    I have the following problem that nobody seems to have a good answer
    to. If I print as root, everything is fine. If I try to print as a
    user, I get "lpr: connect error, permission denied" and "jobs
    queued, but daemon could not be started".
    
    This is under Red Hat 5.1. Any tips?
   
   --
   Ted Brockwood
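
      (A frequent cause of "lpr: connect error, permission denied" on
      Red Hat systems of this vintage is lpr losing its setuid bit, so
      ordinary users cannot talk to the lpd daemon. This is a guess
      from the symptoms, not a confirmed diagnosis. The sketch below
      demonstrates checking and restoring a setuid bit on a scratch
      file; on a real system the file to inspect would be /usr/bin/lpr.
      --Editor)

```shell
# Demonstrate inspecting and restoring the setuid bit. A temporary
# file stands in for /usr/bin/lpr so this is safe to run anywhere.
f=$(mktemp)
chmod 0755 "$f"            # normal executable permissions
chmod u+s "$f"             # add the setuid bit (as root on the real lpr)
ls -l "$f" | cut -c1-10    # the "s" in -rwsr-xr-x is the setuid bit
rm -f "$f"
```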
     _________________________________________________________________
   
   Date: Mon, 2 Nov 1998 19:34:49 GMT
    From: richard.c.hodges@lineone.net
    Subject: Help Wanted!
   
   I have a PII (350MHz) running with an AGP ATI 3DRage graphics card
   (which works fine) and a Sound Blaster 16 PnP (which also works fine).
   But, I'd be buggered if I can get my internal SupraExpress 56k modem
   to work.
   
   I have set the port (cua2 - COM3 in Windows) to IRQ11 (as it is under
   Mr. Gates' OS) and the memory but it won't work. I tried changing the
   modem initialization strings and still nothing. Minicom says that
   there is no connection (!?).
   
   If someone can help me, I would be most grateful as I want to use
   Netscape under X because I want to use less of Windows because it's no
   good and expensive and hey, who likes expensive stuff eh?
   
   Thanks for your time
   
   --
   Richard Hodges
     _________________________________________________________________
   
    Date: Tue, 3 Nov 1998 11:47:43 +0100
    From: Carlo Vinante, Vinante@igi.pd.cnr.it
   Subject: K6-2 Troubles on Linux
   
    First, I would like to thank all the people at Linux Gazette who
    answered my previous mail.
    
    I have another request now:
    
    I've upgraded my system from a K5 at 133 MHz to a K6-2 3D at 266 MHz
    processor, and, as written in the Linux HOWTOs, "... with the older
    versions of the K6 we have to disable the cache memory ...".
    
    My fault was that I didn't read the HOWTO before buying the new
    processor, but now I'm asking myself whether a K6-2 really is an
    "older" version of the K6 family.
    
    The system runs anyway, but it is a little slow. :-( Is disabling
    the cache the only way to fix this problem? If not, which kind of
    K6 can I safely use?
    
    Thanks in advance to all the Linux people. Have fun. :)
   
   --
   Carlo Vinante
     _________________________________________________________________
   
   Date: Tue, 03 Nov 1998 12:51:31 +0530
   From: Prakash Advani, prakash@bom5.vsnl.net.in
   Subject: Questions
   
   I'm interested in setting up Sendmail so that it routes mail over the
   Internet for users who are not on the system.
   
   What I have done is setup a Web site and a Linux server on my
   Intranet. Both have the same domain name. I can download mail and
   distribute it internally using fetchmail and procmail. I can also send
   mails to users on the Internet as well as users within the network.
   
   What I would like Sendmail to do is check if the user is a valid user
   on the system. If so it should deliver the mail internally, otherwise
   it should route the mail over the Internet.
   
   Any help would be greatly appreciated.
   
   --
   Prakash
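
      (Sendmail can do this with the stickyhost feature plus a
      LUSER_RELAY, which sends mail addressed to unknown local users to
      another host instead of bouncing it. The fragment below is a
      sketch using sendmail 8.9-era m4 macros; the relay hostname is a
      placeholder, not something from your setup. --Editor)

```shell
# Fragment for a sendmail .mc file (relay hostname is a placeholder).
# Mail for valid local users is delivered locally; mail addressed to
# unknown local users is relayed to the ISP's mail host instead.
FEATURE(stickyhost)
define(`LUSER_RELAY', `smtp:mail.your-isp.example')
```

      (Rebuild sendmail.cf from the .mc file with m4 and restart
      sendmail afterwards. --Editor)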
     _________________________________________________________________
   
   Date: Wed, 04 Nov 1998 19:01:02 +0000
   From: Roberto Urban, roberto.urban@uk.symbol.com
   Subject: Help Wanted - Installation On Single Floppy
   
   My problem seems to be very simple yet I am struggling to solve it. I
   am trying to have a very basic installation of Linux on a single
   1.44MB floppy disk and I cannot find any documents on how to do that.
   
   My goal is to have just one floppy with the kernel, TCP/IP, network
   driver for 3COM PCMCIA card, Telnet daemon, so I could demonstrate our
   RF products (which have a wireless Ethernet interface - 802.11 in case
   you are interested) with just a laptop PC and this floppy. I have
   found several suggestions on how to create a compressed image on a
   diskette but the problem is how to create and install a _working_
   system on the same diskette, either through a RAM disk or an unused
   partition. The distribution I am currently using is Slackware 3.5.
   
   I would appreciate every help in this matter.
   
   --
   Roberto Urban
     _________________________________________________________________
   
   Date: Sat, 07 Nov 1998 13:01:39 +0100
   From: Bob Cloninger, bobcl@ipa.net
   Subject: Dual HP Ethernet 10/100VG
   
   These are PCI controllers that seem to have some ISA characteristics.
   Everything I found said multiple PCI controllers could share a single
   driver, but that apparently isn't the case for this controller. I was
   never able to force the probe for the second card.
   
    The first two (alias) lines were added by the X Window
    configuration, and I added the two (options) lines to
    /etc/conf.modules:

 alias eth0 hp100
 alias eth1 hp100
 options eth0 -o hp100-0
 options eth1 -o hp100-1

   "eth1" popped right up on the next reboot. This is well documented for
   ISA controllers, but I couldn't find it associated with PCI anywhere.
   Desperation + trial and error...
   
   I'm an experienced system administrator, but new to Linux. Is this
   something I overlooked in the documentation or web sites?
   
   --
   Bob Cloninger
     _________________________________________________________________
   
   Date: Thu, 05 Nov 1998 13:45:44 +0100
   From: Tony Grant, tg001@dial.oleane.com
   Subject: ISDN on Linux
   
   I am looking for help from a person who has an ISDN connection running
   on Red Hat 5.1, 2.0.35, Intel (K6 -2) with USR sportster internal
   card. I have managed to run ISDN on both S.u.S.E. and Red Hat but
   since I have upgraded my machines from P166 to AMD K6-2 300 MHz it
   doesn't work anymore...
   
   --
   Tony Grant
     _________________________________________________________________
   
   Date: Wed, 18 Nov 1998 15:33:18 -0500
   From: terrence.yurejchu, ktwy@dragonbbs.com
   Subject: So How do I get the most from Linux
   
   I have made an extensive, and personal (money-wise) commitment to
   Microsoft and Windows and ... (from MS). I can say I am not entirely
   pleased, but then I began in the days of CP/M and never enjoyed the MS
   flavor to it. I like the idea of Unix/Linux but I do have all this
   software that is for the MSWin platform.
    1. Do I have to give it all up?
     2. I know that Sun had (or has) software that enabled Unix to run
        MSWin software; is anything like that available for Linux?
       
   Thanks,
   Terry Yurejchuk
   
     (There is a project called WINE that allows you to run some Windows
     software on Linux. Unfortunately, it's way behind. However, Corel
     seems to be backing getting it more up to date so this may change
     soon. Also, you can set up your computer to run both Windows and
     Linux using LILO to pick which operating system to run when you log
     on, or you can network the two systems using Samba. So no need to
     give up anything. --Editor)
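
      (For reference, a minimal dual-boot /etc/lilo.conf looks
      something like the sketch below; the device names and labels are
      examples only, not taken from any particular system. Run
      /sbin/lilo after editing so the new configuration is written to
      the boot sector. --Editor)

```shell
# /etc/lilo.conf sketch for dual-booting Linux and Windows.
# All device paths and labels here are examples only.
boot=/dev/hda          # install the boot loader in the MBR
prompt                 # ask which system to boot
timeout=50             # wait 5 seconds, then boot the default
image=/boot/vmlinuz    # the Linux kernel
    label=linux
    root=/dev/hda2
    read-only
other=/dev/hda1        # the Windows partition
    label=windows
```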
     _________________________________________________________________
   
   Date: Sat, 14 Nov 1998 15:56:27 -0700 (MST)
   From: Michael J. Hammel, mjhammel@graphics-muse.org
   Subject: Re: graphics for disabled
   
   In a previous message, Pierre LAURIER says:
   I'm just a new user of Linux, without too much time to consider
   learning it. I'm just having a quick question : Do you know of
   specific developments that have been made on X environments (KDE,
   GNOME or others) that are giving specific features for visually
   impaired people.
   
    No, I don't know of anything like this that's specifically planned
    for the desktop designs.
   
   - control of the pointer device with the keyboard
   
    You can do that now if you use the IBM "mouse" - the little post
    that's placed right in the keyboard. But that depends on your
    definition of
   "control". If what you're really looking for is to use the tab key,
   for example, to move from application to application then you can
   already do that with some window managers. Then the applications need
   to have proper traversal configuration (done in the source code, not
   from the user's perspective) to allow movement of keyboard focus
   within the application.
   
   - customizing the pointer with any kind of shape, color...etc
   
   Doable, but I don't know to what level KDE or GNOME supports this. It
   would have to be done in the Window Manager in order for it to be
   applicable to all applications.
   
   - features that help retrieve the cursor on the screen (key stroke,
   blinking etc...)
   
    I take it you mean "find it" - make it stand out visually so you
    can tell where it is. Again, this would be a function of the window
    manager. None of them currently does anything like this, at least
    not that I know of.
   
   - instant zooming of the screen (by function key for example)
   
   This would be a function of the X server, not the window manager or
   GNOME or KDE. None of the X servers have a "zoom" per se, but they all
   support on the fly resolution switching via the keyboard.
   
   - changing screen color/ resolution etc on the fly
   
    Resolution switching can be done with CTRL-ALT-KEYPAD-PLUS and
    CTRL-ALT-KEYPAD-MINUS with the Xi Graphics server, and XFree86 does
    the same. (CTRL-ALT-BACKSPACE, by contrast, kills the X server.)
    With either server you have to configure multiple resolutions for
    this to work. I don't use this feature myself, so I can't explain
    exactly how it's done.
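
      (Under XFree86, on-the-fly switching requires listing more than
      one mode in the Screen section of XF86Config; CTRL-ALT-KEYPAD-PLUS
      and CTRL-ALT-KEYPAD-MINUS then cycle through the list. The
      fragment below is a sketch in XFree86 3.x syntax; the driver name
      and resolutions are examples only. --Editor)

```shell
# Fragment of /etc/XF86Config (XFree86 3.x syntax; resolutions are
# examples). The server cycles through the Modes list when
# CTRL-ALT-KEYPAD-PLUS or CTRL-ALT-KEYPAD-MINUS is pressed.
Section "Screen"
    Driver "svga"
    Subsection "Display"
        Depth   8
        Modes   "1024x768" "800x600" "640x480"
    EndSubsection
EndSection
```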
   
   By "changing color" I take it to mean the color of the background
   and/or borders/frames around windows. This would be a function of the
   window manager. CDE (a commercial environment that uses the Motif
   Window Manager, aka mwm) supports this. I don't think any other window
   managers support it just yet but they might.
   
    And I'm only mentioning features for disabled people here, not
    blind ones. But one way or the other, the IT community needs to
    remember that the computer can be a fantastic tool for these people
    as well.
   
   True. The problem is finding someone who both understands what the
   issues are and has an interest in doing the work (or organizing the
   work to be done, either by the OSS community or by commercial
   vendors).
   
    I'm sorry for taking your time. If you're not a person who can help
    here, please just pass this message along to anyone who could.
   
   I'll forward this reply to the Linux Gazette. They'll probably post it
   and maybe someone with better information than I will contact you.
   
   --
   Michael J. Hammel
     _________________________________________________________________
   
   Date: Thu, 12 Nov 1998 07:33:32 -0800
   From: Sergio E. Martinez, sergiomart@csi.com
   Subject: article idea
   
   I'm just writing in with an idea for a quick article. I've been using
   the GNOME desktop. I'm a relative Linux newbie though, and I think
   that many of your less experienced readers could probably benefit from
   a short article about window managers. These are some things I
   currently don't quite understand:
    1. Terminology: The differences (if any) among a GUI, a window
       manager, a desktop, and an interface. How do they differ from X
       windows?
    2. Do all window managers (like GNOME or KDE or FVWM95) run on top of
       X windows?
    3. What exactly does it mean for an application to be GNOME or KDE
       aware? What happens if it's not? Can you still run it?
    4. What exactly do the GTK+ (for GNOME) or Troll (for KDE) libraries
       do?
    5. How does the history of Linux (or UNIX) window managers compare to
       that of say, the desktop given to Win98/95 users? How,
       specifically, does Microsoft limit consumer's choices by giving
       them just one kind of desktop, supposedly one designed for ease of
       use?
    6. What's happening with Common Desktop Environment? Is it correct
       that it's not widely adopted among Linux users because it's a
       resource hog, or not open source?
       
   These are some questions that might make an enlightening, short
   article. Thank you for your consideration.
   
   --
   Sergio E. Martinez
     _________________________________________________________________
   
   Date: Wed, 25 Nov 1998 08:52:09 +0200
   From: Volkan Kenaroglu, volkan@sim.net.tr
   Subject: I couldn't install my sound card :)
   
    I am new to using Linux. I recently installed Debian 1.3 on my
    systems both at work and at home, but I couldn't install my sound
    card (Opti-931), even though Debian 1.3 is said to support it.
    The installation never asked whether I have a sound card, nor did
    it detect one. :( Please help me.
   
   --
   Volkan
     _________________________________________________________________
   
   Date: Wed, 25 Nov 1998 14:27:43 +0800
   From: ngl@gdd.cednet.gov.cn
   Subject: whether Xircom is supported?
   
    I installed Red Hat 5.1 on a notebook computer which has a Xircom
    card, but Red Hat 5.1 has no Xircom driver. I want to know whether
    Red Hat 5.2 supports this card.
   
   Thanks!
     _________________________________________________________________
   
   Date: Mon, 09 Nov 1998 17:06:47 +1300
   From: Maximum Internet, lakejoy@wk.planet.gen.nz
   Subject: PPP Linux list
   
    We unsubscribed from the PPP Linux list but are still receiving the
    mail, even though we received a reply saying that our unsubscription
    was successful. What do we do? Thank you
     _________________________________________________________________
   
   Date: Wed, 11 Nov 1998 09:56:16 +0100 (MET)
   From: Gregor Gerstmann (s590039), gerstman@tfh-berlin.de
   Subject: Linking
   
    I would appreciate it if somebody would write something about
    linking separately compiled Fortran and C programs (don't ask me
    why), with
     1. main in Fortran
     2. main in C.
        
    Another problem: after some installations, at shutdown I always
    get 'locale not supported by C library, locale unchanged'. I get
    something similar when I convert an .rpm into a .tgz file with
    alien.
   
   --
   Gregor
     _________________________________________________________________
   
   Date: Sun, 29 Nov 1998 14:52:05 +0000
   From: "Dicer", Dicer@crosswinds.net
   Subject: Help wanted: ATX Powerdown
   
    How is it possible to power down my ATX motherboard under Linux,
    instead of just doing a reboot or halt? Are there any known sources
    or programs for this?
   
   --
   Felix Knecht
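
      (On a 2.0.x kernel, powering an ATX box off at shutdown needs APM
      support compiled into the kernel, including the power-off-on-
      shutdown option. The option names below are from memory, so
      verify them in make menuconfig. --Editor)

```shell
# Kernel .config fragment (Linux 2.0.x; option names from memory,
# verify under the APM support section of "make menuconfig"):
CONFIG_APM=y
CONFIG_APM_POWER_OFF=y
```

      (With such a kernel installed, "shutdown -h now" should turn the
      machine off rather than leaving it at the "System halted"
      message. --Editor)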
     _________________________________________________________________
   
                                General Mail
     _________________________________________________________________
   
   Date: Sun, 1 Nov 1998 20:05:53 -0500
   From: Ed Roper, eroper@hrfn.net
   Subject: Securing your system?
   
   Regarding the article in the Nov 1998 issue of Linux Gazette, entitled
   "Securing Your System": What are you guys doing in the editing dept.?
   Since when did "TELNET" read the .rhosts file? One can accept this
   typo if it appeared maybe once, but it occurred several times. This is
   perhaps one of the worst cases of misinformation I have ever seen in a
   computer-related article.
   
     (Sorry about that. Perhaps you don't realize but there are no "guys
     in the editing" department. Articles are posted as they come
     without fee or warranty. If there is a mistake, someone usually
     lets us know, as you have, and we print the correcting letter. You
     are the only one who wrote about this particular article. Thanks.
     --Editor) 
     _________________________________________________________________
   
   Date: Mon, 02 Nov 1998 12:07:52 -0800
   From: Dave Stevens, dstevens@mail.bulkley.net
   Subject: Dan Helfman
   
   I am a computer dealer with a strong interest in Unix as an operating
   system, in Linux as a very good Unix implementation, and a regular
   reader of the Linux Gazette web site. In the November issue at
   www.linuxgazette.com is a reference to a series of postings at
   http://www.nerdherd.net/oppression/9810/ucla.shtml.
   
   These postings detail an issue that has arisen with Mr. Dan Helfman's
   use of your residence network facilities. Not having any other
   information, I am proceeding on the assumption that the statements
   made there are accurate.
   
   If, indeed, they are accurate, I am afraid they portray UCLA's
   administration in a damn poor light. Arbitrariness, secretiveness,
   powermongering and really outstanding stupidity seem to characterize
   the administration's motives and actions, while Mr. Helfman appears to
   have conducted himself with both taste and restraint. I am a
   university person myself and I must say I had rather hoped the kind of
   bullshit I had to deal with in my own student days had been improved
   on in the intervening decades.
   
   How unfortunate that UCLA has learned nothing.
   
   You ought to restore a network connection to Mr. Helfman immediately
   and tender him a public apology now.
   
   If my information is wrong or some reasonable solution has developed,
   no-one would be happier than I.
   
   Dave Stevens
     _________________________________________________________________
   
   Date: Wed, 04 Nov 1998 13:28:59 +0100
   From: Francois Heizmann, francois_heizmann@hp.com
   Subject: Comments for improvements?
   
   In the main page you're requesting "great" ideas for improvements...
   
   Well ! I'm sad to say you did a perfect job... :-)
   
   Please keep on going that way.
   
   Cheers,
   Francois Heizmann
     _________________________________________________________________
   
   Date: Sat, 21 Nov 1998 22:51:52 -0700
   From: Evelyn Mitchell, efm@tummy.com
   Subject: Linux Demonstration at Park Meadows CompUSA
   
   This afternoon, Kevin Cullis, Business Account Manager at the Denver
   Park Meadows CompUSA, graciously invited several Northern Colorado
   Linux advocates and consultants to help him set up a demonstration
   Linux system.
   
   Attending were Lynn Danielson of CLUE, George Sowards, Brent Beerman,
   Fran Schneider, Alan Robertson of the High Availability Linux Project,
   and Sean Reifchneider and I of tummy.com, and Pete who has been
   advocating Linux to Kevin for several years.
   
   Kevin started out describing some of the opportunities he sees for
   Linux in small and home offices, and was quite enthusiastic about
   using Linux as a tool to leverage information in Intranets, Internets,
   and Extranets (VPNs). We discussed the strengths and weaknesses of
   Linux as a desktop machine, particularly the different style of
   administration required between Windows or Macintoshes and Linux, and
   the ways in which the Linux community, particularly Wine, is moving
   closer to achieving binary compatibility with Wintel applications. We
   also discussed how reliability is the biggest selling factor for those
   power users who are sick of the Blue Screen of Death.
   
   We installed Red Hat 5.2 using server mode as a fresh install first,
   and Kevin was absolutely delighted with how simple it was. Three
   questions and 20 minutes.
   
   While the applications were loading for Red Hat, Sean hooked up the
   machine we brought loaded with Red Hat 5.2, KDE, Enlightenment and
   Applix. Kevin was very impressed with KDE, I suspect because he was
   expecting a much different interface. He could see selling a KDE
   system to someone who had only used Windows or Macintoshes without any
   problem.
   
   We then installed Caldera 1.3 on the first machine, as a dual boot.
   The installation was only slightly more complicated than the Red Hat
   server mode.
   
   This is only the beginning of the journey, though. Lynn Danielson will
   be guiding Kevin through the basics of administering and demonstrating
   these systems. On December 10th many of the participants today will be
    meeting again at the Boulder Linux Users Group Mini-Expo to get a
    look at a much broader range of Linux applications.
   
   As Sean said, a good Saturday of advocacy.
   
   Evelyn Mitchell
     _________________________________________________________________
   
   Date: Wed, 18 Nov 1998 11:06:59 +0000
   From: Harry Drummond, in4831@wlv.ac.uk
   Subject: re: Linux easy/not easy/not ready/ready YIKES
   
   I have a lot of sympathy with Tim Gray's remarks on the intelligence
   of the user, but (inevitably) I also have reservations.
   
   I'm not a computer professional of any kind, but I bought a BBC
   computer way back in 1983 and taught myself to program. I then learned
   two other flavors of Basic, then QuickBasic, and currently Delphi for
   a hobby application I've been selling since 1989. I also taught myself
   HTML (and taught others afterwards). And while I haven't yet got to
   grips with Linux because the latest version of my application is due
   out again, I have the two versions of Linux recently distributed on UK
   magazines and I'm at least 90% confident of installing it. The other
   10% will be the challenge.
   
   But in common with many users, I apply the maxim "when all else fails
   read the manual" (ironic when I write a manual for my own
   application). As a result, I have spent months programming things that
   I then learned could have been done much more simply *if I'd only
   known the command.* Well, at the time I didn't! And the very wealth of
   material can be a hindrance if you cannot yet slot all the bits into
   the right place in your mind. It's also enormously frustrating to work
   with manuals, etc. (when you *do* read them!), that gloss over the
   particular point that causes trouble. In some cases, the problem is
   more imaginary than real - but it's real enough to the beginner until
   he/she cracks it.
   
   I work in a University Library where we do our best to get students
   using computers. Some need only a hint, some will never understand
   more than a tiny fragment. But we've produced the briefest handouts we
   can (1 sheet of paper) and still had the student begging for help when
   the answer was plainly written in the handout clutched in their
   fingers. People commonly want people for help, not documents.
   
   Finally, some people don't want education, they want to cut straight
   to the answers. If we're honest, we all do it at different times. I've
   got stacks of software that came on magazine discs. Unless they really
   fascinate me, the only ones likely to survive a five-minute
   exploration are those that convince me I can make them work with
   minimum effort. With me, as with many users, it isn't intelligence
   that's in question, it's commitment to the task in hand. And that
   determines whether the user is into exploration and education, or just
   picking up a work-ready tool for an hour.
   
   I'll see you with my newbie questions shortly!
   
   --
   Harry Drummond.
     _________________________________________________________________
   
   Date: Wed, 18 Nov 1998 10:07:36 +0000
   From: Harry Drummond, in4831@wlv.ac.uk
   Subject: Not Linux
   
   I read your remarks on Jonathan Creek with interest, but appreciate
   them while you can. They only make about 6 episodes at a time, with (I
   think) two series in all so far. I suspect the concept was a one-off
   series to test the water and was successful enough to do more.
   
   My wife and I (as ordinary viewers) are confidently looking forward to
   a third series in due course, but we've seen some very promising ideas
   survive only one series. Britain also has a large percentage of
   viewers who would quickly switch to soaps, game shows, or - if they
   stretched themselves - Dallas et al. That does tend to kill shows that
   have promise but need to build.
   
   Things like Jonathan Creek, Morse and so forth are probably no more
   common on our screens than they are on yours. But you *are* right
   about beautiful people. Using 'ordinary' people has the downside of
   making the programmes look more ordinary to us, but more closely
   linked to reality as well. For viewers abroad, of course, there is
   always an exotic flavour as well - something the native (in any
   culture) usually misses.
   
   Happy viewing!
   
   Harry Drummond
     _________________________________________________________________
   
   Date: Fri, 13 Nov 1998 00:41:51 +0000
   From: "I.P. Robson", dragonfish@messages.to
   Subject: Link : Cheers..
   
   I just want to say that's a really sexy link at the top of the index
   page... and even I can't miss it now... Hopefully I'll never forget to
   download an issue now..
   
   And even though you already know I think you guys are the best, I have
   to tell you again....
   
   Thanks :)
   Pete Robson
     _________________________________________________________________
   
   Date: Mon, 30 Nov 1998 12:54:46 -0800
   From: Geoffrey Dann, gdann@nfesc.navy.mil
   Subject: Telnet vs Rlogin
   
   In issue 34, article "Securing Your Linux Box", the author mentions
   TELNET using the .rhosts file. In the few systems I've used (BSD4,
   SunOs, Solaris, Linux), "rlogin" uses the .rhosts file, but "telnet"
   does not.
   
   Other than that, great article! thanks..
   
   --
   Geoff
     _________________________________________________________________
   
             Published in Linux Gazette Issue 35, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
                              More 2 Cent Tips!
                                      
               Send Linux Tips and Tricks to gazette@ssc.com 
     _________________________________________________________________
   
  Contents:
  
     * NumLock - On at Startup 
     * Environment configuration using zsh 
     * XWindow servers for MS PCs 
     * Simultaneous color depths for X 
     * Netscape 
     * Hard Disk Duplication - Update 
     * Back Ups 
     * ANSWER: Re: suggestion for Linux security feature 
     * ANSWER: Re: How to add disk space to Red Hat 5.1? 
     * ANSWER: Re: Win95 peer-to-peer vs. Linux server running Samba 
     * ANSWER: Re: Single IP Address & Many Servers. Possible? 
     * ANSWER: Re: Help Modem+HP 
     * ANSWER: Re: Suggestion for Linux security features 
     _________________________________________________________________
   
  NumLock - On at Startup
  
   Date: Mon, 02 Nov 1998 09:37:58 -0500
   From: Brian Trapp, bmtrapp@acsu.buffalo.edu
   
   Here's a 2 cent tip for others trying to turn NumLock on at startup
   (I'm using Red Hat 5.1, Linux 2.0.34)
   
   Dennis van Dok wrote in to let us know there's a program called
   "setleds" that will handle this kind of activity. The "Linux FAQ"
   http://theory.uwinnipeg.ca/faqs/section7.html#q_7_10 has this to say
   about how to set this up automatically.
   
      Question 7.10. How do I get NUM LOCK to default to on?
      Use the setleds program, for example (in /etc/rc.local or one of
      the /etc/rc.d/* files):

      for t in 1 2 3 4 5 6 7 8
      do
        setleds +num < /dev/tty$t > /dev/null
      done

      setleds is part of the kbd package (see Q7.9 `How do I remap my
      keyboard to UK, French, etc.?').
      Alternatively, patch your kernel: you need to arrange for
      KBD_DEFLEDS to be defined to (1 << VC_NUMLOCK) when compiling
      drivers/char/keyboard.c.
     
   Steve Head also wrote in saying he thought there was a setting in the
   X11 configuration file to change this, but I haven't had a chance to
   try that yet.
   
   Again -- the Linux community comes through. Thanks to all who helped.
   
   Brian Trapp
     _________________________________________________________________
   
  Environment configuration using zsh
  
   Date: Wed, 04 Nov 1998 02:27:39 +0100
   From: Gerard Milmeister, gemi@bluewin.ch
   
   It may happen that I want to use a software package that includes
   lots of binaries, sometimes even hundreds of them, as is the case
   with BRLCAD. These packages live in their own directories, for
   example /usr/local/brlcad/bin, /usr/local/brlcad/lib, etc. I don't
   want to cp, mv or ln the binaries into a common place like
   /usr/local/bin, as they would clutter up these directories and, more
   importantly, name clashes can arise. Furthermore, these packages
   require environment variables to be set, and it would be cumbersome
   to configure them all in a personal .zshrc file.
   
   The following method using zsh may help to quickly set up an
   environment appropriate for the specific package.
   
   The idea is that calling a script, e.g. brlcad_setup, living in a
   common place will start a new shell instance that is properly set up.
   Using zsh it is possible to modularize the configuration, so that a
   general configuration tool can be built up.
   
   Example:
   In the directory /usr/local/brlcad I put the following shell script,
   linked into /usr/local/bin:
   
   brlcad_setup:
#!/bin/sh
export BRLCADHOME=/usr/local/brlcad # (*)
export PATH=$BRLCADHOME/bin:$PATH   # (*)
export MANPATH=$BRLCADHOME/man      # (*)
export ZDOTDIR=/usr/local/lib/zsh   # (**)
export PSNAME=brlcad                # (**)
exec zsh                            # (1) (**)

   In /usr/local/lib/zsh there is a replacement .zshenv file:
. $HOME/.zshrc
export PSLOCAL=$PSNAME:$PSLOCAL
PS1="[$PSLOCAL%n]:%~:$"

   This is called at (1) in place of the user's .zshenv and will set up
   the prompt, so that the user is able to see in what environment he
   works. The lines (*) are the customization for the particular package.
   The lines (**) can be used as a template for other configuration
   scripts, with PSNAME set to the name of the package. I have created
   scripts for gpm (Modula-2 compiler, name clash with the console mouse
   driver), brlcad and bmrt.
   
   Example session:
[gemi]:~:$brlcad_setup
[brlcad:gemi]:~:$bmrt_setup
[bmrt:brlcad:gemi]:~:$gpm_setup
[gpm:bmrt:brlcad:gemi]:~:$exit
[bmrt:brlcad:gemi]:~:$exit
[brlcad:gemi]:~:$exit
[gemi]:~:$

   At each level, the PATH configuration and other environment variables
   are available for the packages displayed in the prompt, and will
   disappear as soon as a shell is exited.
   
   --
   Gerard
     _________________________________________________________________
   
  XWindow servers for MS PCs
  
   Date: Fri, 6 Nov 1998 17:09:58 +1300
   From: Mark Inder, mark@tts.co.nz
   
   A while ago I inquired about X Window System servers for PCs so that
   I could run my Linux GUI on my PC for administration, etc. I got
   about 32 replies. Great support! I have summarized the replies here
   in case anybody else is interested. I tried MI/X and VNC. I found
   MI/X tricky and not very solid, and VNC amazingly flexible. Try
   viewing your own desktop from another PC while viewing that PC's
   desktop.
   
   Replies:
     * XAppeal from ftp://ftp.xtreme.it/pub/xappeal
     * There's a freeware X server at http://www.microimages.com/
     * $99 XwinPro, http://www.labf.com/
     * StarNet Communications Corporation, http://www.starnet.com/
     * Yahoo has a page with links to various X servers (the MI/X and
       StarNet ones are listed there also):
       http://www.yahoo.com/Computers_and_Internet/Software/Platforms/X_Window_System/
     * try the list at http://www.rahul.net/kenton/xsites.html#XMicrosoft
     * There are all kinds of shareware X servers for win32, take a look
       at http://www.winfiles.com for a listing. The best server you'll
       probably find is Hummingbird Software's eXceed.
     * Try looking for a product called X-Win32; it's not shareware but
       it is quite cheap (compared to eXceed and the like).
     * Try getting the demo X-Win32 from http://www.starnet.com/
     * Here you will find a lot of info about X:
       http://www.rahul.net/kenton/xsites.framed.html
     * Check http://www.starnet.com/ and poke "Product Demos"
       
   --
   Mark Inder
     _________________________________________________________________
   
  Simultaneous color depths for X
  
   Date: Tue, 10 Nov 1998 16:47:34 -0500
   From: Adam Williams, awillia1@chuma.cas.usf.edu
   
   With this technique you can run several X servers simultaneously at
   different color depths. This comes in handy for working with
   programs that only support certain color depths, or for previewing
   images at different depths, all without quitting the current session
   or so much as opening a control panel.
   
   Create a startx file for each bit depth, named startx8, startx16,
   and startx24. Make them executable (chmod +x startx8, etc.).
   
   In each startx file put the following, which is a slightly modified
   version of the default startx:

#!/bin/sh

bindir=/usr/X11R6/bin

userclientrc=$HOME/.xinitrc
userserverrc=$HOME/.xserverrc
sysclientrc=/usr/X11R6/lib/X11/xinit/xinitrc
sysserverrc=/usr/X11R6/lib/X11/xinit/xserverrc
clientargs=""
serverargs=""
display=:0
depth=8

if [ -f $userclientrc ]; then
    clientargs=$userclientrc
else if [ -f $sysclientrc ]; then
    clientargs=$sysclientrc
fi
fi

if [ -f $userserverrc ]; then
    serverargs=$userserverrc
else if [ -f $sysserverrc ]; then
    serverargs=$sysserverrc
fi
fi

whoseargs="client"
while [ "x$1" != "x" ]; do
    case "$1" in
        /''*|\.*)       if [ "$whoseargs" = "client" ]; then
                    clientargs="$1"
                else
                    serverargs="$1"
                fi ;;
        --)     whoseargs="server" ;;
        *)      if [ "$whoseargs" = "client" ]; then
                    clientargs="$clientargs $1"
                else
                    serverargs="$serverargs $1"
                    case "$1" in
                        :[0-9])  display="$1" ;;
                   esac
                fi ;;
    esac
    shift
done

serverargs="$serverargs $display -auth $HOME/.Xauthority -bpp $depth"
mcookie=`mcookie`
xauth add $display . $mcookie
xauth add `hostname -f`$display . $mcookie

echo "xinit $clientargs -- $serverargs"

exec xinit $clientargs -- $serverargs

   Change the display and depth variables to different numbers for every
   startx file.
   
   For example: For an 8 bit server set depth=8 and display=:0
   For a 16 bit server set depth=16 and display=:1
   For a 24 bit server set depth=24 and display=:2
   
   You can even have several startx files for the same bit depth as long
   as the display variables are different.
   
   Now you can start up an 8 bit server with startx8. Open an xterm and
   type startx16 to get a 16 bit server without quitting the 8 bit
   server. You can switch between the running servers with Ctrl-Alt
   plus a function key.
     _________________________________________________________________
   
  Netscape
  
   Date: Tue, 10 Nov 1998 08:25:13 -0600
   From: Jim Kaufman, hsijmk@harmonic.com
   
   You recently published the following tip:
   
     Nevertheless, Netscape seems to create a directory nsmail in the
     user's home directory every time it starts and doesn't find it,
     even if mail is not used. This is annoying. Here's a trick which
     doesn't make this directory go away, but at least makes it
     invisible.
     
     I didn't find a GUI equivalent to change this setting so you have
     to do the following:
     Edit the file ~/.netscape/preferences.js and change all occurrences
     of 'nsmail' to '.netscape'. The important thing here is, of course,
     the leading dot before 'netscape'.
     
   My recommendation is to edit ~/.netscape/preferences.js and change
   the occurrences of 'nsmail' to '~/Mail'. That way, Netscape can
   display mail if I choose, or I can use another mail reader (elm,
   mutt, pine, etc.) configured to use the same directory.
   
   --
   James M. Kaufman
     _________________________________________________________________
   
  Hard Disk Duplication - Update
  
   Date: Mon, 9 Nov 1998 23:41:06 -0800
   From: Michael Jablecki, mcjablec@ucsd.edu
   
   The Ingot program did not work well for me. PowerQuest has, IMHO, a
   superior product for less money -- Drive Image. Good stuff!
   http://www.powerquest.com
   
   --
   Michael
     _________________________________________________________________
   
  Back Ups
  
   Date: Sun, 25 Oct 1998 03:46:10 +0000
   From: Anthony Baldwin, ab@spkypc.demon.co.uk
   
   Here's my two cent tip:
   Disk space is relatively cheap, so why not buy a small drive, say
   500MB, used for holding just the root /lib /bin /sbin directories.
   Then set up a job to automatically back this up to another drive
   using "cp -ax" (or archive it with tar piped through gzip). This
   way, when the unthinkable happens and you lose something vital, all
   you have to do is boot from floppy, mount the two drives and do a
   copy. This has just saved my bacon while installing gnu-libc2.
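   The job described above can be sketched as a small script; the
   function name and the example paths are illustrative assumptions,
   not part of the original tip:

```shell
#!/bin/sh
# backup_sys: copy the given directories onto a backup area with
# "cp -ax" (-a preserves permissions, links and timestamps; -x stays
# on one filesystem). The name backup_sys and the example paths below
# are assumptions -- adjust to your own layout.
backup_sys () {
    dest=$1
    shift
    cp -ax "$@" "$dest"
}

# e.g. run nightly from cron:
#   backup_sys /mnt/backup /lib /bin /sbin
```

   Pointing it at a second drive mounted under /mnt/backup matches the
   tip's "copy back after booting from floppy" recovery plan.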
   
   --
   Anthony Baldwin
     _________________________________________________________________
   
    Tips in the following section are answers to questions printed in the Mail
    Bag column of previous issues.
     _________________________________________________________________
   
  ANSWER: Re: suggestion for Linux security feature
  
   Date: Sun, 01 Nov 1998 01:10:10 -0700
   From: Warren Young, tangent@cyberport.com
   
   In regards to a letter you wrote to the Linux Gazette:
   
     A. only that user could access their own cache, cookies, pointer
     files, etc.
     
   I will first assume that you already have the computer basically
   secured: you are not logging in as "root" except to maintain the
   system, and the "regular user" account you are using does not have
   permission to write files to any other area of the hard disc than your
   own home directory. (I will ignore the "temporary" and other "public"
   directories.)
   
   The first step is to set the security permissions on your home
   directory and its subdirectories. I won't go into the details here
   (that's best left to a good introductory Linux text), but you can have
   the system disallow other users from reading and/or listing the
   contents of your directories, as well as disallowing write access.
   (Under Red Hat Linux 5.0, the default is to disallow others _all_
   access to your home directory, but subdirectories you later create
   aren't protected in this way.) Do the same for your existing files.
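   As a concrete sketch of the above, shown on a throwaway directory
   (the name demo_home is illustrative; run the same chmod against
   "$HOME" to protect your actual home directory):

```shell
#!/bin/sh
# Demonstrate the permission change described above on a throwaway
# directory. "go-rwx" strips read, write and execute/list access for
# group and other, recursively, leaving owner access untouched.
mkdir -p demo_home/sub
touch demo_home/sub/secret.txt
chmod -R go-rwx demo_home
ls -ld demo_home     # now drwx------
```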
   
   Next, learn to use the "umask" command. (This command is part of your
   shell -- find out what your "login shell" is, and then read its manual
   to find info about this command.) The umask command sets the "default
   file permissions" for new files. For example, you can make the system
   create new files and directories such that only you can read them or
   write to them.
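   For instance (the filename here is illustrative):

```shell
#!/bin/sh
# umask names the permission bits to *remove* from newly created
# files. 077 masks all group/other bits, so new files come out
# readable and writable by the owner only.
umask 077
touch scratch.txt     # illustrative filename
ls -l scratch.txt     # -rw------- ...
```

   Put the umask line in your shell's startup file to make it the
   default for every session.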
   
   One other thing you should look into is an encrypting file system
   driver. I seem to recall hearing of such a thing for Linux, but I
   can't recall any details.
   
     I do not know how deleted files could be safeguarded in this way
     
   It's possible to patch the OS so that the "unlink()" system call
   always overwrites the file with zeros or something before it removes
   the file from the file system. That would make the system run slower
   at times, but that might be a worthwhile tradeoff for you. That should
   be a fairly easy change to make to the kernel, given that the source
   code is available. If you don't know how to do this and are unwilling
   to learn, try asking on the Net for someone to do this for you. You
   can probably find someone who's willing just because it's an
   interesting thing to do.
   
     B. these files - the whole lot of them - could be scrubbed, wiped,
     obliterated (that's why it's important for them to be in a known
     and findable place) by their owner, without impairing the function
     of the applications or the system, and without disturbing similar
     such files for other users.
     
   You list as criteria (to paraphrase) "without disturbing the system
   for others", so the kernel idea above wouldn't work. Instead, you
   would probably want a utility to do the same thing as the kernel idea:
   overwrite the file (perhaps multiple times) with junk, and then remove
   it. This, again, shouldn't be too hard to write, and I wouldn't be
   surprised if it already exists; such things exist for most other
   operating systems. You could even make it a fancy
   drag-and-drop X Windows application so you just drag files to it like
   a Mac/Win95 "trash can" and it securely deletes the file.
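   A minimal sketch of such a utility in shell (the name wipefile and
   the single zero pass are assumptions of this sketch; a serious tool
   would make several passes of random data):

```shell
#!/bin/sh
# wipefile: overwrite each named file in place before unlinking it,
# as suggested above. One pass of zeros only -- illustrative, not a
# substitute for a proper secure-delete tool.
wipefile () {
    for f in "$@"
    do
        # file size in bytes (tr strips any padding some wc's emit)
        size=`wc -c < "$f" | tr -d ' '`
        # overwrite in place; conv=notrunc keeps the same blocks
        dd if=/dev/zero of="$f" bs=1 count="$size" conv=notrunc \
            2>/dev/null
        rm -f "$f"
    done
}

# usage:  wipefile secrets.txt
```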
   
     C. it would be nice too if there were a way to prevent the copying
     of certain files, and that would include copying by backup programs
     (for example, I'm a Mac user and we use Retrospect to back up some
     of our Macintoshes; there's a feature to suppress the backing up of
     a particular directory by having a special character (a "bullet",
     or optn-8) at the beginning or end of the directory name.) But if
     this could be an OS-level feature, it would be stronger.
     
   This sort of feature does not belong in the operating system because
   "backup" is not part of the operating system, it's an add-on. The
   reason that it's an add-on is because you want to allow each
   individual to choose their own backup solution based on their own
   needs, desires and preferences. I may want to use the BRU backup
   program, while another might prefer "afio", and a third person may be
   a raving "tar" fan.
   
   The point is, it's not part of the OS, so several different backup
   programs have emerged, each with a different style and feature list.
   The price of this freedom and flexibility is that a feature like
   "don't back this file up" is something that each program will
   implement differently. It can't be part of the OS under this model,
   and I don't think we want to change this.
   
     If I'm user X, and I want to get rid of my computer, or get rid of
     everything that's mine on the computer, I should just be able to
     delete all of my data files (and burn them or wipe them or
     otherwise overwrite that area of the disk), which I can surely do
     today. But in addition, I should know where to go to do the same
     thing with whatever system level files might be out there,
     currently unbeknownst to me, and be able to expunge them also,
     without affecting anything for anyone else.
     
   The safest method is to erase the hard disk with a "government level
   wipe" program. Many of these exist for DOS -- you can create a DOS
   disk for the sole purpose of booting up and wiping your system. Then,
   install a fresh copy of the OS. This is the only way you can be sure
   that everything sensitive is off of the machine.
   
   The only other option is for you to learn where all of the "individual
   configuration" files are kept -- that is, those files that make your
   setup unique. Following the security suggestions above can help,
   because then applications can't store something where you can't find
   it -- the OS won't let it, and thus everything is either under your
   home directory, or somewhere you put it as "root". But, you may miss a
   file, so the "wipe the HD" is the only foolproof method.
   
   Good luck,
   Warren -- http://www.cyberport.com/~tangent/
     _________________________________________________________________
   
  ANSWER: Re: How to add disk space to Red Hat 5.1?
  
   Date: Wed, 4 Nov 1998 20:43:35 -0800 (PST)
   From: R Garth Wood, rgwood@peace.netnation.com

0 init 1
1 mount your drive on /mnt **(see below)
2 cp -dpR /usr /mnt
3 umount /mnt
4 mount your drive on /usr
5 init 2
6 rejoice

   ** recompile your kernel. make sure you have the options needed in the
   HOWTO: http://sunsite.unc.edu/pub/Linux/docs/HOWTO/mini/ZIP-Drive
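   To make the mount in step 4 survive a reboot, an /etc/fstab entry
   along these lines is also needed (the device name /dev/hdb1 is an
   assumption; use whatever your new drive actually is):

```
# /etc/fstab fragment -- device name is an assumption
/dev/hdb1   /usr   ext2   defaults   1 2
```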
   
   --
   R Garth Wood
     _________________________________________________________________
   
  ANSWER: Re: Win95 peer-to-peer vs. Linux server running Samba
  
   Date: Wed, 4 Nov 1998 20:36:15 -0800 (PST)
   From: R Garth Wood, rgwood@peace.netnation.com
   
   The advantages are:
     * It won't go down.
     * You don't have to use a good machine.
     * You can print from UNIX as well.
     * You can do other things on it as well.
       
   --
   R Garth Wood
     _________________________________________________________________
   
  ANSWER: Re: Single IP Address & Many Servers. Possible?
  
   Date: Wed, 4 Nov 1998 20:27:50 -0800 (PST)
   From: R Garth Wood, rgwood@peace.netnation.com
   
   Look into the programs "redir" and "rinetd".
   
   --
   R Garth Wood
     _________________________________________________________________
   
  ANSWER: Re: Help Modem+HP
  
   Date: Fri, 20 Nov 1998 03:24:36 -0800
   From: "David P. Pritzkau", pritzkau@leland.Stanford.EDU
   
     In issue 33 of the Linux Gazette you wrote:
     I have already spent hours trying to fix my Supra336 PnP internal
     modem and my HP DeskJet 720C under Linux! The result is always the
     same, no communication with the modem and no page printed on the HP
     printer! Could someone help me, I am close to abandon!
     
   I've had the same problem with the HP 820 printer. It turns out that
   the '20 series printers use a protocol called PPA, unlike the PCL
   protocols that HP uses for its other printers; basically, Windows
   does the printer's work in host software. Fortunately there's
   somebody out there who was able to figure out some of that protocol
   (since HP isn't releasing any info). This person created a PPA to
   PBM converter to
   allow printing under Linux. Right now you can only print in black and
   white, but that's better than nothing. If you are shopping for a
   printer and plan to use Linux, you should avoid the '20 series HP
   printers like the plague. Here's the URL where you can find more
   info about the converter and download it. It comes with sample
   scripts to set up the printing. Keep in mind that you must change
   the 'enscript' command in the scripts to 'nenscript', because
   enscript is a commercial program. Also take out the '-r' switch,
   since 'nenscript' doesn't support it. Hope this helps.
   
   http://www.rpi.edu/~normat/technical/ppa/index.html
   
   --
   David P. Pritzkau
     _________________________________________________________________
   
  ANSWER: Re: Suggestion for Linux security features
  
   Date: Fri, 13 Nov 1998 11:17:18 +0100
   From: Roger Irwin, irwin@mail.com
   
   Linux already does most of what you ask for (for example, Netscape's
   cache and cookie files are kept in a .netscape directory in your
   home directory that cannot be accessed by other users).
   
   As for delete, this can easily be done by a user-space program that
   opens the file for random access and writes x's everywhere before
   deleting it. I have seen such utilities around for virtually all
   platforms (as it only requires ANSI C calls, you could easily write
   a command that compiles on any platform). It is slow, and could be
   somewhat improved by being done in kernel space. If you want to try,
   I suggest that you start by reading Alessandro Rubini's book "Linux
   Device Drivers". This will give you an easy and gentle introduction
   to programming in kernel space. Once you have got the hang of that,
   you should read through the documentation for the ext2 filesystem.
   Then implement a simple draft version. Once you have it working,
   post it to the Linux kernel development mailing list, and the kernel
   hackers will guide you from there.
   
   DO NOT approach the kernel list with ideas you are thinking about
   doing. It is not that they are unresponsive, but there are a lot of
   Linux users and with a lot of ideas, they could easily be submerged.
   In order to avoid time wasters, they are forced to adopt a 'first show
   me the code' attitude. This is not a bad thing as when one starts to
   actually implement something (rather than dream about it) you begin to
   realize WHY it has not yet been done.
   
   Once you actually have something, even a first draft that only vaguely
   works, you will find kernel developers very responsive and helpful.
   
   --
   Roger
     _________________________________________________________________
   
             Published in Linux Gazette Issue 35, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
      
   News Bytes
   
  Contents:
  
     * News in General
     * Software Announcements
     _________________________________________________________________
   
                              News in General
     _________________________________________________________________
   
  January 1999 Linux Journal
  
   The January issue of Linux Journal will be hitting the newsstands
   December 10. This issue focuses on our Readers' and Editors' Choice
   awards. Included with the magazine this month is a 24-page supplement
   on Enterprise Solutions in which we interview Netscape's Jim
   Barksdale, Corel's Michael Cowpland and IBM's Paraic Sweeney. Check
   out the Table of Contents at
   http://www.linuxjournal.com/issue57/index.html. To subscribe to Linux
   Journal, go to http://www.linuxjournal.com/ljsubsorder.html.
     _________________________________________________________________
   
  LinuxWorld Conference & Expo
  
   Boston, MA (September 30, 1998) -- International Data Group (IDG), the
   IT media and information company, today unveiled plans to launch a
   global product line of events and publications to address the needs of
   the rapidly growing Linux market. IDG World Expo, the world's leading
   producer of IT-focused conferences and expositions, will produce
   LinuxWorld Conference & Expo, the first international exposition
   addressing the business and technology issues of the Linux operating
   environment. IDG's Web Publishing unit, one of the first online-only
   IT publishers, will launch LinuxWorld, an online-only magazine for the
   more than seven million technologists requiring in-depth information
   on implementing Linux and related technologies in diverse
   environments.
   
   The first LinuxWorld Conference and Expo will be held March 1-4, 1999
   at the San Jose Convention Center.
   
   For more information:
   http://www.idg.com/
     _________________________________________________________________
   
  Sun loans Ultra30 Systems to Debian
  
   Date: Fri, 06 Nov 1998 14:29:56 -0500
   Sun Microsystems (http://sun.com/) has loaned three UltraSPARC
   systems to the Debian project. They are 64-bit Ultra30 workstations,
   each with an UltraSPARC-II/250MHz CPU (1MB E-cache), 128MB RAM, a
   4.3GB Seagate
   SCSI drive and a Creator graphics card. One system is installed at
   Kachina Technologies, Inc. and will be publicly available to Debian
   developers. The other two systems are used by developers to develop
   boot related packages and other low level tools.
   
   There is a port specific web page that contains information on the
   work in progress at http://www.debian.org/ports/sparc64/. People
   interested in helping with the Debian UltraLinux effort should check
   there for the current port status.
   
   For more information:
   Debian GNU/Linux, http://www.debian.org/, press@debian.org
     _________________________________________________________________
   
  Dallas/Ft. Worth area Linux show
  
   Date: Wed, 11 Nov 1998 05:16:53 -0600
   There is an online survey at http://linux.uhw.com/ to gauge the
   needs and wants for a DFW-area Linux show. We want to find out what
   prospective attendees want in the show before we do the hard-core
   planning.
   
   Pass the word please to those who may want to go.
   
   For more information:
   Dave Stokes, david@uhw.com
     _________________________________________________________________
   
  O'Reilly Announces Open Source Conferences
  
   SEBASTOPOL, CA--O'Reilly & Associates announced today that it is
   expanding its support of Open Source software by presenting the
   O'Reilly Open Source Conferences--Perl Conference 3.0 plus several new
   technical Conferences on mission-critical Open Source software--on
   August 21-24, 1999 at the Monterey Convention Center in Monterey, CA.
   
   For the first time, programmers, webmasters, and system administrators
   can find--under one roof--a spectrum of high-end technical sessions,
   presented by the key developers in each technology. In real-world
   applications, users draw on several Open Source technologies to get
   the job done. At the O'Reilly Conferences on Perl, Linux, FreeBSD,
   Apache, Sendmail and other Open Source technologies, attendees can
   move freely between Conferences, choosing from a rich panoply of
   sessions on these interrelated technologies. In addition, each
   Conference is preceded by in-depth tutorials.
   
   Linux Journal is a major sponsor of O'Reilly's Linux Conference.
   Publisher Phil Hughes said, "Since the early days, O'Reilly has been
   documenting Linux and the Open Source utilities that Linux users
   depend on. They're very close to the technical community, and they'll
   bring that inside perspective to their Linux Conference. We're looking
   forward to working with them."
   
   For more information:
   http://conferences.oreilly.com
     _________________________________________________________________
   
  New Mailing List for Linux in Education
  
   Date: Fri, 13 Nov 1998 10:17:01 -0500
   The SEUL Project (http://www.seul.org/) has started a mailing list,
   seul-edu, to cover all aspects of educational uses of Linux. In
   addition to the discussion, resources are available that should enable
   the development (with the help of interested volunteers) of various
   open source software that can make Linux more desirable to educators
   and parents interested in using Linux for their children's education.
   Currently the list is made up of educators, writers, and some
   developers.
   
   You can see the archives of the mailing list, as well as current plans
   and contacts for the project, at
   http://www.seul.org/archives/seul/edu/. Before the creation of
   seul-edu, the discussion took place on the seul-pub mailing list; you
   can see those discussions in the October and November archives of that
   list at http://www.seul.org/archives/seul/pub/.
   
   To subscribe to seul-edu, just send a message to majordomo@seul.org
   with no subject and with "subscribe seul-edu" in the message body.
     _________________________________________________________________
   
  Linux Boot Camp Announcement
  
   Date: Sat, 7 Nov 1998 10:08:19 -0800
   Four days of intensive hands-on technical training. Certification is
   provided for the full boot camp.
   
   Schedule:
   *Understanding & Administering Linux*
   January 12-13, 1999 San Jose, CA
   January 26-27, 1999 Carlsbad, CA
   February 1-2, 1999 Raleigh, NC
   February 22-23, 1999 Chicago, IL
   March 29-30, 1999 Dallas, TX
   April 20-21, 1999 Phoenix, AZ
   May 18-19, 1999 Atlanta, GA
   June 15-16, 1999 Washington, DC
   June 22-23, 1999 Carlsbad, CA
   
   *Integrating Linux with Windows 95/98/NT*
   January 14, 1999 San Jose, CA
   January 28, 1999 Carlsbad, CA
   February 3, 1999 Raleigh, NC
   February 24, 1999 Chicago, IL
   March 31, 1999 Dallas, TX
   April 22, 1999 Phoenix, AZ
   May 20, 1999 Atlanta, GA
   June 17, 1999 Washington DC
   June 24, 1999 Carlsbad, CA
   
   *Securing your Box in One Day*
   January 15, 1999 San Jose, CA
   January 29, 1999 Carlsbad, CA
   February 4, 1999 Raleigh, NC
   February 25, 1999 Chicago, IL
   April 1, 1999 Dallas, TX
   April 23, 1999 Phoenix, AZ
   May 21, 1999 Atlanta, GA
   June 18, 1999 Washington DC
   June 25, 1999 Carlsbad, CA
   
   For more information:
   Deb Murray, dmurray@surfnetusa.com
   http://www.uniforum.org/web/education/bootcamp.html
     _________________________________________________________________
   
  Linux wins PC World Denmark award for "Innovation of the Year - Software"
  
   Date: Mon, 16 Nov 1998 08:59:20 GMT
   Linux wins PC World Denmark award for "Innovation of the Year -
   Software" Copenhagen, 1998-11-12
   
   On behalf of the entire Linux community, the Skåne Sjælland Linux
   User Group (SSLUG) today received the PC World Denmark 'Product of
   the Year' award in the category "Innovation of the Year - Software".
   The award was accepted by Peter Toft and Henrik Størner from SSLUG.
   
   The "Innovation of the Year" award is given to products or
   technologies which have shown significant innovativeness and impact
   through out the year. PC World editors motivated their choice thus:
   
   "Linux. The 'Ugly Duckling' that turned into a beautiful swan and
   became - to put it briefly - the most widely used operating system for
   Internet servers world wide, despite the marketing muscle of the
   larger companies. NT has a tremendous hold on the market, but Linux is
   gaining new followers every day, and continues to find new uses
   wherever a stable, economical and versatile operating system is
   needed."
   
   The other two nominees in the "Innovation of the Year - Software"
   category were
   * Microsoft Windows 98
   * Mirabilis ICQ
   
   For more information: Peter Toft, ptoft@sslug.dk
   Henrik Størner, storner@sslug.dk
     _________________________________________________________________
   
  ISVs -- please join the LSB effort
  
   With the Linux standardization well on its way with the Linux Standard
   Base (LSB) effort headed by Daniel Quinlan, various vendors brought up
   the issue that there needed to be a way for independent software
   vendors to get their input into the standards effort. After some
   discussion, it was decided to add a mailing list to the LSB effort
   that was specifically for ISVs.
   
   This list will make it possible for ISVs to hash out what they see
   needed in the Linux standard and then present their joint effort to
   the LSB group for consideration. This approach will make it easier for
   LSB to meet the needs of all the vendors.
   
   If you are an ISV and want to join the list, send your e-mail address
   to Clarica Grove (clarica@ssc.com) with ISV in the subject line of
   your message. She will add you to the list and we can get our part of
   the effort underway. If we are unlikely to be familiar with the
   product you have developed, please include a brief description.
   
   For more information:
   Phil Hughes, phil@ssc.com
     _________________________________________________________________
   
  Linux Game Development Center
  
   Date: Mon, 16 Nov 1998 09:39:14 GMT
   Linux has games. Linux has good games. But that other operating system
   has several orders of magnitude more good games than Linux. That's
   bad. And difficult to overcome, as it's not only for technical
   reasons. But we, the free software community, have a long history of
   solving problems and working around obstacles. There is no reason
   why we should not be able to solve this issue, too.
   
   In essence we are suggesting that this new Linux Game Development
   Center be a kind of meta-project. It would be dedicated to advocating
   Linux as gaming platform, collecting knowledge about Linux game
   development and using it to help all interested people, providing
   facilities for discussion to Linux game developers and, last but not
   least, encouraging and helping existing free (Open Source) game SDK
   projects coordinate with one another.
   
   This is also a call for developers, users and game SDK projects to
   join our efforts.
   
   While game development for Linux would be an important goal of the web
   site, the most important goal would be the development of quality
   cross-platform game libraries. For that reason, developers of games
   and game SDKs for platforms other than Linux would be more than
   welcome to join us. Especially if they are interested in porting
   software to or from Linux.
   
   The biggest problem with having multiple, competing projects is the
   resultant (developer and user) confusion. What we are proposing is a
   Linux Game Development Center that is aimed simply at reducing that
   confusion by helping people to find, evaluate, combine and use the
   available tools, or to develop new, missing ones.
   
   http://www.linuxgames.org
   
   For more information:
   Christian Reiniger, warewolf@mayn.de
   PenguinPlay, http://sunsite.auc.dk/penguinplay/
     _________________________________________________________________
   
  New benefit for LJ's GLUE LUGs--Tcl Blast!
  
   Date: Mon, 23 Nov 1998 21:32:13 GMT
   GLUE--Groups of Linux Users Everywhere--announces the newest benefit
   for groups that join.
   
   By popular demand, and in conjunction with the Tcl/Tk Consortium,
   SSC and Linux Journal's GLUE program is making available the Tcl
   Blast! CD-ROM.
   
   This is the latest addition to the membership package GLUE sends out
   to our new LUG members. Some of the other benefits include the BRU
   2000 backup and restore utility and Caldera OpenLinux Lite!
   
   We provide free listings for all LUGs at our web site, where you can
   also: see the complete list of the GLUE benefits; find information and
   resources for Linux User Groups; check to see if there is a LUG in
   your area; post to the Users Seeking Groups part of the listings
   pages; or check to see that there is an accurate listing for your LUG.
   
   Please contact me if you have any questions.
   
   For more information:
   Clarica Grove, Groups of Linux Users Everywhere, glue@ssc.com,
   http://www.ssc.com/glue/
     _________________________________________________________________
   
  Linux Promoted in Albanian Fair
  
   Date: Sat, 28 Nov 1998 14:13:20 GMT
   Yesterday the KlikExpo international fair opened in Tirana, Albania.
   With the help of Fastech Ltd., Linux.org Albania was able to promote
   Linux for the first time in Albania. Our stand was also visited by
   the Albanian Prime Minister. I had a brief chat with him and briefly
   described the power and efficiency of Linux. Our stand will be open
   for the next four days at the Tirana international fair center.
   
   Special thanks to Fastech Ltd., who made an Acer PII 300MHz machine
   available to us and hosted us at their stand.
   
   To read more and see the pictures, please check:
   http://lowrent.org/lnxorgal/klikexpo98
   
   For more information:
   Kledi Andoni, kledi@linux.org.al
     _________________________________________________________________
   
  Open Source Trademark?
  
   I, for one, am confused. See if you can figure out what's going on
   with these two announcements:
   Future of the Open Source Trademark
   Launch Announcement of the Open Source Initiative
     _________________________________________________________________
   
  Linux Links
  
   StarOffice 5.0 Personal Edition Free: http://www.stardivision.com
   
   "An Open Letter to AOL" from Eric Raymond:
   http://www.opensource.org/aol-letter.html
   
   UNIX help: http://www.allexperts.com/software/unix.shtml
   
   Linux Ace: http://tarp.linuxos.org/linux/
   
   Informix+Linux article:
   http://news.freshmeat.net/readmore?f=informix-jj
   
   "Liberty and Linux for All":
   http://www.theatlantic.com/unbound/citation/wc981021.htm
   
   Tim O'Reilly, "Open Letter to Microsoft":
   http://oreilly.com/oreilly/press/tim_msletter.html
   
   Eiffel Liberty: http://www.elj.com/
   
   Linux Tips & Tricks: http://www.patoche.org/LTT/
   
   Gary's Place Linux Guide: http://gary.singleton.net/
   
   Official GNUstep Web Site: http://home.sprintmail.com/~mhanni/gnustep/
   
   Blender Site: http://www.BeLUG.org/user/cw/blender_e.html
   
   Eric Kahler's FVWM Web Page: http://mars.superlink.net/eric/fvwm.html
   
   The Linux Game Tome: http://gametome.linuxquake.com/
   
   OBSIDIAN, an open source 3D virtual world for Linux and OpenGL:
   http://www.zog.net.au/computers/obsidian/
   
   Linux Today: http://www.linuxtoday.com/
   
   NewsNow: http://www.newsnow.co.uk
   
   Linux Help Page: http://www.ont.com/users/d4588/
   
   Linux Sound and MIDI Applications Page: http://sound.condorow.net/
   http://sound.lovebead.com/
   http://www.bright.net/~dlphilp/linux_soundapps.html
   
   MICO Home Page: http://www.mico.org/
   
   Management Guide to Shifting Standards Tactics:
   http://www.geocities.com/SiliconValley/Hills/9267/sstactics.html
   
   Red Hat Press Release
     _________________________________________________________________
   
                           Software Announcements
     _________________________________________________________________
   
  KDE on Corel's Netwinder
  
   Ottawa, Canada--November 25, 1998
   Corel Computer and the KDE project today announced a technology
   relationship that will bring the K Desktop Environment (KDE), a
   sophisticated graphical user environment for Linux and UNIX, to future
   desktop versions of the NetWinder family of Linux-based thin-clients
   and thin-servers. A graphical user interface is a necessary element
   for Corel Computer to create a family of highly reliable, easy-to-use,
   easy-to-manage desktop computers. The alliance between Corel Computer
   and KDE, a non-commercial association of Open Source programmers,
   provides NetWinder users a sophisticated front-end to Linux, a stable
   and robust Unix-like operating system.
   
   Corel Computer has shipped a number of NetWinder DM, or development
   machines, to KDE developers who are helping to port the desktop
   environment. Corel Computer plans to announce the availability of
   desktop versions of the NetWinder running KDE beginning in early 1999.
   Early demonstrations of the port, such as the one shown at the Open
   Systems fair in Wiesbaden, Germany, in September, have been
   enthusiastically received by potential customers.
   
   As a developing partner, Corel Computer will release its work back to
   the KDE development community.
   
   For more information:
   http://www.kde.org/, press@kde.org
   http://www.corelcomputer.com/
     _________________________________________________________________
   
  KEYTEC Announces Expanded Magic Touch Screen Capabilities
  
   Date: Tue, 24 Nov 1998 15:38:33 EST
   Dallas, TX -- KEYTEC announced today that the Magic Touch touch
   screen system will soon be Linux compatible. Screen users will be
   able to operate the Magic Touch touch screen in hardware
   configurations running the Linux operating system, gaining the
   advantages of smaller file sizes, lower memory requirements and
   faster data access.
   
   For more information:
   news@magictouch.com, http://www.magictouch.com/
     _________________________________________________________________
   
  Red Hat Software releases Red Hat Linux 5.2
  
   Research Triangle Park, NC -- November 2, 1998 -- Simplified
   installation, Native Software RAID support, Apache 1.3, GIMP 1.0,
   and the Application CD are among the features that mark Red Hat
   Software's November 9 release of Red Hat Linux 5.2.
   
   A feature of Red Hat Linux 5.2's new and improved installation is the
   ability to automatically partition the hard drive by selecting either
   a workstation or server install. All of the power of the Red Hat
   Linux OS is still available via the "custom" install. Back buttons,
   DHCP, boot floppy creation, enhanced rescue mode and countless other
   tools that made 5.1 a success are all still there.
   
   For more information:
   http://www.redhat.com/
     _________________________________________________________________
   
  CORBA on LINUX Gains Momentum
   
   Date: Tue, 03 Nov 1998 16:24:32 -0500
   Framingham, MA - Programmers and end-users can now obtain
   implementations of the Object Management Group's (OMG's) Common Object
   Request Broker Architecture (CORBA) for Linux. As the momentum has
   grown behind the open source Linux operating system, more and more OMG
   members have requested this support. The emergence of CORBA-conformant
   ORBs for Linux is an indicator of the commercial confidence and
   industry support for both CORBA and Linux.
   
   At Washington University, the development of the TAO ORB is being
   sponsored by companies and organizations including Boeing, Lucent and
   Motorola which recognize the value of open source models and can
   recognize the future commercial value of such ORBs.
   
   For more information:
   info@omg.org, http://www.omg.org/
     _________________________________________________________________
   
  Debian GNU/Linux 2.1 'Slink' Frozen
   
   Date: Wed, 04 Nov 1998 14:56:52 -0500
   Debian GNU/Linux 2.1 'Slink' is now in a frozen state. The delay was
   due to the need to stabilize some key packages in the distribution.
   The release of Slink is scheduled for December 1998.
   
   Debian GNU/Linux 2.2 'Potato' will be the next version of the Debian
   distribution. The name is taken from the character 'Mr. Potato Head'
   in the animated movie 'Toy Story.'
   
   For more information:
   press@debian.org, http://www.debian.org/
     _________________________________________________________________
   
  Servertec Announces New Release of iScript, a Platform Independent Scripting
  Language Written Entirely in Java
  
   Kearny, NJ. - November 5, 1998 - Servertec today announced the
   availability of a new release of iScript, a platform independent
   scripting language written entirely in JavaTM for creating scalable
   server side object oriented n-Tier enterprise solutions.
   
   The new release includes iScriptServlet, a Java Servlet, for directly
   accessing iScript from any web server supporting Java Servlets. The
   release also includes bug fixes and updated documentation.
   
   For more information:
   Manuel J. Goyenechea, goya@servertec.com, http://www.servertec.com/
     _________________________________________________________________
   
  QLM Reduces Product Development Time By 1/3
  
   Newton, Mass., November 18, 1998 Kalman Saffran Associates, Inc.
   (KSA), a leading developer of state-of-the-art products for data
   communications, telecommunications and interactive/CATV industries,
   today announced the availability of their new QLM, an innovative
   process for companies looking to reduce product time-to-market in a
   highly competitive marketplace. Using QLM, KSA guarantees that
   companies will reduce their product development cycles by at least
   one-third.
   
   Based on a scientific methodology derived from practices implemented
   for such industry leading companies as Hewlett-Packard, Cisco Systems
   and Nortel, KSA's QLM combines a comprehensive set of processes,
   techniques, tools and templates together with a knowledge base to
   produce optimal results for companies in a broad set of industries.
   
   The QLM offering is available starting at $20,000. For more
   information:
   Joe Bisaccio, VP Marketing, Bisaccio@worldnet.att.net,
   http://www.ksa-mkt.com/
     _________________________________________________________________
   
  LINUX INCLUDED IN PLANETUPLINK EXPANSION
  
   11/11/98 PRESS RELEASE:
   Planet Computer nationally unveiled their newest business solution,
   PlanetUplink on Oct. 30th. PlanetUplink IBN (Internet Based Network)
   allows businesses to gain access to and share virtually any
   application or database simultaneously (real time) on almost any
   computer from their remote and multiple offices, globally, via the
   Internet.
   
   This week, Planet Computer announced the expansion of PlanetUplink to
   support Linux (server and client), in addition to the currently
   supported Macintosh, OS/2, UNIX (Solaris/Sparc, Solaris/x86, SGI, IBM,
   SCO, HP/UX, DEC, SunOS), Windows (Win95, NT, Windows CE, Win3.x), DOS
   and Java.
   
   For more information:
   Mary A. Carson, Planet Computer, mary@planetuplink.com,
   http://www.Planet-computer.com/
     _________________________________________________________________
   
  mcl 0.42.05 - MUD Client for Linux
  
   Date: Mon, 16 Nov 1998 08:50:18 GMT
   mcl is a small MUD client that runs under a virtual console in
   Linux. It uses direct VC access via the /dev/vcsa devices, spending
   very little CPU time (compared to tintin). This, however, means it
   can be run only under Linux and only on a virtual console.
   
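   The /dev/vcsa devices expose a console's screen memory directly: a
   four-byte header (rows, columns, cursor x, cursor y) followed by one
   character/attribute byte pair per cell. As a minimal sketch of that
   layout (parsing a synthetic buffer here, since reading /dev/vcsa0
   itself requires a real virtual console and suitable permissions):

```python
# Sketch: parse a /dev/vcsa-style screen dump.  Layout: 4 header bytes
# (rows, cols, cursor x, cursor y), then rows*cols pairs of
# (character byte, attribute byte).

def parse_vcsa(buf):
    rows, cols, cur_x, cur_y = buf[0], buf[1], buf[2], buf[3]
    cells = buf[4:4 + rows * cols * 2]
    screen = []
    for r in range(rows):
        # Even offsets hold the character bytes, odd offsets the attributes.
        line = bytes(cells[(r * cols + c) * 2] for c in range(cols))
        screen.append(line.decode("cp437"))
    return (rows, cols, cur_x, cur_y), screen

# Synthetic 2x5 "screen" instead of reading /dev/vcsa0 directly.
header = bytes([2, 5, 0, 0])
body = b"".join(bytes([ch, 0x07]) for ch in b"HELLOworld")
dims, screen = parse_vcsa(header + body)
print(dims)     # (2, 5, 0, 0)
print(screen)   # ['HELLO', 'world']
```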
   New in version 0.42.05 are a number of bug fixes (actions not
   saving, speedwalk acting incorrectly in some situations, and more)
   as well as support for compressing the connection (using zlib). The
   latter is currently only supported by Abandoned Reality
   (abandoned.org 4444), but we hope to have the server-side code for
   any MERC-derived MUD available soon.
   
   Source: http://www.andreasen.org/mcl/mcl-0.42.05-src.tar.gz
   Binary (libc5): http://www.andreasen.org/mcl/mcl-0.42.05-bin.tar.gz
   Binary (glibc): http://www.andreasen.org/mcl/mcl-0.42.05-glibc-bin.tar.gz
   
   mcl is under GPL.
   
   For more information:
   Erwin Andreasen, erw@dde.dk
     _________________________________________________________________
   
  New release of the ISA PnP utilities (isapnptools-1.17)
  
   Date: Mon, 16 Nov 1998 09:24:20 GMT
   I've now released version 1.17 of my Plug and Play ISA configuration
   tools. They cover isolation, dumping resource data, and configuring
   ISA PnP devices.
   
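   Configuration is driven by an /etc/isapnp.conf file, normally
   generated by pnpdump and then edited by hand. A hypothetical
   fragment (the device ID, port, IRQ and DMA values below are examples
   only; real values come from pnpdump's output for your card) might
   look like:

```text
# Hypothetical isapnp.conf fragment -- example values only
(ISOLATE)
(CONFIGURE ABC1234/56789 (LD 0
  (IO 0 (BASE 0x0220))
  (INT 0 (IRQ 5 (MODE +E)))
  (DMA 0 (CHANNEL 1))
  (ACT Y)
))
(WAITFORKEY)
```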
   The tools I wrote for this _will_ eventually be on
   ftp://ftp.demon.co.uk/pub/unix/linux/utils/isapnptools-1.17.tgz
   (81768 bytes), ftp://ftp.redhat.com/pub/pnp/utils/isapnptools-1.17.tgz,
   ftp://sunsite.unc.edu/pub/Linux/system/hardware/isapnptools-1.17.tgz and
   ftp://tsx-11.mit.edu/pub/Linux/sources/sbin/isapnptools-1.17.src.tar.gz
   (and various mirror sites shortly afterwards). isapnptools-1.17.lsm
   in the same directory is simply the LSM entry for isapnptools.
   isapnptools-1.17.bin.tgz in the same directory also includes
   precompiled binaries.
   
   I've uploaded them, but they may take a day or two to reach their
   final home.
   
   The latest version is available now via the link on the isapnptools
   home page: http://www.roestock.demon.co.uk/isapnptools/
   
   The isapnptools FAQ is available via the home page above.
   
   For more information:
   Peter Fox, isapnp@roestock.demon.co.uk
     _________________________________________________________________
   
  FMan 0.2 release - an X11 man page browser
  
   Date: Mon, 16 Nov 1998 09:36:09 GMT
   FMan is an X11 manual page browser based on the FLTK libraries.
   Source and binaries are available. The program allows fast searching
   for man pages by keyword. Searching may include man page
   descriptions where available, and can be performed four different
   ways. History lists of recently viewed pages and program-based
   configuration are included. Keyboard-only usage is supported.
   
   Changes include removal of the bash dependency, optional
   pre-scanning of man pages, an uninstall option, italic or underlined
   text, more command-line options, and a relocated resource directory.
   
   http://fman.sacredsoulrecords.com/
   
   For more information:
   Larry Charlton, lcharlto@mail.coin.missouri.edu
     _________________________________________________________________
   
  jaZip 0.30: Tools for Iomega Zip and Jaz drives
  
   Date: Mon, 16 Nov 1998 09:44:41 GMT
   I would like to announce version 0.30 of jaZip for Linux, a program
   that combines:
     * Setting/unsetting the write protection flag on Iomega Zip/Jaz
       removable media
     * Securely and transparently mounting/unmounting disks without root
       privileges
     * Software eject feature
     * Reporting status of the disk in the drive
     * A very attractive, easy to use graphical interface based on XForms
       0.88
       
   Details and software are available at the jaZip web site:
   http://www.scripps.edu/~jsmith/jazip/
   
   For more information:
   Jarrod Smith, jsmith@scripps.edu
     _________________________________________________________________
   
  CMU SNMP for Linux v3.6 - SNMP agent and SNMP management tools
  
   Date: Wed, 18 Nov 1998 12:45:03 GMT
   This is the documentation for the tenth release of the CMU SNMP port
   to Linux. This port supports SNMP version 1 (SNMPv1) and SNMP version
   2 (SNMPv2). It includes a bilingual SNMPv1/SNMPv2 agent and several
   simple command-line management tools. This release is based on the CMU
   SNMP release with USEC support. It does not implement the historic
   party-based administrative model of SNMPv2 and has no additional
   support for SNMPv3.
   
   The source and binary distributions are named
   * cmu-snmp-linux-3.6-src.tar.gz
   * cmu-snmp-linux-3.6-bin.tar.gz
   
   and are available from ftp.ibr.cs.tu-bs.de (134.169.34.15) in
   /pub/local/linux-cmu-snmp.
   
   SNMP is the Simple Network Management Protocol of the Internet. The
   first version of this protocol (SNMPv1) is a full Internet standard
   and defined in RFC 1155 and RFC 1157. The second version of SNMP
   (SNMPv2) is defined in RFC 1901 - RFC 1908 and is currently a draft
   Internet standard.
   
   For more information:
   http://www.gaertner.de/snmp/, schoenfr@gaertner.de
     _________________________________________________________________
   
  MaduraPage 1.0 for UNIX(Linux, Solaris) Beta version
  
   Date: Wed, 18 Nov 1998 13:03:27 GMT
   MaduraPage[TM] 1.0 is an applet-based WYSIWYG web authoring tool for
   creating a homepage; it performs HTML functions plus additional
   features such as moving objects (text, images, etc.) and exact
   object positioning.
   
   The page created by MaduraPage[TM] 1.0 can be viewed by JDK1.0
   supporting browsers, such as Netscape Navigator 3.0 or above, Internet
   Explorer 3.0 or above.
   
   For more information on features, demo, or to download the package,
   visit the MaduraSoft web site at http://www.madurasoft.com/
   
   The release version will be available within 1 month. Please send the
   bug report to bug@madurasoft.com
   
   For more information:
   Budhi Purwanto, budhi_purwanto@madurasoft.com
     _________________________________________________________________
   
  Dlite v0.03 -- Debian Lite distribution
  
   Date: Wed, 18 Nov 1998 13:07:25 GMT
   Dlite v0.03 is a small subset of the Debian GNU/Linux binary
   packages, most suited to ISPs needing a small but powerful operating
   system. The distribution will always be less than 100 MB, so it is
   possible to maintain a mirror on every host, ready for any situation
   from emergency rebuilds through regular maintenance updates.
   
   A single subset of packages cannot be all things to all people, but
   one consistent baseline of the most commonly used packages, readily
   available and widely used (and therefore tested), can help the tech
   people at smaller startup Linux-based ISPs get on with managing
   their clients rather than just the system.
   
   This is a fledgling project, so any suggestions are most welcome.
   
   For more information:
   http://opensrc.org/dlite/, dlite@opensrc.org
     _________________________________________________________________
   
  wxWindows/GTK C++ GUI library 1.96
  
   Date: Wed, 18 Nov 1998 13:40:55 GMT
   A new version of the GTK port of the wxWindows cross-platform GUI
   library has been released.
   
   More information is available from the home page at
   
   http://wesley.informatik.uni-freiburg.de/~wxxt
   
   Currently, wxWindows is available for Windows and UNIX/GTK and both
   the Mac and the Motif port are progressing nicely. Python bindings are
   available for the Windows and the GTK port.
   
   For more information:
   Robert Roebling, roebling@sun2.ruf.uni-freiburg.de
     _________________________________________________________________
   
  ClibPDF - ANSI C Source Library for Direct PDF Generation
  
   Date: Wed, 18 Nov 1998 14:13:44 GMT
   FastIO Systems announced the availability of ClibPDF, an ANSI C
   source library for direct PDF generation. ClibPDF competes with
   Thomas Merz's PDFlib, but it does much more, particularly for
   graph-plotting applications.
   
   For details and downloading of ClibPDF, visit our web site,
   http://www.fastio.com/
   
   ClibPDF is a library of ANSI C functions, distributed in source form,
   for creating PDF (Acrobat) files directly via C language programs
   without relying on any Adobe Acrobat tools and related products. It is
   suitable for fast dynamic PDF Web page generation in response to user
   input and real-time data, and also for implementing
   publication-quality graph plotting for both on-screen viewing and
   printing in any custom application.
   
   For more information:
   FastIO Systems - Fast Direct PDF Generation via C, clibpdf@fastio.com,
   http://www.fastio.com/
     _________________________________________________________________
   
  Partitionless Installation with EasyLinux
  
   Date: Wed, 18 Nov 1998 14:20:31 GMT
   EasyLinux is a revolutionary new Linux distribution which eliminates
   the need to repartition hard drives. Instead, it creates a Linux
   filesystem inside a large file on the DOS partition. Unlike with
   umsdos, performance is not significantly affected so this mode of
   operation is suitable for production machines. It is still possible to
   repartition if you want to.
   
   EasyLinux is available in two packages. The first contains only the
   two CDs, and is intended for experienced Linux users. This package
   costs £4.95 (approximately $8). The second contains the CDs and a
   700-page book about installing and using Linux. This package also
   includes technical support. The price of this package is £29.95
   (approximately $50).
   
   For more information:
   Pete Chown, Pete.Chown@skygate.co.uk, http://www.skygate.co.uk/
     _________________________________________________________________
   
  tomsrtbt-1.7.0 with 2.0.36 and cetera - boot/root rescue floppy
  
   Date: Fri, 20 Nov 1998 10:47:17 GMT
   I've put a new 2.0.36 based tomsrtbt-1.7.0 up on
   http://www.toms.net/rb/. Later I'll also load it to Sunsite's Incoming
   to go into system/recovery.
   
   It's a boot/root rescue/emergency floppy image with more stuff than
   can fit. Bzip2, 1722K formatting, and tight compilation options
   helped jam a lot on.
   
   It is useful for "learn unix on a floppy" as it runs from ramdisk,
   includes the man-pages for everything, and behaves in a generally
   predictable way.
   
   "The most Linux on one floppy." (distribution or panic disk). 1.72MB
   boot/root rescue disk with a lot of hardware and tools. Supports ide,
   scsi, tape, network adaptors, PCMCIA, much more. About 100 utility
   programs and tools for fixing and restoring. See tomsrtbt.FAQ for a
   list of stuff that is included. Not a script, just the diskette image
   packed up chock full of stuff. Easy to customize startup and scripts
   for complete rebuilding. Also good for learn-unix-on-a-floppy, as it
   has mostly what you expect (vi, emacs, awk, sed, sh, manpages)
   loaded on ramdisks. There is one installer that runs under Linux,
   another for DOS.
   
   http://www.toms.net/rb/
   
   For more information:
   Tom Oehser, tom@toms.net
     _________________________________________________________________
   
  mod_ssl 2.1.0 - Apache Interface to SSLeay
  
   Date: Fri, 20 Nov 1998 10:51:43 GMT
   This Apache module provides strong cryptography for the Apache 1.3
   web server via the Secure Sockets Layer (SSL v2/v3) and Transport
   Layer Security (TLS v1) protocols, with the help of the SSL/TLS
   implementation library SSLeay from Eric A. Young and Tim J. Hudson.
   The mod_ssl package was created in April 1998 by Ralf S. Engelschall
   and was originally derived from software developed by Ben Laurie for
   use in the Apache-SSL HTTP server project.
   
   As a summary, here are its main features:
   
   o Open-Source software (BSD-style license)
   o Usable for both commercial and non-commercial purposes
   o Available for both Unix and Win32 platforms
   o 128-bit strong cryptography world-wide
   o Support for SSLv2, SSLv3 and TLSv1 protocols
   o Clean reviewable ANSI C source code
   o Clean Apache module architecture
   o Integrates seamlessly into Apache through an Extended API (EAPI)
   o Full Dynamic Shared Object (DSO) support
   o Support for the SSLeay+RSAref US-situation
   o Advanced pass-phrase handling for private keys
   o X.509 certificate based authentication for both client and server
   o Additional boolean-expression based access control facility
   o Backward compatibility to other Apache SSL solutions
   o Inter-process SSL session cache
   o Powerful dedicated SSL engine logging facility
   o Simple and robust application to Apache source trees
   o Fully integrated into the Apache 1.3 configuration mechanism
   o Additional integration into the Apache Autoconf-style Interface
   (APACI)
   o Assistance in X.509 v3 certificate generation
   
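   Configuration happens through ordinary Apache directives. A minimal,
   hypothetical httpd.conf fragment (the file paths are placeholders,
   not taken from the mod_ssl distribution) might look like:

```apache
# Hypothetical httpd.conf fragment for mod_ssl -- paths are examples
Listen 443
<VirtualHost _default_:443>
    SSLEngine on
    SSLCertificateFile    /usr/local/apache/conf/ssl.crt/server.crt
    SSLCertificateKeyFile /usr/local/apache/conf/ssl.key/server.key
</VirtualHost>
```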
   http://www.engelschall.com/sw/mod_ssl/
   ftp://ftp.engelschall.com/sw/mod_ssl/
   
   For more information:
   Ralf S. Engelschall, rse@engelschall.com
     _________________________________________________________________
   
  multiple - a utility for comparing files
  
   Date: Mon, 23 Nov 1998 20:57:11 GMT
   multiple is a utility for comparing files which includes these
   features:
     * comparing all files given to multiple on the command line with
       each other
     * printing out superfluous files without further comment, so
       removing superfluous files is easy
     * if you wish, the names of all equal files will be printed (not
       only the superfluous ones)
     * if you want, the data up to the first empty line can be ignored
       when comparing the files (e.g. for ignoring mail/news headers
       and comparing only the message bodies)
       
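   The core idea (grouping byte-identical files and reporting every
   copy after the first as superfluous) can be sketched in a few lines
   of Python; the function below is illustrative only, not multiple's
   actual implementation:

```python
# Sketch of the idea behind multiple: group byte-identical files
# and report every copy after the first as superfluous.
import hashlib
import os
import tempfile

def find_superfluous(paths):
    groups = {}
    for path in paths:
        with open(path, "rb") as f:
            digest = hashlib.md5(f.read()).hexdigest()
        groups.setdefault(digest, []).append(path)
    # All but the first file in each group are superfluous duplicates.
    return [p for copies in groups.values() for p in copies[1:]]

# Tiny demonstration: three files, two of them identical.
d = tempfile.mkdtemp()
for name, data in [("a", b"same"), ("b", b"same"), ("c", b"other")]:
    with open(os.path.join(d, name), "wb") as f:
        f.write(data)
extra = find_superfluous(sorted(os.path.join(d, n) for n in "abc"))
print([os.path.basename(p) for p in extra])   # ['b']
```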
   ftp://belug.in-berlin.de/pub/user/ob/Programs/Tools/multiple.tgz
   
   For more information:
   Oliver Bandel, oliver@first.in-berlin.de
     _________________________________________________________________
   
  Linux Search Engine (service and download)
  
   Date: Mon, 23 Nov 1998 21:08:31 GMT
   Just wanted to let everyone know that I have just released some new
   search engine software. The database is PostgreSQL, the front end is
   PHP 3.x, and it runs on a Red Hat 4.2 Linux box (or any Linux box;
   that's just what I'm using).
   
   As it is brand new, it is also mostly empty, so feel free to put any
   new listings on it that you want, except no porno stuff. Nothing can
   be seen until it is approved; I will check several times a day for
   stuff needing approval.
   
   I would like a lot of Linux-related stuff, and would like it to
   become a sort of specialty search engine for Linux. But most any
   listings are welcome.
   
   If you want to add a series of new categories, email me and I can add
   them all at once.
   
   This version is BETA and will no doubt evolve a great deal in time
   to come.
   
   For anyone who wants to run a search engine, lse is available for
   free download at my ftp site. Find it on the lse!
   
   http://www.terrym.com/lse/lse.php3 
   
   For more information:
   Terry Mackintosh, terry@terrym.com
     _________________________________________________________________
   
  SANE v1.00 - Scanner Access Now Easy
  
   Date: Mon, 23 Nov 1998 21:17:39 GMT
   The development team for SANE ( http://www.mostang.com/sane/) is proud
   to announce the release of version 1.00 of the SANE API, applications,
   and drivers.
   
   Here is a summary of the main features of SANE:
     * SANE is a public-domain, fully documented, and generic API that
       can support arbitrary image acquisition devices, such as flatbed
       scanners, still cameras, and video cameras
     * Included drivers are released under a relaxed GPL that permits
       commercial use.
     * Included applications are released under the GPL.
     * Includes a command-line interface that provides access to all
       features of all scanners.
     * Includes a stand-alone GTK+ based graphical user interface that
       provides access to all features of all scanners
     * Supports scanning from within GIMP through a GIMP extension
     * Supports remote scanning across a LAN, WAN, or even the Internet
     * Supports dynamic loading of drivers
     * Runs on Linux, most UNIX platforms, OpenStep, Apollo Domain/OS,
       and even OS/2
     * Most devices are auto-detected so no or minimal configuration is
       required.
     * Includes a Java scanning application and API.
       
   For more information:
   http://www.mostang.com/sane/
   ftp://ftp.mostang.com/pub/sane/
     _________________________________________________________________
   
  Yard 1.17 -- creates custom rescue/boot disks
  
   Date: Mon, 23 Nov 1998 20:51:48 GMT
   Yard is a suite of Perl scripts for creating rescue disks (also called
   bootdisks). A rescue disk is a self-contained Linux kernel and
   filesystem on a floppy, usually used when you can't (or don't want to)
   boot off your hard disk. A rescue disk usually contains utilities for
   diagnosing and manipulating hard disks and filesystems.
   
   Author: Tom Fawcett, fawcett@croftj.net
   Primary site: http://www.croftj.net/~fawcett/yard/
   160220 yard-1.17.tar.gz
     _________________________________________________________________
   
  Siag Office 3.1 available for beta testing
  
   Date: Mon, 23 Nov 1998 20:56:14 GMT
   Siag Office consists of the spreadsheet SIAG, the word processor
   Pathetic Writer and the animation program Egon Animator. Changes from
   3.0 include:
     * Multipart documents
     * Cross-sheet references in Siag
     * Unlimited font support
     * Unlimited user-definable style support
     * Unlimited colour support
     * Improved handling of RTF format documents
     * Greatly improved user interface.
       
   Sources are available from:
   ftp://siag.edu.stockholm.se/pub/siag/siag-3.1.0beta1.tar.gz
   
   For more information:
   Ulric Eriksson, ulric@edu.stockholm.se,
   http://www.edu.stockholm.se/siag/
     _________________________________________________________________
   
  Macsyma math software for Linux
  
   ARLINGTON, MA (November 11, 1998): Macsyma(R) math software is now
   available for the first time on PCs running the Linux operating
   system.
   
   Macsyma includes 1,300 executable demonstrations, easily accessible
   at many points in the help system, as well as hypertext descriptions
   of 2,900 topics, a browser with 900 topics and commands, and 900
   type-in command templates.
   
   Macsyma 421 has client-server capability, which is particularly
   helpful on local area networks.
   
   Recent mathematical improvements include enhanced speed in solving
   linear and algebraic equations, stochastic mathematics, better
   evaluation of special functions, and enhanced tensor analysis. It is
   smarter about using the algebraic signs of expressions to simplify
   results.
   
   The U.S. commercial price for Macsyma 421 for Linux workstations is
   $249 (or $199 without paper manuals). The U.S. commercial price for
   Linux Macsyma with NumKit (which requires using a client running
   MS-Windows) is $349. Academic prices are available.
   
   For more information:
   http://www.macsyma.com/, info@macsyma.com
     _________________________________________________________________
   
  Buildkernel 0.87 - automates the task of building a Linux kernel
  
   Date: Mon, 23 Nov 1998 21:23:23 GMT
   Buildkernel is a shell script that automates the task of building a
   Linux kernel. If you can type "buildkernel --NEWESTSTABLE", you can
   have a new Linux kernel on your system!
   
   Building a kernel is a complicated task for the new user. The
   Kernel-HOWTO is an excellent summary of how it's done, but it still
   takes some time and understanding to do. Buildkernel takes away a lot
    of the learning necessary for first-time builders. For experienced
    users who build kernels frequently, it automates the process so it is
   more "hands-off". It has been tested on the x86 architecture and
   currently knows about lilo and boot floppies (I would like to have
   future releases handle syslinux, milo, silo, etc. - any takers?).
   
   http://www.pobox.com/~wstearns/buildkernel/
   
   For more information:
   William Stearns, wstearns@pobox.com
     _________________________________________________________________
   
  groovit - making groovy and accurate sound/noise.
  
   Date: Sat, 28 Nov 1998 11:48:28 GMT
    This application needs testers:
    a Linux all-in-a-box drum machine.
    
    Groovit is essentially a drum matrix which can handle any samples,
    combined with at least two analog synth voices (more, depending on
    CPU strength).
   
   Any voice can go through several effects, (for instance a dynamic
   filter, and an echo/reverb). It is intended to be as "real-time" as
   possible, depending on CPU strength mostly.
   
    It computes sounds internally with a 32-bit range and outputs at 16
    bits. It also has several resonant filters that quickly bring you
    handsome noise.
    
    Complete info at: http://www.univ-lyon1.fr/~jd/groovit/
   
   For more information:
   Jean-Daniel Pauget, jd@cismserveur.univ-lyon1.fr
     _________________________________________________________________
   
  Remote Microscope software 1.0a1 - remote access to optical microscopes
  
   Date: Sat, 28 Nov 1998 12:10:57 GMT
   Version 1.0alpha1 of CNRI's Remote Microscope software is now
   available. The Remote Microscope system allows users to access and
   control an optical microscope over the Internet using a Java applet,
   as demonstrated at the recent Python conference.
   
   As part of the MEMS Exchange project, CNRI is working on fully
   automated and remotely controllable semiconductor inspection
   microscopes to let chip designers view their wafers from any location
   having an Internet connection. However, Internet microscope access can
   be useful in other fields, such as biology or material science. We're
   releasing the code for our microscope software in the hope that other
   people will find it useful and will contribute suggestions,
   improvements, ports to new systems, etc.
   
   Remote Microscope home page:
   http://www.mems-exchange.org/software/microscope/
   
   For more information:
   A.M. Kuchling, akuchlin@cnri.reston.va.us
     _________________________________________________________________
   
  XFree86-3.3.3 has been released
  
   Date: Sat, 28 Nov 1998 12:41:44 GMT
    The XFree86 Project, Inc., is proud to announce its latest release,
    XFree86-3.3.3. This is the latest in our series of "final
    XFree86-3.3.x releases". Most of our work is focused on XFree86-4.0
    these days, but the amazing shelf life of graphics hardware makes
    another "old design" release necessary.
   
   For more information:
   Dirk H. Hohndel, hohndel@suse.de
     _________________________________________________________________
   
  MasterPlan 0.1.0 - Planning/Calendar software
  
   Date: Sat, 28 Nov 1998 14:17:13 GMT
    A calendar/planner with tasks, appointments and meetings, plus
    reminder and scheduler functions. vCalendar support, a shared
    calendar/billboard function, etc. are planned.
   
   MasterPlan for Linux represents a new step forward for time management
   software. Sporting many unique and useful features, MasterPlan's ease
   of use makes planning your life easier than ever.
   
   MasterPlan is partly an Open Source project, and is constantly
   evolving in coevolution with its users. This means that your feedback
   is essential in determining whether the program will fit your needs in
   the future!
   
   MasterPlan is also commercial software - this is what we do for a
   living! So if you want to use it, you must pay a (reasonable) fee. But
   of course you get to try it first!
   
   http://www.bgif.no/neureka/MasterPlan/index.html
   http://www.bgif.no/neureka/MasterPlan/master_download.html
   
   For more information:
   Arne O. Morken, arne.morken@ii.uib.no
     _________________________________________________________________
   
             Published in Linux Gazette Issue 35, December 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
      This page written and maintained by the Editor of Linux Gazette,
      gazette@ssc.com
      Copyright  1998 Specialized Systems Consultants, Inc.
      
    "The Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                           (?) The Answer Guy (!)
                                      
                   By James T. Dennis, linux-questions-only@ssc.com
          Starshine Technical Services, http://www.starshine.org/
     _________________________________________________________________
   
  Contents:
  
   (!)Greetings From Jim Dennis
   
   (?)office server --or--
           Linux as a File/Print Server for Windows and DOS boxes: Of
          course!
          
   (?)Suggestions for Linux Users with Ultra Large Disks
          
   (?)Linux question - "out of the Blue" --or--
          Listing "Just the Links": It's the only way, Luke
          
   (?)Yamaha OPL3-SA --or--
           More on the Linux Kernel's Sound Support: Alan Cox Responds
          
   (?)X and virtual terms --or--
          Some Magic Keys for the Linux Console
          
   (?)Keyboard Problem --or--
          No Echo During Password Entry
          
   (?)FTP Login as 'root' --- Don't!
          
   (?)sendmail problem --or--
          'sendmail' on a Private/Disconnected Network
          
   (?)Question about networking with NetWare --or--
          Needs to Login to Netware
          
   (?)FS Security using Linux --or--
          Crypto Support for Linux
          
   (?)relaying still not correct ...
          
   (?)The state of UNIX in 1998
          
   (?)A newbie question --or--
          How Many Ways Can I Boot Thee: Let Me Count Them
          
   (?)Windows file systems across a linux box --or--
          Programmer Fights with Subnets
          
   (?)Finding IP address with a script --or--
          Using A Dynamically Assigned Address from PPP Startup Script
          
   (?)Setting up Linux to serve CD images through loopback --or--
          More than 8 loopfs Mounts?
          
   (?)SV: PPP-question. --or--
          Where to find Multi-Router Traffic Grabber
          
   (?)Support for the Microtek SlimScan Parallel Port Scanner
          
   (?)RedHat 5.1 and rpms --or--
          RPM Dependencies: HOW?
          
   (?)modutils question
          
   (?)libc5 and libc6
          
   (?)Linux on Dell Systems
          
   (?)How can I find this out? --or--
          Remote Login as 'root'
            ____________________________________________________
   
(!) Greetings from Jim Dennis

   Well, it's getting close to the end of the year.
   
    So, what will I be doing?
   
    I hope to continue doing "the answer guy" --- and maybe I'll finally
    get around to writing some custom procmail scripts to structure it so
    that the Answer Guy can become "the Answer Gang".
   
   (I'd like to thank those who offered to participate in this project
   earlier this year. I haven't meant to snub any of you, but I haven't
   had any time to build on this idea either).
   
   I've been elected to the board of directors for BayLISA (the Silicon
   Valley and SF Bay Area chapter of SAGE, which is the USENIX Systems
   Administrators' Guild --- we inherited the 'e' from the creat() system
   call). I hope to promote better organization of system administrators
   in the bay area and around the world.
   
    I'll be at the annual USENIX/SAGE conference: LISA (Large
    Installation System Administration) during the 2nd week of this
    month.
   
   I hope to finish my book real soon now. I've been courting a co-author
   on the project and have found someone that might be interested. This
   will be _Linux_Systems_Administration_ --- but should be of use to all
   sysadmins on all platforms. My goal in writing this is to genuinely
   raise the state of the art in systems administration and to provide
   the basis for "best practice" guidelines in the field.
   
   I started research and notes for my book about three years ago (with
   no intent of seeking a publisher until a good chunk of it was done).
    Last March an editor approached me and asked if I'd consider working on
   a book for them (Macmillan Computer Publishing: http://www.mcp.com/).
   When I agreed to work on this, the group I was working with was about
   as relaxed as book publishers ever get (from what I've heard).
   However, since Linux has suddenly become a hot topic they are now
   under pressure to get things rolling.
   
   When someone pops into the comp.unix.admin newsgroup or onto the
   linux-admin mailing list with the old question: I've just been
   assigned these systems what should I read --- I'd like to see my book
   listed along with Aeleen Frisch's _Essential_System_Administration_
   (O'Reilly & Associates), a.k.a. "the armadillo book" and the
    _Unix_System_Administrator's_Handbook_ by Evi Nemeth et al.
    (Prentice-Hall), a.k.a. "the cranberry book".
   
   The other major project I have on the horizon is a half day
   seminar/tutorial on the subject of "Linux Security for the System
   Administrator" to be presented at the upcoming LinuxWorld Expo
    (http://www.linuxworldexpo.com).
   
   My goal for that is to show enough admins enough about securing their
   Linux systems from common threats that Linux can shed its reputation
   for being "easy" to break into. (Of course, everyone reading this can
   get a head start by reading Dave Wreski's Security Admin Guide
   (http://nic.com/~dave/SecurityAdminGuide/) and his Security-HOWTO
    (http://sunsite.unc.edu/LDP/HOWTO/Security-HOWTO.html) at your usual
    LDP mirror site.)
   
   Dave gets "The Answer Guy" support award of the month for his work on
   these documents and for his participation on the linux-admin mailing
   list.
   
    Other than that, I'll need to get a lot more consulting done next year
    since I've devoted a bit too much of this year to writing TAG and my
    book. (My wife, Heather, has been gracious enough to support me while
   I'm pursuing these new vocations). [I also work on a preprocessing
   script and then polish up the HTML for this column every month. --
   Heather]
   
   So, it looks like a busy year for me as well as the rest of the Linux
   community.
   
   Happy Thanksgiving, everyone.
            ____________________________________________________
   
(?) Linux as a File/Print Server for Windows and DOS boxes: Of course!

   From jimr on Sat, 07 Nov 1998 
   
   Is it possible to set up a linux file and print server in an office of
   95,98 & DOS? 
   
     (!) It is a very popular application for Linux boxes. You can
     easily take any old 386, 486, or Pentium with 16 or 32 Mb and an
     ethernet card (or two) and install Linux and Samba.
     
     Samba is a popular Unix package for providing SMB file and print
     services. SMB is the technical name for the set of protocols that
      Windows NT, '95, '98, and OS/2 LANMan and LANServer (among others)
      all use for file and print sharing.
     
      Samba was written by Andrew Tridgell and has been enhanced by a host of
     others (much like Linux itself). While much of the development of
     Samba has been done on Linux --- it's worth noting that many of the
     Samba developers also work on FreeBSD and some even work on
     Solaris, SunOS, Irix, and other traditional forms of Unix. The code
     is quite portable.
     
     The master web server for Samba is at:
     
   Australian National University:
          http://samba.anu.edu.au/samba/samba.html
          
     .. there are mirrors world-wide.
     
      Note that Samba comes with most Linux distributions. Also note that
     the Samba team is pretty close to releasing version 2.0 which will
      include some code to support DC services (allowing your Linux box
      to act as a "Domain Controller" --- a PDC or BDC --- for your NT
      systems).
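      As a rough illustration, a minimal smb.conf for simple file and
      print sharing might look like the sketch below. The workgroup
      name, share name, and paths are placeholders, not recommendations
      --- check the smb.conf man page for your Samba version's actual
      defaults before borrowing any of this:

```ini
; /etc/smb.conf --- minimal sketch for a small-office file/print server
[global]
   workgroup = MYGROUP           ; placeholder: match your Windows workgroup
   security = user               ; authenticate against Unix accounts
   printing = bsd                ; use the BSD lpr/lpd spooler

[homes]                          ; export each user's home directory
   read only = no
   browseable = no

[printers]                       ; export all printers from /etc/printcap
   path = /var/spool/samba       ; spool directory (must exist, mode 1777)
   printable = yes
   guest ok = no

[shared]                         ; one common read/write file share
   path = /home/shared           ; placeholder path
   read only = no
```

      The Windows '95/'98 and DOS clients then see the server in the
      Network Neighborhood (or via NET USE) once they have TCP/IP and
      the same workgroup name configured.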
     
     It's also worth noting that your MS-DOS machines must be outfitted
     with TCP/IP suites to talk to Samba. I don't know of a Unix
     implementation of the NetBIOS networking protocols (the lower layer
     protocols over which the "server message blocks" of SMB are
     transported).
     
     Another alternative is to run Netware for Linux (available from
     Caldera: http://www.caldera.com) and have your MS-DOS systems
     access their file and print services via IPX protocols. (I always
     found the IPX drivers for DOS to be the quickest, most stable, and
     compatible and to have the most modest memory footprint of any
     networking drivers on the platform --- I always attributed Novell's
     huge success to those qualities). There is also a free "Netware
     emulator" called "Mars_nwe" --- that may also be sufficient for
     your MS-DOS systems.
     
     You may also want to consider switching some of your DOS systems to
     Linux with DOSEmu (a BIOS/system emulator for running a copy of
     DOS). You can also consider installing Caldera/DR-DOS as an
     alternative to MS-DOS. Basically MS isn't upgrading DOS any more,
     but Caldera and the Linux community are.
     
     In any event Netware is not free software. Samba is. However, you
     can run them concurrently on the same server (although I'd suggest
     a Pentium with 64Mb of RAM if you're going to run those and the
     obligatory intranet web, mail, and other services on the one host).
     
     Note that processor speed is not much of an issue here --- all of
     these services take very little processor power, and Linux doesn't
      require that you load the system with a lot of unnecessary support
     (like all kinds of GUI baggage) when you just want to run a server
     in the closet. If you hook up a typical cheap laser or inkjet
     printer or two to the system, you can configure Linux to handle
      PostScript (TM) print jobs using the ghostscript drivers (a package
      that implements the PostScript (TM) language on the host computer
      and supports a large number of common printers).
     
     Be sure to get a printer that is NOT a "winprinter" (a print engine
     with essentially no embedded system --- which relies on PROPRIETARY
     drivers to drive it). The problem with these is that the
     manufacturers won't (or can't) release the specifications to allow
     Linux developers to write Linux native drivers for them. So you can
     only run these printers from Windows systems. (Basically it's a
     ripoff. You pay almost as much for a much less sophisticated
     printer that will probably be rendered temporarily useless with
     every Microsoft OS upgrade --- since the old drivers will almost
     never work with their new OS versions).
     
     I suggest that people considering Linux start with the
     Hardware-HOWTO at:
     
     http://sunsite.unc.edu/LDP/HOWTO/Hardware-HOWTO.html
     
     (and any LDP mirror).
     
     The SMB-Howto by David Wood seems to be pretty old --- and I know
     that Samba has been upgraded quite a bit since August of '96 --- so
      we probably need to find someone to revise this HOWTO. However, most
     of the principles and examples should still work --- so it's a good
     place to look. Be sure to read the FAQ at the ANU site, though.
     There's a whole newsgroup devoted to the topic:
     news:comp.protocols.smb --- and Samba is the most common topic of
     discussion there.
            ____________________________________________________
   
(?) Suggestions for Linux Users with Ultra Large Disks

   From John Newbigin on Fri, 06 Nov 1998 
   
   In response to your note about Suggestions for Linux Users with Ultra
   Large Disk, here is my suggestion 
   
   Create a small partition at the start of the disk, say 10 meg should
   be plenty, you could get away with ~2 if you are stingy. Use this
   partition to store the kernel/kernels used to boot linux. You can then
   create a root partition as large as you like, set lilo up to use the
   kernel from the first partition and away you go. You would only need
   to mount the partition if you are going to add a new kernel or run
   lilo. You could even put kernel modules on the partition if you
   wanted. 
   
   (I have not tried this myself, but I see no reason why it should pose
   a problem) 
   
   As for the 8gig limit, I have an 8.4 gig HD, and linux 2.0.34+ don't
   have a problem. They do some kind of translation on boot, but it works
   without any problems. 
   
   John.
   UNIX is user friendly. It's just selective about who its friends are. 
   
     (!) It's an excellent suggestion. I've heard variations of it many
     times --- but many of them haven't explained it as clearly as this.
     
      Let's say I create this filesystem (/dev/hda1) and then a root
     filesystem (/dev/hda3 --- we'll say that hda2 is swap). I should
     create a mount point (let's call that /mnt/kernelfs) which is where
     I mount /dev/hda1 when I need to update a kernel and/or run
     /sbin/lilo for any reason. The rest of the time /dev/hda1 doesn't
      have to be mounted. In fact we don't need to reserve a special mount
     point (/mnt/kernelfs) for it at all --- that's just a bit of
     syntactic sugar that "self documents" what we're doing in the
     /etc/lilo.conf and other configuration files and scripts.
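      As a sketch, the fstab and lilo.conf entries for such a layout
      might look like the following. The device names, mount point, and
      kernel filenames are illustrative only --- your disk layout will
      differ:

```ini
# /etc/fstab entry --- 'noauto' keeps the kernel partition unmounted
# except when you explicitly mount it to update kernels or run lilo:
#   /dev/hda1  /mnt/kernelfs  ext2  noauto  0 0

# /etc/lilo.conf --- mount /dev/hda1 on /mnt/kernelfs before running
# /sbin/lilo, so it can see the kernels and write its map file there
boot = /dev/hda                      # install the boot block in the MBR
map = /mnt/kernelfs/map              # boot map lives on the small partition
root = /dev/hda3                     # the big root filesystem
image = /mnt/kernelfs/vmlinuz        # production kernel
    label = linux
image = /mnt/kernelfs/vmlinuz.test   # kernel under test
    label = test
```

      Since LILO records raw block addresses at install time, keeping
      the map and kernels on a small partition at the front of the disk
      is exactly what sidesteps the large-disk BIOS problems.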
     
     I've tried many times to explain that LILO doesn't care about
     filesystems. /sbin/lilo needs to see files in order to interpret
     the configuration directives and put the LILO boot blocks and maps
     in the correct places. One of these days it will sink into the
      consciousness of a critical mass of Linux users. (Then someone will
     patch the ext2fs superblock to automatically bootstrap kernels by
     name and 90%+ of the Linux users won't care about LILO).
     
     In any event, I've also suggested that such users actually put a
      whole rootfs onto such a small partition --- an "altroot." This can
     be faster and handier than a boot/root diskette and can give you a
     way to test new kernels more easily with less risk.
     
      When testing new kernels you sometimes need to replace some
      utilities. Back in the 1.3-to-2.x days it was the whole procps
      suite; more recently it's been the 'mount' command, and some
      others. Having the
     whole original suite on your altroot can make for a much easier
     time of it!
     
     Also, the "autorecovery" configuration that I talked about last
     month requires an extra root partition. If you ever want to
     experiment with that --- you want to create that "little root"
     partition from the start.
     
     Another advantage of the "altroot" variant of this suggestion is
     that it's actually a little easier to implement. Most distribution
     setup/installation scripts can handle a "minimal" installation (in
     64Mb or less). So you essentially just do your Red Hat, Caldera,
     S.u.S.E. or Debian install twice. Once is the 'short form' to just
     create the altroot. The other is your "real" installation (with all
     the bells and whistles).
            ____________________________________________________
   
(?) Listing "Just the Links": It's the only way, Luke

   From Jerry Giles on Thu, 05 Nov 1998 
   
   Sorry for the intrusion but I came across your name while browsing for
   Linux. I am currently in a CIS program at the local college and a
   recent test had an item I still can't find the answer to. The
   professor asked what command to use to list "only the linked files" in
   a directory. He is expecting us to use ls with flags, I guess, but
   I've looked at all the flags given in the text and nothing seems to
   address this. Can you help? 
   
   Thanks, jerry giles 
   
      (!) Either you misunderstand, or your professor isn't being very
      precise. The 'ls' command "lists links" --- all directory entries
      are links! Some of these are symbolic links; others are "hard"
      links (which we think of as "normal" directory entries). The 'ls'
      command can't list anything but links. It can list other information
      that it extracts from the inodes to which each of these links
      points (via the stat() function).
     
     So, the question is essentially meaningless as you've relayed it.
     
     Now, if the question was about listing symbolic links there are a
     couple of simple answers that do make sense.
     
     ls -l | grep ^l
     
     ... this filters a "long" listing of all the links (hard and
     "symbolic") and displays only those which start with the letter l.
     In a "long" directory listing the first block of characters (field)
     is a string which encodes the type and permissions of the files to
     which these directory links point. (l is "symlink", d for
     "directory", s for "socket", p for "FIFO/named pipe", b and c for
     "block" and "character" special device nodes --- normally only
     found under the /dev/ directory --- and "-" (dash) for "regular"
     files).
     
     The second field in a long listing is the "link count." This tells
     you how many "hard links" point to the same inodes that this one
     does.
     
     Here's an example of my own root directory
     
drwxr-xr-x  14 root     root         1024 Sep 27 17:19 .
drwxr-xr-x  14 root     root         1024 Sep 27 17:19 ..
-rw-r--r--   2 root     root       219254 Sep 27 17:19 System.map
drwxr-xr-x   2 root     root         2048 Sep 12 03:25 bin
drwxr-xr-x   2 root     root         1024 Sep 27 17:20 boot
drwxr-xr-x   2 root     root         1024 Aug 31 06:40 cdrom
drwxr-xr-x  21 root     root         4096 Nov  4 03:12 etc
lrwxrwxrwx   1 root     root           15 Apr 20  1998 home -> /usr/local/home
drwxr-xr-x   5 root     root         2048 Sep 16 23:48 lib
drwxr-xr-x   2 root     root        12288 Mar 10  1998 lost+found
drwxr-xr-x   9 root     root         1024 Aug 31 06:40 mnt
lrwxrwxrwx   1 root     root           14 Mar 31  1998 opt -> /usr/local/opt
dr-xr-xr-x  63 root     root            0 Oct 13 02:25 proc
drwx--x--x  13 root     root         2048 Oct 31 17:47 root
drwxr-xr-x   5 root     root         2048 Sep 16 23:48 sbin
drwxrwxrwt   8 temp     root         3072 Nov  5 09:33 tmp
drwxr-xr-x  30 root     root         1024 Aug 31 13:32 usr
lrwxrwxrwx   1 root     root           13 Aug 31 06:40 var -> usr/local/var
-rw-r--r--   1 root     root       732668 Sep 27 17:19 vmlinuz

     This was generated with the command: 'ls -al /'
     
     The number in the second field (the first number on each of these
     lines) is the "link count." This is the number of hard links
      (non-symlinks) that point to the same inode. Thus my root directory
     has 14 links to it. The ".." entry for each of /'s subdirectories
     points back up to it. In other words /usr/.. points back to /, so
     do /etc/.., /dev/.., and all the others that are just one level
     down from it. /usr/local/.. points to /usr and so on.
     
     We see that 'System.map' has a link count of 2. That means that
     there is another name for this file. Somewhere on this filesystem
     there is another hard link to it.
     
      Most Unix newbies are used to thinking of the 'ls' command as a
     listing of files. This is wrong. The 'ls' command is a listing of
     links to files. When you add parameters like "-l" to the 'ls'
     command, you are listing the links, AND SOME INFORMATION ABOUT THE
     FILES TO WHICH THEY POINT. (Under the hood the 'ls' command is
     "stat()'ing each of these entries). A Unix/Linux directory consists
     of a list of names and inodes. All of the rest of the information
     that we associate with the file (its type, ownership, permissions,
     link count, all three time/date stamps, size, and --- most
     importantly --- the list of blocks that contains the file's
     contents, all of this is stored in the inode).
     
     To understand the difference better, create a subdirectory
     (~/tmp/experiment). Put a few arbitrary links into that (use the
     'ln' command to make "hard links" and the 'ln -s' command to make
     some symlinks, and maybe some 'cp' commands to copy in a few
     files). Now use the 'chmod' command to remove your own execute
     ("x") rights to that directory ('chmod a-x ~/tmp/experiment').
     
     * (technically this is a "demonstration" rather than a true
       "experiment" but that's a bit of scientific method hairsplitting
       that I'll only mention in passing).
       
     You should be able to do an 'ls' command (be sure to use the real
     'ls' command --- NOT SOME ALIAS, SHELL FUNCTION OR SCRIPT). That
     should work. (If it doesn't --- you probably have 'ls' alias'ed to
     'ls --color' or something like that --- try issuing the command
     /bin/ls, or try the command 'unalias ls' for the duration of this
     experiment). When you can issue the 'ls' command, with no arguments
     and get a list of the file names in the "~/tmp/experiment"
     directory then try 'ls -l' or 'ls -i'
     
     You should get a whole stream of "Permission denied" messages. Note
     that you also have to do all of this from outside of the directory.
     Issuing the 'cd' command to get into a directory requires that you
     have "execute" permission to that directory.
     
     The reason that you get these "Permission denied" errors is
     because, to give any other information about a file (other than the
     link names) the 'ls' command needs to access the 'inodes' (which
     requires "execute" permissions for a directory). You can do an 'ls'
     or an 'ls -a' on the directory --- because these only provide lists
     of the link names. These variations of the command don't need
     access to any other information about the files (which is all
     stored in the inode).
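      Here is the whole demonstration as a script you can paste in,
      using a scratch directory under /tmp rather than ~/tmp/experiment.
      Run it as an ordinary user --- root bypasses these permission
      checks entirely:

```shell
#!/bin/sh
# Demonstrate that plain 'ls' only reads link names from the directory,
# while 'ls -l' must also stat() each inode, which requires
# execute (search) permission on the directory.
dir=$(mktemp -d)/experiment
mkdir -p "$dir"
echo hello > "$dir/file"
ln "$dir/file" "$dir/hardlink"   # a second hard link to the same inode
ln -s file "$dir/symlink"        # a symbolic link

chmod a-x "$dir"                 # remove execute (search) permission

/bin/ls "$dir"                   # works: needs only read permission
/bin/ls -l "$dir"                # fails: "Permission denied" per entry

chmod u+x "$dir"                 # restore access so we can clean up
rm -r "$dir"
```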
     
     So, now that you (hopefully) understand what links really are ---
     you can understand something about the 'rm' command.
     
      'rm' doesn't remove files. 'rm' removes links to files. The
      filesystem driver then checks the link count. If that's "zero" (and
      there are no open file descriptors --- no processes with the file
      open) then the file is actually removed.
     
     Note the important element here: file removal happens indirectly,
     as part of the filesystem's maintenance. The 'rm' and similar
     commands just call "unlink()" (the system call).
     
      There was also an extra clause I snuck in. If I open a file (with
      an editor, for example) and then I use 'rm' to remove that file,
     what happens? (Let's assume that there was only one hard link to
     the file).
     
     Nothing spectacular. The link count is zero but the file is open.
     The filesystem maintenance routines leave the inode and the data
     blocks to the file alone so long as the file is open. As soon as
      the file is closed, these routines will detect the zero link count
      and then remove the file. If a dozen processes have the file open
      --- then all of them must close it before the file is truly
     removed.
     
     Removal actually involves a few steps. All of the data blocks that
     are allocated to the file are reassigned to the "free list." You
     can think of the free list as a "special file" that "owns" all of
     the free space on the disk. The actual implementation is different
      for different filesystems. Then the inode is marked as deleted, or
      it's "zero'd out" (filesystem and version specific).
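      You can watch this happen from the shell: hold a file open on a
      descriptor, unlink it, and the data stays readable until the
      descriptor is closed. (A sketch; any Bourne-style shell with
      'exec' redirection will do.)

```shell
#!/bin/sh
# A file's inode and data blocks survive unlink() for as long as
# some process still holds the file open.
f=$(mktemp)
echo "still here" > "$f"

exec 3< "$f"        # hold the file open on descriptor 3
rm "$f"             # unlink(): the link count drops to zero
ls -l "$f" 2>&1     # the name is gone from the directory...
cat <&3             # ...but the data is still readable: "still here"
exec 3<&-           # on close, the filesystem reclaims inode and blocks
```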
     
     Now, back to your original question:
     
     A more precise way to find all of the "symlinks" in a directory is
     to use the 'find' command. Try the command:
     
     find / -type l -maxdepth 1 -print
     
     ... (GNU 'find' defaults to "-print" so you can leave that off
     under Linux).
     
     The "maxdepth 1" part is to prevent 'find' from traversing down the
     whole file tree. (Note: I tend to use "file tree" or "file
      hierarchy" to refer to all the files *and all the mounted
     filesystems* below a point, and "filesystem" to refer to all of the
     files on a single mounted fs. This is a subtle point of confusion).
     
     Now, if the question was "find all of the regular files with a link
     count greater than 1" you'd use:
     
     find ... -type f -maxdepth 1 -links +1
     
     ... where the ellipsis is a list of one or more directories and/or
     filenames and the other parameters test for the various conditions
     that I described (and prevent traversal down the tree, of course).
     In GNU find many of the numeric conditions can be specified as "+x"
     "x" or "-x" --- where +x means "more than 'x'", -x means "less than
     'x'" and just x means "exactly x." That's a subtlety of the 'find'
     command.
     
     A last interpretation of this question that I can imagine is: find
     all of the links to a given file (inode). To do this you start with
     the inode. If it is not a directory (*) and it has a link count of
     more than one then search the whole filesystem for any other link
     that has a matching inode. This is a non-trivial question to a
     first term Unix student. It entails writing a script in a few
     parts.
     
     * (We don't have to search for the additional hard links to
     directories, because they should all be in ./*/.. --- that is they
     are all . or .. entries in the current directory and the ones just
      below us. If you were to use some custom code to force the
     creation of some other hard link to a directory --- fsck would
     probably have fits about the anomaly in the directory structure.
     Some versions of Unix have historically allowed root (superuser) to
     create hard links to directories --- but the GNU utilities under
     Linux won't allow it --- so you'd have to write your own code or
     you'd have to directly modify the fs with a hex editor).
     
     I'll just walk through one example to get us warmed up:
     
     In my root directory example above I saw that System.map had a link
     count of 2. It's a regular file. So I want to find the other link
     to it.
     
     First I find the inode.
     
     'ls -ail /' gives us:
     
      2 drwxr-xr-x  14 root     root         1024 Sep 27 17:19 .
      2 drwxr-xr-x  14 root     root         1024 Sep 27 17:19 ..
     13 -rw-r--r--   2 root     root       219254 Sep 27 17:19 System.map
   4019 drwxr-xr-x   2 root     root         2048 Sep 12 03:25 bin
  56241 drwxr-xr-x   2 root     root         1024 Sep 27 17:20 boot
     14 lrwxrwxrwx   1 root     root           13 Aug 31 06:40 var

     (etc).
     
     ... the numbers in the first field here are the inodes --- the
     filesystem data structures to which these links point. We note that
     the '.' and '..' (current and parent directories) both point to the
     same inode *for the root directory*. (For any other directory this
     would not be the case).
     
     ... so I want to find all links on this filesystem (*) which point
     to inode number 13.
     
     * (not on any other filesystem that's mounted --- they each have
       their own inode number "13")
       
     So, here's the command to do that:
     
     find / -mount -inum 13
     
     ... whoa! That was easy. The "-mount" option tells the find command
     not to traverse across any mount points (it's the same as the -xdev
     option).
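     The same recipe can be rehearsed anywhere without touching /. In
     this self-contained sketch, 'ls -i' supplies the inode number (its
     first column) instead of reading it off an 'ls -ail' listing:

```shell
# Create two hard links to one inode in a scratch directory, then
# ask find for every name on this filesystem that points at it:
tmp=$(mktemp -d)
touch "$tmp/a"
ln "$tmp/a" "$tmp/b"

inum=$(ls -i "$tmp/a" | awk '{print $1}')
find "$tmp" -xdev -inum "$inum"   # prints both $tmp/a and $tmp/b

rm -rf "$tmp"
```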
     
     To do this for each of the items in a directory, the hard part is
     to find the root of the filesystem on which each file resides. In
     my example this was deceptively easy because the link I was looking
     at was in the root directory (which obviously is at the root of its
     filesytem).
     
     If I had a script or program that would "find the root of the
     filesystem on which a given file resided" (let's call it
     "fsrootof") --- then I could write the rest of this script:
     
     find ... -type f -links +1 -printf "%i %p\n" | while read i f; do
         find $(fsrootof $f) -mount -inum $i
     done
     
     ... this is a bit of shell script code that uses 'find' to generate
     a list of the inodes and names/paths (the -printf option to the
     first 'find') of "regular files" with link counts greater than 1.
     That list is fed into a simple shell loop (a mill) that reads each
     line as an "inode" and a "path" (later referred to as $i and $f
     respectively). The body of that loop calls my mythical script or
     program to find the "root of the filesystem of the file" --- and
     uses that as the search point for the second 'find' command.
     
     Just off hand I can't think of a way to implement this 'fsrootof'
     command using simple shell scripting. It would probably best be
     done as a C program or a Perl script, making direct use of some
     system calls to stat() the file, and some other trick to traverse
     upwards (following '..' links) until we cross a mount point. I'd have
     to dig up the sources to the 'find' command to see how they do
     that.
     
     So, maybe I'll leave that as the "Linux Gazette Reader Challenge"
     (implement 'fsrootof' as described above).
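     For anyone who wants a head start on that challenge, here's one
     hedged sketch of 'fsrootof' as a shell function. It leans on GNU
     stat's "%d" (device number) format, which isn't universally
     available --- a C program or Perl script calling stat() directly
     would be more portable:

```shell
# fsrootof: print the root of the filesystem holding the given file,
# by walking up the directory tree until the parent directory's
# device number differs (i.e. we'd cross a mount point) or we hit /.
fsrootof() {
    d=$(cd "$(dirname -- "$1")" && pwd)
    dev=$(stat -c %d -- "$d")
    while [ "$d" != / ]; do
        parent=$(dirname -- "$d")
        [ "$(stat -c %d -- "$parent")" != "$dev" ] && break
        d=$parent
    done
    echo "$d"
}

fsrootof /etc/passwd   # prints / if /etc is on the root filesystem
```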
            ____________________________________________________
   
(?) More on Linux Kernels Sound Support: Alan Cox Responds

   From Alan Cox on Thu, 05 Nov 1998 
   
   Linux 2.1.12x supports the OPL3SA/2/3 cards. Also the new 2.1.x modular
   sound gets periodically folded back into an upgrade patch set for
   2.0.x on ftp.linux.org.uk:/pub/linux/alan 
   
   Hannu isn't involved in the current sound work; while a large chunk
   of it is still built on his efforts, it's best to direct sound
   queries to sound-list@redhat.com for the modular and 2.1.x sound (as
   well as 'my card isn't supported' type stuff).
   
   Hannu can now concentrate on his commercial work; we concentrate on
   the free stuff, and everything seems to be working out well that way.
   
   Alan 
   
     (!) Thanks for clarifying that for me. I always appreciate it when
     the real experts notice my little tech support efforts and can get
     in here and straighten me out.
     
     Do you really want to get sound support questions at your Red Hat
     address, even if they aren't related to the Red Hat distribution?
     
     Speaking of the efforts on 2.0.x --- is it becoming a race to see
      if 2.0.36 ships before 2.2? I presume that some maintenance work
      will be committed to 2.0.x for a few months after the 2.2.x release
     in any event --- though, I'd also expect it to be relatively minor
     fixes and device driver backports. Is that about right?
     
     Are you going to USENIX/SAGE LISA in Boston?
                        ____________________________
   
(?) Jim Asked, Alan Answers

   From Alan Cox on Thu, 5 Nov 1998 
   
   when the real experts notice my little tech support efforts and can
   get in here and straighten me out. 
   
   Do you really want to get sound support questions at your Red Hat
   address, even if they aren't related to the Red Hat distribution? 
   
      (!) sound-list is a mailing list, not my address. Red Hat funded
      the initial modular sound patch, but it's the right place for new
      card support, and for 2.1.x and 2.0 modular sound (i.e. the stuff
      RH, and I think now some other folk, ship by default).
     
   (?) Speaking of the efforts on 2.0.x --- is it becoming a race to see
   if 2.0.36 ships before 2.2? 
   
      (!) No, Linus is a few laps behind ;)
     
    (?) I presume that some maintenance work will be committed to 2.0.x
    for a few months after the 2.2.x release in any event --- though, I'd
   also expect it to be relatively minor fixes and device driver
   backports. Is that about right? 
   
     (!) I'm expecting 2.0 to stay in heavy use for another 2 or 3 years
     at least, and that there will be a continued flow of 2.0.x
     patches/bug reports. A lot of commercial users don't care what 2.2
     does, their web site has been up for 250 days with 2.0.x and they
     aren't going to upgrade.
     
      And since we aren't Microsoft, they won't have to...
     
   (?) Are you going to USENIX/SAGE LISA in Boston? 
   
     (!) Nope
     
     Alan
            ____________________________________________________
   
(?) Some Magic Keys for the Linux Console

   From Anthony Gabrielson, on Mon, 02 Nov 1998 
   
   Hello, 
   
   One of my co-workers runs SCO UnixWare 7. Under X he can switch
   between the GUI and his text consoles with [Alt]+[F1], [F2], etc.
   --- he can also run startx in those terminals if he wants. Can this
   be done under Linux right now? If not, is anyone working on it?
   
   Thanks
   Anthony Gabrielson 
   
     (!) This is a fairly common source of confusion for new Linux
     users.
     
     You can use [Ctrl]+[Alt]+[Fx] to do this using XFree86 (the free X
     server for Linux, FreeBSD, etc). I presume you could also remap
      your [Alt]+[Fx] keys to do it, probably using 'xmodmap'.
     
     You can also use an 'xterm' command, menu entry or icon to do this
     --- using the 'chvt' command that's included with most
     distributions.
     
     Note: You can usually also "back out of" XFree86 using
     [Ctrl]+[Alt]+[BackSpace]. This basically provides a "vulcan nerve
     pinch" or "three finger salute" for the X windowing system, without
     having to reset the rest of the OS.
     
     Speaking of "three finger salutes" there are some neat options in
     the 2.1 kernels if you enable the "Magic SysRq" option when you
      build your new kernels. These give you various commands using
     [Alt]+[SysRq/Print Screen]+? options.
     
     For example you can use "Magic SysRq"+[s] to "Sync all
      filesystems." There are other combinations to restore your keyboard
     from "raw" mode, kill all processes that are attached to the
     current virtual console, remount your filesystems in "read-only"
     mode, dump tasklists, and register or memory status to your
      console, and to send various signals to all processes below 'init.'
     
      These are supposed to work no matter what else the kernel is doing.
     You can read more about these in:
     /usr/src/linux/Documentation/sysrq.txt
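      On kernels that have the option compiled in, the feature can
      usually also be toggled at run time through the sysctl interface
      (a hedged example --- the path assumes the standard /proc sysctl
      layout, and you must be root):

```
echo 1 > /proc/sys/kernel/sysrq    # enable the magic SysRq key
echo 0 > /proc/sys/kernel/sysrq    # disable it again
```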
     
     (It's a fairly obscure fact that the 2.0 kernels had some similar
     console keyboard features. You could invoke register, memory, and
     task list dumps using [Alt]+[ScrollLock], [Shift]+[ScrollLock], and
     [Ctrl]+[ScrollLock] respectively).
     
     In addition most versions of the Linux kernel (back to 1.2 or
     earlier) would allow you to use [Shift]+[PgUp] to view a small
     backscroll buffer for the current VC. This buffer gets wiped on a
     virtual console switch (unlike the FreeBSD [ScrollLock] feature
     which is maintained for every VC).
     
     Another key binding that many Linux users don't know about is the
     [Alt]+[Left Cursor] and [Alt]+[Right Cursor] bindings, which will
      cycle among your virtual consoles (VCs). In other words, if you
      are on VC4, [Alt]+[Left Cursor] will switch you to VC3, while
      [Alt]+[Right Cursor] will move you to VC5.
     
     If you reconfigure your system to provide logins on more than 12
     virtual consoles (just edit /etc/inittab in the obvious way --- and
     make sure you have the corresponding /dev/tty* nodes) --- you can
     get to the second "bank" of VCs using the other [Alt] key (the
     right one). If you had more than 24 you'd presumably have to use
     the [Alt]+{cursor keys} to get at them.
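      For example, extra consoles can be declared with /etc/inittab
      lines along these lines (a sketch: the getty program and its
      arguments vary by distribution --- 'mingetty' follows the Red Hat
      convention, and tty15/tty16 are arbitrary choices):

```
c15:2345:respawn:/sbin/mingetty tty15
c16:2345:respawn:/sbin/mingetty tty16
```

      ... and the corresponding device nodes, if missing, are character
      devices on major number 4 (e.g. 'mknod /dev/tty15 c 4 15').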
     
     Of course you can customize most of these to your heart's content.
     Look for the following man pages for more details:
     
                loadkeys (1)
                dumpkeys (1)
                showkey (1)
                keytables (5)

     ... and look through the whole "kbd" package for 'chvt' and other
     commands. There's also supposed to be an improved set of console
     tools (the "console" package?) which should be at Sunsite
     (http://sunsite.unc.edu/pub/Linux) somewhere.
     
     So you can customize your console's keyboard bindings without even
     having to patch a kernel.
     
     Incidentally, I get around the lack of real console backscroll
     buffers by just running 'screen' --- which also allows me to detach
     my processes from one terminal and re-attach to them in another.
     This is very handy when I'm working on a VC (as I usually do) and I
     need to look something up in Netscape --- if I think that Lynx just
     isn't getting what I need. I detach my 'screen' session, switch to
     my X session (which stays on VC13 for me, and VC14 for my wife's
      session), then I re-attach from any available 'xterm'. I can then
      cut and paste between my X applications and the emacs that I've
      been running all along.
     
      'screen' also gives me keyboard-driven cut-and-paste between
      console/text/curses applications. I personally prefer this to
      'gpm's old 'selection' features --- though I tend to use both
     occasionally.
     
     So, does that list of options block the sockets off of SCO?
                        ____________________________
   
(?) Anthony Replies...

   From Anthony Gabrielson, on Wed, 04 Nov 1998 
   
   Jim, 
   
   Thank you for the help --- I don't care for SCO, but that co-worker
   kept coming at me with "can it do this and that." I was stumped on
   this one.
   
   Thanks Again,
   Anthony 
   
     (!) I figured. About the only things the SCOldies can hold over us
     right now are "journal file support" and "Tarantella."
     
     Just SCOld him with an observation that engineers from SCO were
     making much ado about their recent addition of Linux binary
      compatibility support --- the ability of SCO to run Linux binaries
      --- at last year's USENIX in Louisiana. Then ask if Microsoft has sold
     off the last of their SCO stock yet <g>.
            ____________________________________________________
   
(?) No Echo During Password Entry

   From jdalbert on Wed, 04 Nov 1998 
   
   Hi Jim...I'm new to Linux and am trying to install Redhat version 5.1.
   I get as far as the keyboard password and my keyboard will not allow
   me to type any characters. It will allow me to tab or use the arrows
   but the keys do not move when pressed. I do not know who to ask for
   help and while browsing the linux site, I came across your name. Can
   you give me any advice as to how to get around the Root Password
   problem. Do I go into setup and check to see if I have an AMI BIOS and
   make changes? I'll look forward to hearing from you. 
   
   Thanks, Joe D'Albert 
   
     (!) If I understand you correctly -- you are just confused. The
     fact that the installation's prompt (dialog) for establishing the
     root password doesn't echo any characters, stars, nor even
     register/respond with cursor movements is PERFECTLY NORMAL. (It's a
     feature. It's supposed to work that way. Don't worry about it. Just
     type "blind").
     
     It's going to ask you to repeat the password (any password you
     choose) twice. That's to ensure that you know what you typed. (The
     assumption is that you're unlikely to make the same typo or mistype
      twice in a row --- so if the two entries match one another, then
     you can probably manage to keep typing your password that way
     forever).
     
     Note that it usually will do the same thing after you've got the
     whole system installed and configured. When you login, it will ask
     for a password.
     
     When a Linux system prompts you for a password during login --- you
     won't see any characters or cursor movement as you type. This is
     intended to prevent someone from watching over your shoulder, even
     to find out how many characters are in your password.
     
      Just type your password slowly and carefully. Make sure not to miss
     any keystrokes (by hitting the keys squarely) and make sure not to
     "bounce" the keys --- getting "double images" for some characters.
     
     As long as you do that you should be fine.
     
     I noticed that Lotus Notes used to respond to each keypress in the
     password prompt by echoing a small random number (2 to 5?) of *'s.
     This was a convenient way to give keyboard feedback without
     revealing your password length. Many systems will echo *'s for each
     character typed.
     
     Incidentally the Linux passwords have NOTHING to do with any
     CMOS/Setup (BIOS) passwords that you may have on your system. Linux
     (and other forms of Unix) are multi-user systems. They maintain a
     list of accounts (in the /etc/passwd file) that provide for all
     access to the system.
     
     The main benefit of this is that you can create a Joe account (joe,
     jdalbert or jda or ja or whatever reasonable login name you want to
      use). You normally log in under this account. While using your
     account you run very little risk of damaging any system files. If
     you run a "bad" program --- that program will usually be unable to
     damage the system (infect system binaries with a virus, for
     example).
     
     You should only use the 'root' account for maintaining the system
     --- almost exclusively for adding new accounts, disabling old ones,
     and installing or upgrading your main system software.
     
     You can use the 'passwd' command to change your password at any
     time. If you forget your personal (joe) password you can login as
     root and issue a command like 'passwd joe' to force a change on the
     password for any account on the system. (Thus, if you create an
     account for your wife, girlfriend, kid, roommate, dog, cat or
      whatever --- and they forget their password --- you can't get it back
     for them, but you can just give them a new one). Read the 'passwd'
     and 'usermod' man pages for details on that and other tricks.
     
     If you should ever lose the root password you can reboot the system
     (in single-user mode, or possibly you'll need a rescue diskette ---
     if 'sulogin' is configured).
     
     If you've booted from diskette you'll have to mount the filesystems
     that you normally boot from (usually something like /dev/hda3 for a
     partition on your first IDE drive, or /dev/hdb1 for one on your 2nd
     IDE, or /dev/sda1 for one on your first SCSI drive). Let's say you
      mount that under /mnt/ (this is the floppy diskette's /mnt
      directory). Once you've done that you'd change (cd) into that
     directory, and use a command like 'chroot . /bin/sh' --- which
     essentially "locks you into" that (floppy's /mnt) directory as
     though it were the root directory.
     
     (This process is a bit confusing --- but the purpose is to let you
     do the rest of these commands as though you'd booted from the hard
     drive in the first place. You could skip this step if you know how
      to issue all of the following commands while adjusting all of the
      paths for the floppy's mount point).
     
     From there you can use a text editor on the /etc/passwd (and
      possibly the /etc/shadow) files, or you can issue the 'passwd' command
     to force a change to 'root's password.
     
     If you booted from floppy/rescue diskette, you'd now type "exit"
     (to exit from the 'chroot shell' that you invoked above). Then
     you'd unmount ('umount') the filesystems that you'd used and
     reboot.
     
     (Note: If the last five paragraphs sounded confusing and
     intimidating --- take it as a warning: DON'T LOSE YOUR ROOT
     PASSWORD! You can recover from it, but you have to do some fussing.
     If you lose any other user's password, you can just log in as
     'root' and do a forced change to fix it.).
            ____________________________________________________
   
(?) FTP Login as 'root' --- Don't!

   From Henry C. White on Fri, 30 Oct 1998 
   
   Hi, I would like to ftp to my linux PC and login as root. When I have
   tried this I get an access denied. Please help me in how to configure
   to allow this. I am running Red Hat Linux 5.1.
   Thanks
   Henry White 
   
     (!) Most FTP daemons (the server software to which your ftp client
     connects) check the /etc/ftpusers file for a list of users that are
     NOT allowed to access the system via FTP. This file normally
      includes a list of all (or most) of the "pseudo-users" on the
     system.
     
      (pseudo-users is a term to describe all those accounts in your
     /etc/passwd file that don't correspond to real users at your site).
     
      Another check made by most FTP daemons is to scan the /etc/shells
      file for an entry matching the login shell of the user who is
      attempting to log in. Normally the /etc/shells file is a list of
      all of the valid login shells on the system. If I want to prevent
     a group of users from accessing normal shell and FTP services on a
     system I can change their shell to something like /bin/false, or
     /usr/local/bin/nologin (presuming that I write such a program).
     Then I just make sure that this program is not listed in
      /etc/shells, and the user will be denied FTP access. (Their login
      via telnet would still be allowed, but a proper (true binary)
      /bin/false will just immediately exit, and one would presumably
      write /usr/local/bin/nologin to print an error message and exit.)
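      Such a 'nologin' replacement can be a two-line script. This
      sketch writes it to a scratch path for illustration; the real
      thing would live somewhere like /usr/local/bin/nologin (and
      deliberately NOT be listed in /etc/shells):

```shell
# Write a minimal "nologin" shell: it prints a refusal message to
# stderr and exits non-zero, so any attempted login ends immediately.
nologin=$(mktemp)
cat > "$nologin" <<'EOF'
#!/bin/sh
echo "This account is not available for interactive use." >&2
exit 1
EOF
chmod +x "$nologin"
```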
     
     If I want to have some accounts that are only allowed access via
     FTP (and not given normal shell accounts) I have to do a few
     things. First I set their login shell (as listed in the last field
     of the /etc/passwd file) to /usr/bin/passwd (if I want them to be
     able to set and change their own passwords), or I create a link
     from /bin/false to /usr/local/bin/ftponly. Then I add one or both
     of those to /etc/shells.
     
     If you add a new shell to the system (someone writes a niftysh --
      that you've just got to have) then you should add its full path
     name to the /etc/shells list.
     
      This technique for limiting an account to FTP only actually
     requires more work than I've described. If I stopped at this point
     a user could create a .rhosts file in their home directory and run
     interactive shell commands via the r* tools. The user could also
     create .forward and/or .procmailrc files that would allow them to
     execute arbitrary commands on my systems (including a 'chsh'
     command to change their shell to bash, csh, etc).
     
      So, I usually use the wuftpd (Washington University FTP daemon)
     "guestgroup" features. This is controlled by declaring one or more
     groups (entries in /etc/group) to be "guestgroup"s in your
     /etc/ftpaccess file. /etc/ftpaccess is used by wuftpd (and I think
     Beroftpd, a derivative therefrom). Then I add the "ftponly" users
     to that group (cleverly named "ftponly" in most cases), and change
     their "home" directory to point to some place under a chroot jail,
     using a clever/hackish syntax like:
     
      joeftp:*:1234:3456:Joe FTPOnly Dude:/home/ftponly/./home/joe:/bin/passwd
     
      ... note the /./ to demarcate the end of the "chroot" jail (a
     standard FTP "home directory tree" with its own .../bin, .../etc/,
     and .../dev directories). When Joe Dude logs in (via FTP) he'll be
     chroot()'d to /home/ftponly and chdir()'d to .../home/joe under
     that.
     
     Normally we won't allow Joe to own .../home/joe, and we won't allow
     write access to that "home" directory. We can create an incoming
     directory below that if necessary.
     
     If our need to create these "FTP only" accounts is such that we
     must not chroot() the client --- we can just chown the user's home
     directory (to 'root') and remove write access to it. This will
     prevent the creation of those various "magic files" like .rhosts,
     .ssh/*, .forward, .procmail, .klogin, etc.
     
     There are other approaches to these issues.
     
     With PAM (Pluggable Authentication Modules), which has been the
     default set of libraries for the whole suite of Red Hat
     authentication programs for the last couple of versions of that
     distribution, you can configure each service to look into a file
     like /etc/ftpusers (any file you like) --- and limit each user's
     access to each service. You can also limit access based on time of
     day, day of week, terminal and/or source address of the connection,
     require one-time-passwords, etc. Unfortunately, this isn't well
     documented, yet.
     
     (I've been raising dust on the PAM list recently --- since they've
     been hovering at "version 0.6" for over a year. Some of them seem
     to think that version numbers don't matter at all --- that it's
     just "marketing fluff" --- I think that the integration of the
      suite and the "official release" is crucial to its eventual
     adoption by other distribution maintainers, and admins/users).
     
     Another approach is to just disable all of the "other" services.
     That's great when you're setting up a dedicated ftp server.
     
     You could also go in and manually hack the sources to all of the
     services that you do need, to add the additional checks and the
     enforcement of your policies. That's precisely the problem that the
     PAM project has sought to solve.
     
     Yet another approach is to replace your FTP daemon. For example the
     shareware/commercial 'ncftpd' allows you considerable control over
     these things. It's the product I'd recommend for high volume FTP
     servers (http://www.ncftp.com).
     
     Back to your original question. You can probably enable 'root'
     access via FTP. However, I don't recommend it. You'd really be much
     better off using 'ssh' (the 'secure' rsh, with 'scp' instead of
     'rcp', etc). The best bet is to use 'rsync' over 'ssh' --- to
     distribute files as 'root' to the systems you're trying to
     administer.
     
     (The only sane reason to want to send files to or get them from a
     system "as root" is for remote administration).
            ____________________________________________________
   
(?) 'sendmail' on a Private/Disconnected Network

   From RoLillack on Tue, 10 Nov 1998 
   
   Dear Answer Guy! 
   
   I set up a small network at home with my Linux box being
   192.168.111.111, my father's Windooze box being ...112 and my brother
   Max' Linux system (root gets mounted using nfs!!!) ...113 (mine is
   called pflaume.lillax.de and my brother's: birne). Both Linux machines
   use RedHat 5.0. 
   
     (!) Nice trick using nfsroot there.
     
    (?) Nearly everything works well; we use http, ftp, nfs and samba
   without problems. But when I tried to send an email to my brother's
   machine or vice-versa, sendmail sent a warning, that it could not send
   the mail for 4 hours and mailq tells me: 
   
"Deferred: Name server: birne.lillax.de: host name lookup failure"

   So I tried mailing to max@192.168.111.113 and mailq says 
   
"host map: lookup (192.168.111.113): deferred"

   I don't know what I did wrong, our hosts file has the right entries
   and this is the output of ifconfig and route on my machine (on Max'
   system it nearly looks the same): 
   
     (!) 'sendmail' doesn't use /etc/hosts. The standards require that
     it use MX records. It can also use NIS maps (the default on many
     versions of SunOS and Solaris).
     
     If you really mailed it to max at the IP address, it should have
     bypassed the MX lookup. However, to use an IP address in your mail
     you should enclose it in square brackets like:
     
                max@[192.168.111.113]

     ... which is a trick I've used to configure the systems internal to
     my LAN (no DNS) to forward to my uucp hub via SMTP. In other words
     'antares' is my mail server. It exchanges mail with my ISP over
     dial-out UUCP. My users fetch their mail from antares via POP using
      Eric S. Raymond's 'fetchmail'. The workstations that we use are
     configured with the "nullclient.mc" file and the "hub" defined by
     IP address like so:
     
divert(0)dnl
VERSIONID(`@(#)clientproto.mc   8.7 (Berkeley) 3/23/96')

OSTYPE(linux)
FEATURE(nullclient, `[192.168.64.1]')

     That's my whole starshine.mc file for all of the workstations on my
      LAN. They relay all mail to 'antares' with no DNS/MX lookups.
     
> ifconfig:
> lo       Link encap:Local Loopback
>          inet addr:127.0.0.1  Bcast:127.255.255.255  Mask:255.0.0.0
>          UP BROADCAST LOOPBACK RUNNING  MTU:3584  Metric:1
>          RX packets:1912 errors:0 dropped:0 overruns:0
>          TX packets:1912 errors:0 dropped:0 overruns:0


> eth0     Link encap:Ethernet  HWaddr 00:60:8C:51:CD:AA
>          inet addr:192.168.111.111  Bcast:192.168.111.255  Mask:255.255.255.0
>          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
>          RX packets:2259 errors:0 dropped:0 overruns:0
>          TX packets:554 errors:0 dropped:0 overruns:0
>          Interrupt:12 Base address:0x340

> route:
> Kernel IP routing table
> Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
> localnet        *               255.255.255.0   U     0      0        1 eth0
> loopback        *               255.0.0.0       U     0      0        2 lo
> -------------------------------------------------------

   (?) ...with "localnet" meaning "192.168.111.0"... 
   
   I'm looking forward to your answer. Thank you! 
   
   Robert 
   
   PS: If I mail to "root@localhost" I get the message, but if I send it
   to "root@127.0.0.1" it doesn't work ("deferred" message as above). Has
   this something to do with my real problem?!? 
   
     (!) This further supports my theory. Try a suitable variant of my
     nullclient.mc file, build a sendmail.cf file from that using a
     command like:
     
                m4 ../m4/cf.m4 your.mc > /etc/sendmail.cf

     ... from /usr/share/sendmail/cf or /usr/lib/sendmail-cf/cf or
     wherever Red Hat puts it in your version.
            ____________________________________________________
   
(?) Needs to Login to Netware

   From dave.thiery on Tue, 10 Nov 1998 
   
   Dear Answerguy, 
   
   I recently installed RedHat 5.2 on my laptop(as a dual boot with Win95
   which I need for work). What I would like to do is to be able to log
   into my company's NetWare server and access the network along with the
   internet through Linux. I have a IBM 380XD laptop with a 3Com 3C574-TX
   Fast EtherLink PC Card. Any suggestions? 
   
   Thanks!
   Dave 
   
     (!) I'll assume that you have your ethernet card working.
     
     Caldera (http://www.caldera.com) offers a Netware client as part of
     their distribution. I've heard that this can be used with other
     distributions --- but you'll want to check with them (read their
     notes) to determine if this is legal as per their licensing.
     
      There is a freeware package called ncpfs (by Volker Lendecke) which
     allows some access to some Netware servers. I've never used ncpfs
     but I have seen it used (a couple of years ago). It works a bit
     like NFS --- a directory you mount is visible to the whole system.
     Obviously that's not a problem for your laptop.
     
     By contrast the version of Caldera's client that I used back then
     provided access to NDS and bindery servers, and provided user
     dependent access. In other words, two users concurrently logged
     into your system would have different access to server files based
     on their individual access rights in Netware. (Under ncpfs any
     Linux user with any access to the mounted Netware file tree will
     get the same access as the Netware user who mounted it).
     
     If your Netware servers are using NDS and aren't providing bindery
      emulation --- or if you need services that are provided via
     bindery emulation --- then you'll have to look at the Caldera
     client. Otherwise the ncpfs package may do the trick for you.
            ____________________________________________________
   
(?) Crypto Support for Linux

   From dreamwvr, August sometime in 1998 (in an old thread on the
   Linux-Admin List which I've been reading as part of the research for
   my book). 
   
   i believe it is called efs which stands for encrypted file system... 
   
     (!) Glynn Clements wrote:
      There is Matt Blaze's CFS (cryptographic filesystem) which is
      basically a userspace filesystem over NFS to the loopback interface.
     This was part of a larger package called ESM, encrypted session
     manager. That wasn't Linux specific, but does work under it. 
     
   (?) Joseph Martin wrote:
   I am helping a friend set up a new computer system. He is particularly
   interested in security. The regular linux authentication at the
   console should work well enough, however I was wondering about even
   more security. Are there any encrypted file systems we could set up?
   For example the computer boots up, loads the system from a ext2
   partition and then presents a login prompt. After login a mount
    command is given, a password supplied, and the partition data made
    visible and accessible. After use, the partition is unmounted and
    rendered unusable again. Does anything like that exist?
   
     (!) You can use the loop device, which turns a file into a device
     which can then be mounted (assuming that it contains a valid
     filesystem). 
     
     The loop device supports on-the-fly encryption/decryption using DES
     or IDEA (but you have to get the appropriate kernel source files
     separately; they aren't part of the standard kernel source due to
     legal nonsense). 
     
     Alternatively, you can just encrypt the file with any encryption
     package (e.g. PGP), and decrypt it before mounting. However, this
     requires sufficient disk space to store two copies of the file. 
     
     Glynn Clements 
     
     (!) There is also the TCFS --- a transparent CFS from Italy. This
     is Linux specific code. (http://tcfs.dia.unisa.it)
     
     There was also supposed to be a userfs module for encryption ---
     but I don't know if that was ever completed to production quality.
     
     The best place to get most crypto code is to just fetch it from
     ftp://ftp.replay.com (or http://www.replay.com) which is located
     offshore (Netherlands?) to put it beyond the jurisdiction of my
     government's inane trade regulations. (Apologies to the free
     world).
     
     I thought I read on the kernel list that http://www.kerneli.org
     was supposed to be a site where international (non-U.S.
     exportable) patches would be gathered and made available. However,
     that address only returns a lame one-line piece of text to lynx. I
     fared better with their ftp site at:
     
     ftp://ftp.kerneli.org/pub/Linux/kerneli/v2.1
     
     Where I saw a list of files of the form: patch-int-2.1.* (which I
     presume are "international" patches).
     
     Userspace toys can be found in:
     
     ftp://ftp.kerneli.org/pub/Linux/redhat-contrib/hacktic/i386
     
     (RPM format, of course).
     
     Meanwhile the loopfs encryption module seems to be located at Linux
     Mama (canonical home of unofficial Linux kernel patches)
     
     http://www.linuxmama.com/dev-server.html
     
     which has a link to:
     
     ftp://fractal.mta.ca/pub/crypto/aem
     
     TCFS is also suitable for encryption of files on an NFS server
     (only the encrypted blocks traverse your network --- the client
     system does the decryption; that's a big win for security and
     performance).
     
     As for encryption of other network protocols: there's the
     standard ssh, ssltelnet/sslftp (SSLeay), and STEL for
     application-layer work, and a couple of IPSec projects for Linux
     at the network/transport layer. A friend of mine has been deeply
     interested in the FreeS/WAN project at:
     
     http://www.xs4all.nl/~freeswan
     
     ... or at:
     
     http://www.flora.org/freeswan
     
     (a mirror)
     
     ... This consists of a kernel patch and some programs to manage the
     creation of keys etc.
     
     The idea of the FreeS/WAN project is to provide opportunistic
     host-to-host encryption at the TCP/IP layer. In other words my
     Linux router would automatically attempt to create a secure context
     (tunnel/route) when communicating with your IPSec enabled system or
     router. Similar projects are underway for FreeBSD, a few routers
     like Cisco, and even NT.
     
     Anyway I haven't tried it recently but I hear that it's almost
     ready for prime time.
     
     One of the big issues is that FreeS/WAN isn't designed for manual
     VPN use --- so its command-line utilities for testing this are
     pretty crude (or were, the last time I tried them). On the other
     hand, we still don't have wide deployment of Secure-DNS --- which
     is necessary before we can trust those DNS "KEY" RRs. So, for
     now, all FreeS/WAN and other S/WAN secure contexts involve some
     other (non-transparent) key management hackery.
     
     Hopefully someone will at least create a fairly simple front-end
     script for those of us who want to "just put up a secure link"
     between ourselves and a remote office or "strategic business
     partner."
     
     Also, FreeS/WAN has focused its efforts on the 2.0.x kernels.
     When 2.2 ships there will be another, non-trivial effort required
     to adapt the KLIPS (kernel-level IP security) code to the new
     TCP/IP stack. The addition of LSF (the Linux Socket Filter --- a
     BPF-like interface) should make that easier --- but it still
     sounds like it will be a pain.
     
     There's apparently also an independent implementation of IPSec
     for Linux from the University of Arizona (Mason Katz).
     
     http://www.cs.arizona.edu/xkernel/hpcc-blue/linux.html
     
     ... however this doesn't seem to offer any of the crypto code, even
     through some sort of hoops (like MIT's
     "prove-you're-a-U.S.-citizen/resident" stuff). I've copied Mason on
     this (Bcc) so he can comment if he chooses. I've also copied Kevin
     Fenzi and Dave Wreski in case they want to incorporate any of these
     links into their Linux Security HOWTO.
     
     http://sunsite.unc.edu/LDP/HOWTO/mini/VPN.html
     http://sunsite.unc.edu/LDP/HOWTO/Security-HOWTO.html
     
     An alternative to FreeS/WAN for now is to use VPS
     (http://www.strongcrypto.com) with 'ssh'. This basically creates
     a pppd "tunnel" over a specially conditioned ssh connection. You
     have to get your copy of 'ssh' from some other site, for the
     usual reasons.
     
     Yet another alternative to these is CIPE (cryptographic IP
     encapsulation?) at:
     
     http://sites.inka.de/sites/bigred/devel/cipe.html
     
     ... which uses encrypted UDP as its main transport.
     
     Of course we shouldn't forget our venerable old three-headed dog
     of mythic fame: Kerberos. This old dog is voted most likely to be
     our future authentication and encryption infrastructure (if for
     no other reason than the fact that Microsoft has vowed to
     "embrace and extend" --- i.e. "engulf and extinguish" --- it with
     Windows [DEL: NT v5.0 :DEL] 2000).
     
     The canonical web page for MIT Kerberos seems to be at:
     
     http://web.mit.edu/kerberos/www
     
     ... some news on that front is that Kermit version 6.1 is slated to
     include support for Kerberos authentication and encryption. More on
     that is on their web site:
     
     http://www.columbia.edu/kermit/ck61.html
     
     ... on the international front I hope to see the Heimdal project
     (from Sweden) reach production quality very soon.
     
     http://www.pdc.kth.se/heimdal
     
     When I talked to a couple of the developers of Heimdal I asked
     some hard questions about things like support for SOCKS proxies
     (by their Kerberized clients), support for one-time passwords,
     support for NIS/NIS+ (name-service lookups), etc. They seemed to
     have the right answers on all counts.
     
     All that and it's free.
     
     Another European effort that is not nearly as attractive to us
     "free software fanatics" is the SESAME project (Secure European
     System for Applications in a Multi-vendor Environment)
     
     http://www.esat.kuleuven.ac.be/cosic/sesame
     
     The SESAME license only allows for free "experimental" use --- no
     free distribution, no installation for customers, and no
     "production use." Worse than all that, no indication is given as
     to how much licensing would cost (say, for individual use by a
     consultant). It appears to be geared towards limited distribution
     to "big" clients (the owners seem to be Bull SA, of France).
     
     However, they have some interesting ideas and their web pages are
     well worth reading. The suite of libraries seems to offer some
     worthwhile extensions over Kerberos.
     
     Some other pointers to cryptographic software are at Tatu Ylonen's
     (author of ssh) pages:
     
     http://www.cs.hut.fi/crypto/software.html
     
     (I've also copied Arpad Magosanyi, author of the VPN mini-HOWTO, in
     the hopes that he can find the time to integrate some of these
     notes into his HOWTO --- perhaps just as a list of references to
     other packages near the end).
     
     Of course the main thrust of Linux security has nothing to do
     with cryptography. An overriding concern is that any privileged
     process might be subverted to take over the whole system.
     
     Bugs in imapd, in.popd, mountd, etc. continue to plague Linux
     admins.
     
     If security is really your friend's top interest and concern, and
     he's planning on running a general-purpose Unix system with a
     mixture of common daemons (network services) and applications on
     it, I'd really have to recommend OpenBSD (http://www.openbsd.org).
     That is considered by many to be the most secure "out of the box"
     version of Unix available on the general market today. (In the
     realm of commercial Unix, I've heard good things about BSDI/OS,
     http://www.bsdi.com.)
     
     That is not to say that Linux is hopeless. Alan Cox has been
     co-ordinating a major Linux Security Audit project at
     
     http://www.eds.org/audit
     
     or:
     
     http://lwn.net/980806/a/secfaq.html
     
     There's also a set of "Secure Linux kernel patches" by Solar
     Designer (I don't know his conventional name --- everyone on the
     lists refers to him by this handle).
     
     http://www.false.com/security/linux/index.html
     
     These are a set of patches that prevent a couple of the most
     common sorts of exploits (buffer overflows, and symlink attacks
     in /tmp and other world-writable directories).
     
     However, these patches are for 2.0.x kernels. They've been firmly
     rejected by Linus for inclusion into future kernels in favor of a
     more flexible and general (and more complicated) approach.
     
     Linux version 2.2 will support a "capability list" (privileges)
     feature. This splits the SUID 'root' mechanism into a few dozen
     separate privileged operations. By default the system maps 'root'
     and 'SUID root' to having all of these privileges "enabled" and
     "inheritable." A system call allows a program to blank some or
     all of these bits, preventing it and (if one is clearing the
     "inheritable" bits) all of its descendants (all the processes it
     creates) from exercising these operations.
     
     This should allow us to emulate the BSD securelevel if we want to
     (create a little userspace utility that clears the appropriate
     "inheritable" bits and then exec()'s 'init' --- now all processes
     are unable to perform these operations).
     
     It's also nice in that it's more flexible than the BSD
     'securelevel' feature. For example you could just strip the
     privilege bits from 'inetd' and your various networking daemons.
     This would mean that the attacker would have to trick some
     console/serial line controlled process into executing any exploit
     code.
     
     The eventual plan is to add support for the additional bits in
     the filesystem. That won't happen for 2.2 --- but it will likely
     be one of the planned projects for 2.3. These filesystem
     attributes would be like a whole vector of SUID-like bits ---
     each enabling one privilege. So each program that you'd currently
     make SUID 'root' would get a (hopefully) small subset of the
     privileges. If that sounds complicated and big --- then you
     understand. This is essentially what the MLS/CMW "B2-level"
     secure versions of commercial Unix do (as described in the TCSEC
     "orange book," from what I hear).
     
     As a stopgap measure I hope that someone writes a wrapper utility
     that allows me (as an admin) to "manually" start programs with a
     limited set of privileges. This would allow me to write scripts,
     started as 'root' that would strip all unnecessary privs, and exec
     some other program (such as 'dump' or 'sendmail' or 'imapd' etc).
     (Such a wrapper would also allow a developer or distribution
     maintainer to easily test what privs a particular package really
     needed to work).
     
     So, that's an overview of Linux crypto and security. There are
     just too many web resources on this subject to list them all, and
     there is obviously plenty of work being done on this all the
     time. The major constraint on any new security work is the need
     to support Unix and all the existing and portable Unix/Linux
     packages.
                        ____________________________
   
(?) Crypto Support ... What Book?

   From Dave Wreski on Mon, 09 Nov 1998 
   
   (From an old thread on the Linux-Admin List which I've been reading as
   part of the research for my book). 
   
   Hey Jim. I was just wondering what kind of book you are writing? Is
   this a linux-specific security book? 
   
   Dave 
   
     (!) Linux Systems Administration (for Macmillan Computer Publishing
     http://www.mcp.com).
     
     Since I consider security to permeate all aspects of systems
     administration, there will be quite a bit of that intertwined
     with my discussions of requirements analysis, recovery and
     capacity planning, maintenance and automation, etc.
                        ____________________________
   
(?) FS Security using Linux

   From AZ75 on Tue, 10 Nov 1998 
   
   Hello, My name is Jim Xxxxxx and I am a US citizen. I would like to
   have a copy of the crypto code sent to me for testing, if that's
   possible. I am at: .... 
   
     (!) I think you misunderstand part of this thread.
     
     I wrote an article (posted to the Linux-admin mailing list and
     copied to my editors at the Linux Gazette, and to a couple of
     involved parties and HOWTO authors). In that article I referred to
     the work of Mason Katz.
     
     Mason wrote one of the two implementations of IPSec for Linux.
     Please go to
     
     http://www.cs.arizona.edu/xkernel/hpcc-blue/linux.html
     
     ... and take particular note of this:
     
     You may request the export controlled sections by sending email to
     mjk@cs.arizona.edu
     
     ... at the bottom.
     
     Also, if you read the notes more thoroughly, you'll find a comment
     that:
     
     Although we are not currently tracking the IPSEC architecture, we
     believe that the released version can be brought up to date and
     extended to allow for more services. 
     
     ... which means that this implementation is probably out of sync
     with recent revisions to IPSec. That means that coding work would
     have to be done to make it interoperable with other
     implementations.
     
     I think you'd be far better off with the Linux FreeS/WAN
     implementation. In that case you'll be importing the code from the
     Netherlands. The stated goal of the Linux FreeS/WAN project is to
     provide a fully interoperable, standard implementation of IPSec.
     
     I still don't know what they're going to do about key management
     and Secure-DNS. I can't pretend to have sorted out the morass of
     competing key management specifications: Photuris, ISAKMP/Oakley,
     SKIP, IKE, etc. The Pluto utility with FreeS/WAN implements some
     sort of IKE with ISAKMP for part of the job (router-to-router
     mutual authentication?). The OpenBSD IPSec uses Photuris --- and I
     don't know of a Linux port of that. Presumably an interested party
     in some free country could port the OpenBSD Photuris to use the
     same interfaces to FreeS/WAN's KLIPS (kernel level IP security) as
     Pluto. My guess is that the two key management protocols could
     work concurrently (your FreeS/WAN host could conceivably
     establish SAs -- security associations -- with IKE hosts through
     Pluto and with Photuris hosts), although I don't know how each
     end would know which key management protocol to use.
     
     I came across one reference to an alleged free implementation of
     Sun's SKIP for Linux in an online back issue of UnixReview
     Magazine (now called Performance Computing). It made a passing
     reference with no URL.
     
     Further Yahoo! searches dug up Robert Muchsel's:
     
     http://www.tik.ee.ethz.ch/~skip
     
     ... which leads to a frames site (Yuck!). However, recent
     versions of Lynx can get one past that to a more useful page at:
     
     http://www.tik.ee.ethz.ch/~skip/UsersGuide.html
     
     I also guess that FreeBSD offers a SKIP enabled IPSec/IPv6
     implementation out of Japan through the KAME project at:
     
     http://www.kame.net
     
     Anyway, for now it appears that most of the key management will
     have to be done by hand, using shared secrets exchanged via PGP,
     GNU Privacy Guard, or over 'ssh' or 'psst'. (GPG is the GNU
     re-implementation of PGP, http://www.d.shuttle.de/isil/gnupg,
     which is moving along nicely; psst is the very beginnings of an
     independent GNU implementation of the ssh protocol IETF draft
     specification, at http://www.net.lut.ac.uk/psst.)
     
     So, Jim, there's plenty of crypto code freely available --- you
     just have to import it from various countries with greater degrees
     of "free speech" than our government currently recognizes here in
     the U.S.
     
     (as is my custom I've removed identifying personal info from your
     message --- since this is being copied to my editors at LG).
            ____________________________________________________
   
(?) relaying still not correct ...

   From joel williams on Thu, 12 Nov 1998 
   
   Dennis, 
   
   I have another computer on my 10.1.1.0 net. I rebooted my Linux box,
   and now it will not relay mail again. Any clues? 
   
   Joel 
   
     (!) Somehow it considers the Windows box to be "in" your domain ---
     while it considers the other 10* system to be offering it mail from
     "outside" of your domain.
     
     Could you take a piece of mail like this:
     
-------------------- Cut Here ----------------
To: jimd@mail.starshine.org
Subject: Testing

Testing
-------------------- Cut Here ----------------

     ... and pipe that into the command:
     
     sendmail -v -t -oi &> /tmp/test.results
     
     ... (or capture the output in a typescript or use cut&paste from
     your telnet/xterm window).
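     Concretely, the test might look like this. The file name is
     arbitrary, and the sendmail invocation is shown commented out
     since it only makes sense on the affected mail host:

```shell
# Build the test message described above:
cat > /tmp/testmsg <<'EOF'
To: jimd@mail.starshine.org
Subject: Testing

Testing
EOF

# On the affected box, feed it to sendmail, capturing verbose output:
# sendmail -v -t -oi < /tmp/testmsg > /tmp/test.results 2>&1
```

     The -t flag tells sendmail to read the recipients from the
     message headers, and -oi keeps a lone "." line from ending input
     prematurely.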
     
     I'm interested in where this (other) system is trying to relay the
     mail to, and what/who it is masquerading as.
     
     The following might work:
     
divert(0)dnl
VERSIONID(`Williams-consulting nullclient')

OSTYPE(`linux')

FEATURE(allmasquerade)
FEATURE(always_add_domain)
FEATURE(masquerade_envelope)
MASQUERADE_AS(`williams-consulting.com')

FEATURE(nullclient, `williams-consulting.com')

     ... put that onto the affected box (if it's Unix) and build the
     .cf file (using a command similar to the one we used on the Linux
     box --- finding the right directory is the trick). You could use
     the copy of sendmail on the Linux box to build the .cf files for
     the other system(s) --- just redirect the m4 output to another
     file and copy the file over using ftp/rcp (or whatever).
     
     ... Note: Change the OSTYPE argument as appropriate.
     
     If this is a Windows box running Netscape Communicator or something
     like that -- check your "Identity" on that system.
     
     We know that your system will currently relay for anyone that
     "claims" to be sending mail "from" your domain. So any client
     that masquerades as williams-consulting.com should work.
     
     I'll get an answer about the appropriate format for the
     /etc/mail/{relay_allow} file tonight. I'm pretty sure I have
     examples from Glynn Clements' posting to linux-admin in my
     archives.
            ____________________________________________________
   
(?) The state of UNIX in 1998

   From Jay Gerard on 17 Nov 1998 
   
   I am a sometime writer and CBT (Computer Based Training) developer. In
   1994 I wrote a CBT course, "UNIX for DOS Users." Time to upgrade the
   course and remove the DOS comparisons. 
   
   What I am not is a UNIX expert. To gather enough experience/knowledge
   to write the original course I installed Coherent -- a UNIX clone --
   on a PC, bought some books and asked a lot of questions. 
   
   What I would like to do now -- through this newsgroup -- is to ask
   some questions. I'm hoping that some people here will be willing to
   answer -- either through the group or via personal email to me. So,
   here are some questions. 
   
   1) In 1994, the Bourne shell was the most widely used. Is this still
   true? Are some shells more suitable for particular applications? For
   particular environments? (E.g. - - do universities tend to favor one
   shell?) 
   
     (!) The Bourne family of shells is still somewhat more common than
     csh and tcsh. On Linux the most popular shell, and the one used by
     default is bash (a Bourne/Korn clone from the FSF).
     
   (?) 2) Does Linux offer a variety of shells? Does it use a proprietary
   shell? (BTW, is it pronounced "LIE-nux" or "LINN-ux" or ???) 
   
     (!) Yes. Every shell that is commonly available for other forms
     of Unix is also available for Linux. Here's a small list:
     
     ash, bash, pdksh, ksh88, ksh9x, tcsh, zsh
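     If you have a Linux box handy, you can see which login shells it
     offers; by convention (though not a guarantee --- the contents
     vary from distribution to distribution) they are listed in
     /etc/shells:

```shell
# List the login shells this system knows about.
# The output depends entirely on what is installed.
cat /etc/shells
```

     The chsh utility consults this same file when you change your
     login shell.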
     
   (?) 3) What uses are there for UNIX on a personal (stand-alone) box? 
   
     (!) There are a number of games and applications that are available
     for Unix. In particular we find that Linux is spurring development
     of free and commercial productivity and personal apps. For example
     KDE and GNOME have numerous games and small apps. While KDE and
     GNOME are also portable to other forms of Unix, and much of the
     development was done on FreeBSD and other platforms --- they are
     strongly associated with Linux. (Fairly or not is a flamewar all
     its own).
     
     WordPerfect has been available for Linux for a few years --- and
     Corel has released new versions very recently. In addition, Corel
     is committed to releasing their entire office suite for Linux.
     Hopefully this will run under the FreeBSD/Linux compatibility
     libraries as well.
     
     There are more different applications suites available for
     Unix/Linux than there are for Windows (since MS has squeezed out
     almost all of the competition in that market). So we have our
     choice among StarOffice, Applixware, SIAG (free, "Scheme in a
     Grid"), LyX (free, LaTeX GUI/front end), and others.
     
     For more info on personal Linux applications I have three favorite
     URLs:
     
     Linas Vepstas' Home Page: http://linas.org/linux
     
     (note: this is NOT Linus Torvalds, the father of Linux --- he is
     another notable Linas)
     
     Christopher B. Browne: http://www.hex.net/~cbbrowne/linux.html
     
     Bill Latura's: Linux Applications and Utilities Page (v.11/12)
     http://www.xnet.com/~blatura/linapps.shtml
     
     I've been pointing people to these pages for some time ---
     sometimes I've referred to them from my monthly column in the Linux
     Gazette (as "the Answer Guy" --- a nomination that I didn't choose
     --- though I did volunteer to answer the questions).
     
     You can read the Linux Gazette (a free online webazine) at:
     http://www.linuxgazette.com.
     
     There are several other Linux webazines and periodicals including:
     
     Linux Weekly News: http://www.lwn.net
     
     Linux Focus: http://www.linuxfocus.org
     
     ext2: http://www.ext2.org
     
     (ext2 is the dominant native Linux filesystem --- reputed by some
     to be the fastest filesystem ever implemented on a PC, which
     would sound like brash posturing if I'd heard those claims from
     Linux users --- but those were from *non-Linux* analysts).
     
     Slashdot: http://www.slashdot.org
     
     (not strictly "Linux" but heavily oriented towards Linux and open
     source Unix systems).
     
     Freshmeat: http://www.freshmeat.net
     
     (not really a publication --- more of a daily announcements board
     for new Linux software and upgrades).
     
     Linux Today: http://www.linuxtoday.com
     
     ... and its sister publication:
     
     Linux World: http://www.linuxworld.com
     
     These last two are relative newcomers --- the brainchildren of
     Nicholas Petreley --- and they seem to be funded by IDG
     Publications.
     
   (?) 4) Are all GUI applications based on X-Windows? 
   
     (!) Not on Linux. There are SVGAlib programs, and there are at
     least two (relatively obscure) alternative GUIs.
     
   (?) 5) Can you point me to a (hopefully concise) source of info with
   respect to GUI integration in UNIX today? I'd prefer an Internet-based
   source; but a book is OK, too. 
   
     (!) The most active avenues of Linux/FreeBSD GUI development these
     days are:
     
     http://www.kde.org http://www.gnome.org http://www.gnustep.org
     
   (?) 6) Are "Open Look" and "Motif" still common? In widespread use? 
   
     (!) OpenLook essentially died. You can still find and use the
     toolkits and window manager but it was the loser in the first great
     Unix GUI war.
     
     Motif is nominally still the "standard" for commercial Unix ---
     however, GTK (the GIMP toolkit and widget set) is starting to
     take over among the freenix flavors. GTK is the underlying
     library set for the GIMP (a popular freenix image manipulation
     and graphics package) and for GNOME (the GNU Network Object Model
     Environment project --- which is in the early stages of
     development and will provide Unix users with a full suite of
     CORBA/GTK applications and utilities for their desktop
     environments).
     
     In the commercial world, CDE (built over Motif) is supposed to be
     the standard. However, most of the serious Unix users I know
     basically work around CDE rather than with or in it. In the
     freenix community, KDE is out and available --- and subject to
     some controversy (since it currently relies upon a set of
     libraries that's only partially "free" --- a bit of hairsplitting
     that concerns programmers, ISVs, and distribution integrators and
     vendors).
     
     KDE and CDE aren't really comparable. They serve different
     purposes. However, superficially they offer similar appearance
     (although these days most GUI's look alike anyway).
     
   (?) 7) What per cent of UNIX users/installations use a GUI? 
   
     (!) I have no idea. I gather that 70% or more of the Linux user
     base primarily uses a GUI.
     
     One glitch in any such statistic would be the ratio of "server"
     machines to workstations. Very few organizations use character cell
     terminals these days --- many use Windows systems as ethernet
     terminal emulators to talk to their Unix systems and mainframes.
     
     There is a chance that the Netwinders and similar (Linux/Java
     based) NC's will see significant deployment in banks, retail sites
     (like automotive parts and pizza counters, etc). This is due to
     their low cost, extremely small footprint, low energy consumption
     and heat dissipation, quiet operation and practically non-existent
     maintenance requirements.
     
   (?) 8) Are there installations which use both a GUI and the standard
   character-based interface? 
   
     (!) Yes. I use them at my office.
     
   (?) 9) What is your opinion as to the usefulness/practicality of a GUI
   in UNIX now. In the future? 
   
     (!) Who cares? Why so much focus on GUIs?
     
     I personally use text mode for most work. I mostly work in my text
     editor (which is also my mail and newsreader). I usually use Lynx
     to browse the web, because I'm usually interested in the content --
     the text.
     
     I usually keep one or two X sessions running on each of my systems
     (one running as me, another running under my wife's account). I
     switch out of them to do most of my work, and into them to use
     Netscape Navigator, xdvi and/or gv (TeX/DVI and PostScript
     previewers), and 'xfig' (a simple drawing program) when I need
     those.
     
     I use 'screen' (a terminal multiplexer utility), which allows me
     to detach my work from one terminal/virtual console and re-attach
     to it from any other. This lets me yank my editor/mail/news
     session into an 'xterm', or yank it from there and onto my
     laptop/terminal in the living room (it's a home office). That's
     how I watch CNN and TV when I want to.
     
     One of the reasons I adopted Linux is because I prefer text mode
     screens and keyboard driven programs to GUI's. It lets me work my
     way --- rather than trying to make me work "its" way.
     
     In answer to your question: you really need to do some research
     at the sites that I've listed. Linux and the freenixes are poised
     to completely wipe Microsoft from the desktops of the world in
     the next few years. The fact that every major magazine in the
     industry has recently been saying "Linux can't take over the
     desktop" is lending initial credibility to the idea.
     
     It was an absurd idea a year ago. But all the work on KDE, GNOME,
     GNUStep, the growing support by Caldera, Applix, Star Division, and
     the hints of interest by Compaq, Intel, and others clearly point to
     a Linux future on the desktop. (In this regard I must point out
     that FreeBSD is every bit as viable technically --- and that it
     will certainly gain a sliver of that marketshare as well, probably
     not nearly as much as it deserves --- Linux has more "mindshare").
     
     CNN had a three minute segment on Linux running every hour all
     weekend. See my report on Linux Today at:
     
     http://www.linuxtoday.com/stories/867.html
     
     ... for details.
     
     The problems with Unix were typically in the bickering and
     licensing between the major vendors (most of them hardware
     manufacturers whose primary interest was in "trapping" a segment of
     the market). Technically it has always been a pretty good choice.
     
     Another problem with Unix over a decade ago was the lack of power
     in the early micros. You could not support a credible Unix on
     anything less powerful than a 386 with about 16Mb of RAM and about
     300Mb of disk space. Once you get past that threshold (about 1990
     was when these systems saw significant consumer deployment) you saw
     reasonably rapid development of Linux and FreeBSD. (Linux was
     publicly available in late '91 to early '92 --- I've been using it
     since late '92 to early '93. 386BSD, by Mr. and Mrs. Jolitz was
     further along in development by that point --- and FreeBSD appeared
     around that time).
     
     In any event, the usefulness of the various GUIs under Unix is
     equal to that of those for any other platform. The only thing
     that isn't "all there" is the ability to readily share documents
     with MS Office applications. Anyone who thinks that this is not
     due to a deliberate effort on the part of Microsoft is
     delusional.
     
     MS executives and developers would have to be IDIOTS not to see the
     strategic importance of "one-way" document interoperability --- of
     the value of "locking in" their customers to "cross sell" their OS
     products. They would be remiss in their fiscal responsibilities if
     they'd failed to use that to the advantage of their shareholders.
     
     (Note: number one dilemma of corporate capitalism --- shareholders
     have priority over customers, number two is the inevitability of
     over-capacity and number three is the necessity for anti-trust
     regulation and government to moderate monopolies and cartels.
     Sorry, but over capacity and monopoly are systemic in our economy
     --- the rules predispose the trend towards them, eventually and
     inevitably).
     
   (?) 10) From the "probably-off-the-wall department": Is there a site
   where I can telnet in to actually practice using UNIX? 
   
     (!) If you have a PC (or a Mac), use Linux. Then install FreeBSD
     (or NetBSD for your Mac). There is no reason (excuse) to try to
     write a tutorial on Unix without getting some hands on experience.
     
     In answer to your question, there are numerous "freenet" and "pain"
     (Public access internet nodes) sites around. They tend not to
     advertise for obvious reasons.
     
   (?) Thanks for any help.
   Jay Gerard 
   
     (!) Good luck on your project.
            ____________________________________________________
   
(?) How Many Ways Can I Boot Thee: Let Me Count Them

   From Wilke Havinga on Tue, 17 Nov 1998 (from the L.U.S.T List)
   
   >I understand that Linux cannot be on the slave drive. 
   
     (!) You misunderstand. Linux can be installed on most combinations
     of devices. You can have the kernel on any drive where your loader
     can find it (for LILO that means anywhere that your BIOS can
     access; for LOADLIN.EXE --- a DOS program --- that means anywhere
     that DOS can access).
     
     You could put your kernel completely outside of any filesystem,
     laying it out at some arbitrary location on some hard disk. So long
     as you can get your loader code to find it --- you can load that
     kernel. (You could use the /sbin/lilo utility to prepare this
     particular set of LILO boot blocks and maps --- since it needs to
     find the kernel image and it's a linux program. However you could
     hand craft your own maps if you were really determined to have a
     kernel lying on the unused portion of track zero or on some part
     of your disk that was between or after the defined partitions).
     
     Once the kernel is loaded it looks for a root filesystem. For any
     given kernel there is a compiled-in default. This can be modified
     using the 'rdev' command (which performs a binary patch of the
     kernel image). It can also be overridden by supplying the kernel
     with a command line parameter (root=). There are a number of kernel
     command line parameters (all of the form: option=value) --- these
     can be passed to it via the LILO "prompt" or the /etc/lilo.conf
     append= directive, or on the LOADLIN command line (among others).
     
     Read the BootParam HOWTO and man page (section 7 of the man pages)
     for details about kernel parameters.
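     To make the root= and append= directives concrete, here is a
     hedged sketch of the relevant /etc/lilo.conf entries; the device
     names, label, and mem= value are illustrative assumptions, not
     taken from this letter:

```
# Illustrative /etc/lilo.conf fragment; device names and values here
# are assumptions for the sake of example.
boot=/dev/hda            # where LILO writes its boot blocks
prompt                   # allow parameters at the LILO: prompt
image=/boot/vmlinuz
    label=linux
    root=/dev/hda2       # overrides the kernel's compiled-in root default
    append="mem=64M"     # extra option=value parameters passed to the kernel
```

     Remember to re-run /sbin/lilo after any change so the boot maps
     are rebuilt.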
     
     You can boot a kernel directly from a floppy (just dd the kernel
     image to the raw floppy). You can also use LILO on a floppy. You
     can create a bootable DOS floppy with a copy of LOADLIN and a linux
     kernel on it (with an AUTOEXEC.BAT if you like). You can even use
     the SYSLINUX package (available as DOS and linux binaries). This
     modifies a (non-bootable) DOS formatted floppy to boot a Linux
     kernel (and is used by the Linux Router Project and Red Hat boot
     diskettes).
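     The "dd the kernel image to the raw floppy" step can be sketched
     as follows; to keep the example safe to run, a scratch file stands
     in for /dev/fd0 and a dummy file stands in for the kernel image:

```shell
FLOPPY=floppy.img                 # in real use: /dev/fd0
printf 'dummy-kernel' > zImage    # in real use: your compiled kernel image
# conv=sync pads the final partial block out to the 512-byte block size.
dd if=zImage of="$FLOPPY" bs=512 conv=sync 2>/dev/null
ls -l "$FLOPPY"
```

     (With a real kernel you might also run 'rdev' on the image first
     to set its default root device.)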
     
     It is also possible to boot Linux from some sorts of FlashROM and
     ROMdisk emulators and from other forms of ROM installation. You can
     even boot Linux across a network using a boot prom for those
     ethernet cards that support them (for example).
     
     Igel makes PC hardware with embedded versions of Linux for their
     line of X terminals, thin clients and
     "Ethermulation"/"Etherterminals" (these boot from flash):
     http://www.igelusa.com. Alternative boot methods and devices are
     also regularly discussed on the "Embedded Linux" mailing list at
     'http://www.waste.org/mail/?list=linux-embedded'
     
   (?)
   >Hmm... That's odd, because I have Linux on a slave HD right here on
   this
   >computer and it works fine. I'm certain it doesn't have trouble
   getting
   >at drives on the secondary controller, either. 
   
   Booting with LILO? Or Loadlin? 
   
   [prior partition discussion snipped] Don't forget, Linux needs a swap
   partition. 
   
   This is not entirely true, if you have enough RAM (like, >64MB will be
   enough for most people) you don't need one. It's only that RedHat
   requires you to have one (which I find pretty annoying sometimes
   because you can have only 4 partitions on a drive, especially on large
   drives). 
   
     (!) While technically you are correct that you don't need a swap
     partition, this is bad advice.
     
     You'll find that your performance suffers dramatically without one.
     Although I make a couple of 64M swap partitions available on my
     system (allowing Linux to load balance across a couple of spindles
     if it should ever need to), it typically uses about 30K of swap
     even when I have plenty of RAM free (most of which is used for
     file/cache buffering).
     
     Read the kernel list archives and search for the term "swap" and
     you'll find that the consensus among the kernel developers is that
     you need swap to get decent performance out of the current kernels.
     Some have even reported that using a 100 or 200K RAM disk with a swap
     file on it will dramatically improve the performance over using all
     of your memory as straight RAM.
     
     So, Red Hat's insistence may be irritating --- but it is not wholly
     without cause.
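     (Incidentally, if the four-primary-partition limit is what makes
     the swap requirement irritating, a swap file is a middle ground:
     slightly slower than a dedicated partition, but it needs no
     partition at all. A sketch, with an illustrative path and a
     deliberately tiny size:

```shell
# Create a file to use as swap; 1MB here for demonstration only ---
# a real swap file would be more like 32-128MB.
dd if=/dev/zero of=swapfile bs=1024 count=1024 2>/dev/null
# The next two steps need root, so they are shown commented out:
# mkswap swapfile     # write the swap signature to the file
# swapon swapfile     # enable it (add to /etc/fstab to make it permanent)
ls -l swapfile
```

     That keeps the primary partition table free for other uses.)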
     
     You are wrong about the number of permitted partitions per drive.
     You can have four primary partition entries. One of those can be an
     "extended" partition. That extended partition can have "lots" of
     partitions. Let's look at an example from 'antares', my decade old
     386DX33 with 32Mb of RAM and a full SCSI chain:
     
Disk /dev/hda: 16 heads, 38 sectors, 683 cylinders
Units = cylinders of 608 * 512 bytes

   Device Boot   Begin    Start      End   Blocks   Id  System
/dev/hda1   *        1        1      107    32509    4  DOS 16-bit <32M
/dev/hda2          108      108      684   175408   a5  BSD/386

     .... an old FreeBSD partition that I haven't used in a couple of
     years. This is the boot drive. I use LOADLIN to get into Linux.
     
Disk /dev/sda: 64 heads, 32 sectors, 1908 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot   Begin    Start      End   Blocks   Id  System
/dev/sda1   *        1        1       32    32098+  83  Linux native
/dev/sda2            5       32      102    72292+  82  Linux swap
/dev/sda3           14      102     1907  1847475    5  Extended
/dev/sda5           14      103      236   136521   83  Linux native
/dev/sda6           31      236      495   265041   83  Linux native
/dev/sda7           64      495     1248   771088+  83  Linux native
/dev/sda8         1184     1248     1907   674698+  83  Linux native

     Whoa nelly! I have 3 primary partitions: 1, 2 and 3 --- the third
     defines the extended partition. Therein I have 5, 6, 7, and 8 ---
     another four partitions on that same drive. I think I've gone up
     to 10 at least once --- though I don't know of a limit to these
     extensions.
     
Disk /dev/sdb: 64 heads, 32 sectors, 532 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot   Begin    Start      End   Blocks   Id  System
/dev/sdb1            1        1       17    17392   83  Linux native
/dev/sdb2           18       18      532   527360    5  Extended
/dev/sdb5           18       18      532   527344   83  Linux native

     Lookie! A disk with two primaries, one defining an extended
     partition that contains a single Linux fs.
     
Disk /dev/sdc: 64 heads, 32 sectors, 2063 cylinders
Units = cylinders of 2048 * 512 bytes

   Device Boot   Begin    Start      End   Blocks   Id  System
/dev/sdc1            1        1     2063  2112496   83  Linux native

     ... Oh. One that just has one partition on it.
     
     (The rest of this SCSI chain consists of a CD, a CDR, a 4mm DAT
     autochanger tape drive, and an old magneto optical drive).
     
   (?) So if you intend to run RedHat (which is probably the easiest to
   install) you need 2 partitions for Linux indeed. 
   
     (!) Yes. However, you can just put these in extended partitions
     (one primary partition is labeled as "the 'extended' partition" ---
     then all partitions defined within that are called "extended
     partitions" --- an irritating bit of terminology that serves to
     confuse).
     
   (?) Wilke Havinga 
   
     (!) I hope that helps.
            ____________________________________________________
   
(?) Programmer Fights with Subnets

   From Grant Murphy on Tue, 17 Nov 1998 
   
   I'm a numerical C programmer and have inherited the system admin job
   in a 'small' geophysical exploration company. We have a fine
   collection of lovingly maintained and often overhauled equipment
   ranging from SunOS4 machines to an NT box, handbuilt acquisition
   systems mounted in aircraft, dual real time differential GPS systems
   etc. etc. I know A LOT about a number of particular things in maths,
   geophysics, unix, world coord systems etc, but I am a babe in the
   woods about other things ... networking in particular. 
   
   The problem at hand ( & one that I have searched for FAQ's on &
   trolled comp.os.linux.networking for the REAL answer to ) is this: 
   
   We have two networks in our office, one is made up entirely of windows
   95 machines and office printers etc. The other was made up entirely of
   SunOS4 and Solaris machines with an A0 HP map plotter and a versatec
   plotter ( about the size and weight of a compacted VW bug ). The two
   networks intersect in a single linux box running a 1997 version of
   caldera linux, with two network cards, a dial out modem card for
   internet access, no keyboard, no monitor ( well, who needs them ) 
   
   The SunOS network now contains two windows machines used for
   processing data. One is Win95, the other WinNT workstation. 
   
   I **can't** get the two windows machines to see the shared drives and
   printers of the win95 machines on the other side of the linux box. 
   
   1) I have all win machines using TCP/IP with NetBeui disabled (lots of
   people seemed to recommend this) 
   
     (!) That's because NetBIOS/NetBEUI (the "native" Windows transport
     level networking protocols) aren't routable --- they only work
     within a LAN.
     
   (?) 2) I have samba on the linux box and can mount unix drives and see
   them on the network neighbourhood of the win95 box & winNT box on the
   unix network. 
   
     (!) What version of Samba is it? Have all the appropriate patches
     and service packs been applied to the Win '95 and NT boxes?
     
     That problem is probably related to the share "browse mastering"
     protocols used by SMB. There have been many problems with these
     browsing protocols. I don't know the details, but I've heard that
     the Samba team has done quite a bit of work to fix those problems.
     
   (?) 3) The network was split into two rings before I arrived under the
   rationale that the traffic of the two networks wouldn't interfere
   (some of the geophysical data traffic is pretty big - half gigabyte
   files etc) 
   
     (!) Isolating LAN segments is a classic and effective way to
     optimize bandwidth utilization. I shudder to think of the amount of
     money that's been unnecessarily and poorly spent on etherswitches
     for networks that would have benefited far more from simple
     segmentation and (in some cases) some server file tree replication.
     
   (?) 4) The linux box has two cards: eth0 with IP address 192.9.200.10
   and broadcasting to 192.9.200.* --- all unix boxes and win machines
   attached through that card have IP addresses 192.9.200.*; eth1 with
   IP address 192.168.1.10 and broadcasting to 192.168.1.* --- all
   office machines have addresses 192.168.1.* 
   
   5) I can ftp from the office network to the unix boxes alright. 
   
     (!) So, TCP routing works between the two.
     
   (?) I'm under a reasonable amount of pressure to make the network look
   easy, people want to access the HP A0 plotter from the office
   computers just like they access the office laser printer - Now that
   the processing guys have an NT box with word processing etc. they want
   to access the office laser printer. 
   
     (!) If the primary resource that is to be shared is the printer ---
     I'd connect the printer to the Linux box, and install Samba. Let it
     be the print server as well as the router between the two segments.
     
     Likewise for the plotter (if it can be driven by your Linux
     system; I'm not familiar with the device or its interfaces).
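     As a minimal sketch, a printer share in smb.conf might look like
     this; the share name, printer name, and spool path are assumptions
     for illustration:

```
# Illustrative smb.conf printer share; names and paths are assumptions.
[plotter]
   comment = HP A0 plotter on the Linux router
   printable = yes
   printer name = lp        # must match an entry in /etc/printcap
   path = /var/spool/samba  # world-writable spool directory
   public = yes
```

     The Windows machines on both segments could then print to it over
     TCP/IP like any other SMB printer.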
     
   (?) Owing to industry recession, the chances of getting an expert
   network guy in to solve it seem to be slim to bugger all. This is
   chewing up time that is better spent working on algorithms to do noise
   reduction of 256 dimensioned radiometric data, and improving field QC
   software. 
   
   If you have any answers to this conundrum they would be gratefully
   received & I am happy to return the favour with answers to any posers
   that you might have about numeric/scientific/geophysical/C language
   problems. 
   
     (!) Try installing the latest version of Samba on the Linux box
     (try the 2.0 beta that was announced last week). Hopefully it will
     be able to propagate those pesky browse/share broadcasts from each
     segment to the other.
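     For what it's worth, smb.conf also has parameters aimed at exactly
     this cross-subnet browsing problem. A hedged sketch follows --- the
     broadcast addresses are taken from the two subnets described in
     the letter, but check the smb.conf man page for your Samba version
     before relying on these:

```
# Illustrative [global] additions for propagating browse lists
# between two subnets joined by the Samba host.
[global]
   remote announce = 192.9.200.255 192.168.1.255
   remote browse sync = 192.9.200.255 192.168.1.255
   wins support = yes      # let the Linux box act as the WINS server
```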
     
   (?) (I wrote an ANSI C compiler for an early version of MINIX that was
   ported to both a transputer array and an ARM6 chipset machine - none
   of that involved networking though) 
   
     (!) Is there any Linux support for transputers? Are there modern
     transputers (PCI, even), or have modern processors obviated their
     utility?
     
   (?) Yours sincerely (& perplexed)
   Grant Murphy 
            ____________________________________________________
   
(?) Using A Dynamically Assigned Address from PPP Startup Script

   From D. Kim Croft on Tue, 17 Nov 1998 
   
   I am trying to set up a script that, when I connect to the internet,
   will write a little html file with a link to my IP address, to
   upload to my web account on my ISP. However my IP address is
   dynamically assigned, so I never know exactly what it is. In windows
   I can use netstat -rn to find it but, in Linux, when I run netstat
   -rn I only get my ?router?. Anyway, if you know of any way that I
   can find my IP address when I connect, it would be greatly
   appreciated. 
   
     (!) Let's assume that you are using the Linux pppd package to
     establish this connection. In that case the most obvious method
     would be to call your script from the '/etc/ppp/ip-up' script.
     Reading the 'pppd' man page we find a couple of references to this
     file, which is automagically called when the PPP session is
     established. ('/etc/ppp/ip-down' is called when the session is
     terminated).
     
     It's called with five parameters including:
     
     interface device speed your-IP their-IP
     
     ... and there's an option to provide an additional, admin-specified
     parameter which can be set from your options file.
     
     So you can write your script to take just the parameters you need
     (only the local IP address in this case) and call it with an entry
     in your ip-up script with a command like:
     
                /usr/local/bin/update-my-web-page  $4

     ... where 'update-my-web-page' is a shell, perl, awk, Python, TCL,
     or other script or program that opens a connection to your
     website's host and writes your page to it. (I'll assume that you
     have an 'rcp/rsh', Kerberized 'rsh', 'ssh/scp', C-Kermit, or
     'expect/ftp' connect and transfer script that can automate the
     file propagation process).
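     A minimal sketch of such a helper follows; the script name, output
     filename, and HTML are illustrative assumptions (only the
     positional-parameter convention comes from the pppd man page):

```shell
# Sketch of 'update-my-web-page': /etc/ppp/ip-up passes the local IP
# address as its 4th argument, so it would invoke this helper as:
#     /usr/local/bin/update-my-web-page $4
make_page() {
    ip="$1"
    cat > mypage.html <<EOF
<html><body>
<p>My machine is currently at <a href="http://$ip/">$ip</a></p>
</body></html>
EOF
}

make_page 10.1.2.3   # example address; ip-up supplies the real one
# ... then push mypage.html to the ISP with scp, rcp, or an ftp script.
```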
     
   (?) thankyou 
            ____________________________________________________
   
(?) More than 8 loopfs Mounts?

   From Philippe Thibault on Fri, 20 Nov 1998 
   
   I've set up an image easily enough and mounted it with the iso9660
   file system and assigned it to one of my loop devices. It works
   fine. What I was wondering was, can I add more than the eight loop
   devices in my dev directory, and how so? What I'm trying to do is
   share these CD images through SMB services to a group of Win 95
   machines. Is what I'm trying to do feasible or possible? 
   
     (!) Good question. You probably need to patch the kernel in
     addition to making the additional block device nodes. So my first
     stab is, look in:
     
     /usr/src/linux/drivers/block/loop.c
     
     There I find a #define around line 50 that looks like:
     
                #define MAX_LOOP 8

     .... (lucky guess, with filename completion to help).
     
     So, the obvious first experiment is to bump that up, recompile,
     make some additional loop* nodes under the /dev/ directory and try
     to use them.
     
     To make the additional nodes just use:
     
                for i in 8 9 10 11 12 13 14 15; do
                        mknod  /dev/loop$i b 7 $i; done

     I don't know if there are any interdependencies between the
     MAX_LOOP limit and any other kernel structures or variables.
     However, it's fairly unlikely (Ted Ts'o, the author of 'loop.c',
     hopefully would have commented on such a thing). It's easier to do
     the experiment than to fuss over the possibility.
     
     In any event I doubt you'd want to push that value much beyond 16
     or 32 (I don't know what the 'mount' maximums are --- and I don't
     feel like digging those up just now). However, doing a test with that set
     to 60 or 100 is still a pretty low-risk and inexpensive affair (on
     a non-production server, or over a weekend when you're sure you
     have a good backup and plenty of time).
     
     So, try that and let us know how it goes. (Ain't open source (tm)
     great!)
     
     Of course you might find that a couple of SCSI controllers and
     about 15 or 30 SCSI CD-ROM drives (mostly in external SCSI cases)
     could be built for about what you'd be spending in the 16 Gig of
     diskspace that you're devoting to this. (Especially if you can find
     a cache of old 2X CD drives for sale somewhere).
            ____________________________________________________
   
(?) Where to find the Multi Router Traffic Grapher

   From Brian Schau on Sun, 22 Nov 1998 
   
   Hello Jim, 
   
   You might have a point. I haven't even considered mrtg myself. Do you
   have a URL to mrtg? 
   
   Kind regards, Brian 
   
     (!) Freshmeat (http://www.freshmeat.net) is your friend. Its
     quickfinder rapidly leads me to:
     
     Multi Router Traffic Grapher Home Page:
     http://ee-staff.ethz.ch/~oetiker
     
     ... the canonical home page for MRTG.
            ____________________________________________________
   
(?) Support for the Microtek SlimScan Parallel Port Scanner

   From Alejandro Aguilar Sierra on Sun, 22 Nov 1998 
   
   Hello, 
   
   The SlimScan scanner from Microtek uses a kind of scsi through
   parallel port connection. Neither the 2.0.x nor 2.1.x kernels seem
   to have support for this device; at least I didn't find it in the
   kernel config in the parallel and scsi sections. There are drivers
   for parallel port ide and atapi devices but not for pseudo scsi. 
   
   Am I wrong (I hope) ? Any suggestion? 
   
     (!) You're probably not wrong. There probably isn't support for
     this, yet.
     
     Suggestion: Call Microtek. Ask if they have a driver. Then ask if
     they know of a driver from someone else. Then ask if they'd be
     willing to write a driver (point out that there are plenty of code
     examples for parallel port device drivers from which they are
     legally entitled to derive their own driver --- subject to the GPL,
     of course).
     
     If you're still making no headway --- consider asking for an RMA (a
     return merchandise authorization: assuming that you've only
     recently purchased this scanner). Then go get one that's supported.
     When a company gets enough of these (customers who purchased a
     product in good faith and found that it doesn't suit their needs
     due to a lack of Linux support), they often come to their senses
     and realize that they are hardware companies (that providing source
     code drivers and technical specifications removes the biggest
     constraint to their ability to sell their products).
     
   (?) Thanks, Alejandro 
   
     (!) You're welcome. Good Luck.
            ____________________________________________________
   
(?) RPM Dependencies: HOW?

   From Riccardo Donato on Sun, 22 Nov 1998 
   
   How can you install rpm packages that are written for redhat 4.0 or
   5.0? I tried to install them but for some of them I receive error
   messages (libraries which are not into the system). 
   
     (!) When asking questions in any public forum (mailing list,
     newsgroup, webazine or traditional magazine) if the question
     relates to any errors you are seeing ....
     
     INCLUDE THE TEXT OF THE ERROR MESSAGES!
     
     It's also a good idea to include the exact command line or sequence
     that gave the error. I can't tell if you were getting this from a
     shell prompt using the 'rpm' command or from some X Windows or
     curses front end to the RPM system.
     
     That said, I suspect that the RPM system is complaining about
     dependencies. That is to say that the package you are trying to
     install "depends" on another package (such as a library).
     
     The usual solution is to get the RPM file which provides those
     libraries or other resources, and install them first. Sometimes it
     can be a bit of a trick to figure out which RPM's you need to
     install and in what order. It would be nice if Red Hat Inc.
     provided better information on that (perhaps in the "info" page
     that can be extracted from any RPM file using the 'rpm -qpi'
     command). There's an 'rpm --whatprovides' switch --- but I have
     no idea what that does.
     
     Another trick, if you have a hybrid system (with some RPM's and
     some packages you've built and installed from "tarballs" or even
     through the Debian package system) is to try the installation with
     the "--nodeps" option to the 'rpm' command. However, this may not
     work very well, even if you have the requisite packages installed.
     It shouldn't be a problem with libraries --- but some other types
     of files might not be located in the "right places." You can
     usually solve that with judicious use of symlinks; but you need to
     know what the RPM package's programs are looking for and where.
     
     Without knowing the specific packages involved, I can't do more
     than generalize. Considering that there's a whole web site devoted
     to the RPM system (http://www.rpm.org) and a couple of mid-sized
     corporations behind it (Red Hat, http://www.redhat.com, and
     S.u.S.E., http://www.suse.de and http://www.suse.com) --- it would
     be silly for me to try to say more than that here.
            ____________________________________________________
   
(?) modutils question

   From M Carling on Sun, 22 Nov 1998 
   
   Jim, 
   
   The docs for 2.1.129 indicate that modutils-2.1.121 are prerequisite.
   But the README for modutils-2.1.121 indicates that it must be compiled
   under a 2.1.X kernel. Do I have a chicken-and-egg situation here? 
   
   M 
   
     (!) Shouldn't be that bad. You should be able to build a kernel
     with enough support (compiled in) to access your root fs device.
     (You already do, unless you were doing something fancy like running
     an 'initrd' (initial RAM disk)).
     
     Also the claim that it needs to be compiled under a 2.1 kernel
     seems very odd. I could see where it would need the 2.1.x kernel
     sources installed (so that it could find the proper header files
     --- which are symlinked from /usr/include to somewhere under
     /usr/src/linux; /usr/src/linux in turn is normally a symlink to
     .../linux-X.Y.ZZZ).
     
     I can't see where the compiler (totally user space) needs to have
     any special kernel support to do its job. I think you could even
     cross compile the kernels and modutils --- so I think the README is
     wrong (or being misinterpreted).
     
     (Note: having the kernel "installed" is not quite the same as
     running under it. Maybe that's what they mean).
     
     (Again, I didn't have a problem with this -- but I often compile
     kernels without loadable module support and I routinely compile my
     SCSI and ether card drivers statically into my kernel. There's
     often nothing else I really need loaded).
            ____________________________________________________
   
(?) libc5 and libc6

   From M Carling on Sat, 21 Nov 1998 
   
   Hi Jim, 
   
   I'm preparing to configure and compile 2.1.129. At the moment, I'm
   trying to bring up-to-date all the software on which it's dependent.
   The documentation ambiguously seems to suggest that one needs BOTH
   libc5 AND libc6. Is that right? Or is it either/or? 
   
   M 
   
     (!) The linux kernel is completely independent of your libc
     version. You can run 1.2.x, 2.0.x and 2.1.x kernels with libc4,
     libc5 and glibc (libc6). You can switch among kernels mostly with
     impunity and you can have all of these libc's on the system
     concurrently. (The dlopen stuff will resolve the problems according
     to how each binary was linked).
     
     The few gotchas in this:
     
     Really old kernels (1.2.x) used a different presentation of key
     nodes under /proc. Thus the procps utilities (like 'ps' and 'top')
     from that era would core dump when executed under a newer kernel
     (with an incompatible proc representation). I don't know if the
     newer procps suite will gracefully deal with the obsolete proc
     format or not. I should check that some time.
     
     The format of the utmp and wtmp files has changed between libc5 and
     glibc. This is completely unrelated to the kernel. However, it
     means that all utmp/wtmp using programs must be linked against one
     or the other library. Those won't co-exist gracefully.
     
     (I imagine you could isolate all your libc5/utmp/wtmp programs
     under a chroot or some silly thing --- but I doubt that's going to
     be useful in practice).
     
     There is a list of all of "Linux 2.1 Required Utility Program
     Upgrades" at LinuxHQ:
     
     http://www.linuxhq.com/pgmup21.html
     
     ... with convenient links to the tar.gz file for each of them. I
     have run 2.1.12x kernels without upgrading any of these and
     without any mishaps. Upgrading would probably eliminate some of
     the minor quirks and Oopses that I've seen --- and I'll get around
     to that when I have the time.
            ____________________________________________________
   
(?) Linux on Dell Systems

   From Mikhail Krichman on Fri, 20 Nov 1998 
   
   Dear Mr. Dennis, 
   
   Sorry for bothering you out of the blue, but you seem to be THE person
   to talk to regarding the problems I have. 
   
     (!) I wouldn't say I'm THE person. There are thousands of Linux
     users on the 'net that do the same sorts of support that I do. They
     just don't get all the glory of a monthly column in LG ;) .
     
   (?) I am thinking of buying a Dell computer system (350Mhz, Pentium II
   desktop). I intend to install Linux on it (to type my dissertation in
   LaTeX), but I also want to have Win98 and related software, just in
   case. In relation to this I have two burning questions: 
   
     (!) Maybe I could ask you a few questions on LaTeX. I'm writing my
     book (Linux Systems Administration) in that format because I love
     the extensibility, the cross references and labels, the indexing,
     and the ability to focus on structural markup rather than
     appearance (and to defer many elements of cosmetics to later).
     
     However, it is a pretty complex environment (more programming than
     composition) and I occasionally get into some tight spots. I'd
     love to have a LaTeX guru on tap. (Yes, I sometimes post to the
     comp.text.tex newsgroup; but sometimes I prefer the bandwidth of
     voice to the precision of e-mail/news text).
     
   (?) 1) My friends warned me that Dell (just as any other brand name 
   
   computer) may have some proprietary features of the design, which
   would prevent Linux from functioning properly. Have you had any
   related problems reported or dealt with? 
   
     (!) Actually, Dell owes a tremendous degree of its popularity to
     the fact that they usually eschew proprietary features and
     traditionally have produced very compatible systems with consistent
     quality.
     
     They might not always be the "hottest, coolest, fastest, and
     latest" --- but a pallet load of Dells will all work the same way,
     probably won't require any special vendor drivers and patches, and
     won't cost as much as the first tier IBM's and Compaq's (who can
     afford to devote that extra margin on research and development of
     cool, fast, late-breaking, bleeding edge and proprietary features).
     
     Many businesses have standardized on Dell for this reason. Some of
     these have pallets of these systems drop shipped to them (hundreds
     at a time in some cases). They want the systems they order next
     month to work just like the ones they deployed last month ---
     because having your IS and help desk staff trying to sort out those
     new "features" can rapidly cost more than the systems themselves.
     
     So, Dell traditionally was noted for its lack of proprietary
     frills. However, they've now been the "wunderkind" of the stock
     market for about the last year. This may spur them to take on the
     very same "bad attitudes" that provided them with the opportunity
     to overtake IBM and Compaq in the marketplace.
     
     I should reveal some of my biases and involvement with this issue:
     
     I wrote an open letter to Dell(*) to lobby for customer choice in
     the bundled software. This was specifically to allow Linux and
     FreeBSD users to order systems without purchasing software that we
     don't want and will never use.
     
     (*) Published in the Linux Weekly News
     http://lwn.net/lwn/980514/dell.html
     
     They'd initially claimed that there was "no customer demand for
     this" (which was an offensive lie).
     
     It was later revealed that they had been pre-installing Linux on
     systems shipped to some select corporate customers in Europe (read:
     BIG contracts that DEMANDED it) for about a year.
     
     Michael Dell has recently commented on the issue (though not in
     response to me, personally) and characterized the demand as "vocal"
     but not necessarily from a large market segment.
     
     I responded to that as well.
     (http://www.lwn.net/1998/1112/backpage.phtml).
     
     So, obviously I'm biased. More importantly, I've pointed to
     alternatives. There are a large number of hardware vendors that
     will respond to their customers' needs.
     
     You can find a list of vendors who will pre-install Linux at:
     http://www.linux.org/vendors/systems.html
     
     Naturally these are small companies that "nobody" has ever heard of.
     However, Dell was also an obscure company as little as five or six
     years ago. So, there's a real chance that one of these vendors will
     become the next Dell.
     
     I think that Dell will soon "see the light." Although I've lobbied
     for it and think it would be best for the Linux community as a
     whole, I have mixed feelings from another tack. I'd really rather
     see one of the "little guys" (from the Linux vendors list for
     example) grow into a new powerhouse on Wall Street.
     
     (My superficial impression is that VA Research has the best current
     head start on this market. However, VA Research focuses entirely on
     PC's --- and so far refuses to consider Alpha, PowerPC, StrongARM,
     or other platforms that represent some interesting new options for
     the Linux user. There's a part of me that is getting REALLY tired
     of PC's. Linux gives us the choice --- all of the core software
     that most of us use for most of our work is portable and has already
     been ported to almost a dozen architectures. WE DON'T HAVE TO TAKE
     IT ANY MORE!).
     
   (?) 2) I really would like to have a DVD-ROM on my machine (III 
   
   generation, but I don't know which brand they are offering). Are there
   DVD-drivers supported by Linux, or, alternatively, will the CD-ROM
   drivers available with Linux make the DVD-ROM work at least as a
   CD-ROM? 
   
     (!) Quite by chance I noticed that PenguinComputing
     (http://www.penguincomputing.com --- founded by my friend and
     fellow SVLUG member, Sam Ockman) now offers DVD Drives on his
     systems. (*)
     
     * (http://www.penguincomputing.com/dvd-cd.html)
     
     I note that there isn't currently any available software to view
     DVD movies under Linux. However, there's apparently no problem
     using these drives to read CD discs, including CD-R and CD-RW
     media.
     
     ... He also offers those cool LCDProc case displays that were all
     the rage at SlashDot (http://www.slashdot.org) earlier this year.
     These are little backlit LCD panels that you can install in place
     of a couple of 5.25" drive bay covers in any normal PC case.
     You can drive these panels to provide various types of process
     status displays.
     
     Anyways, you might want to consider getting the whole system from
     him. (Editorial disclaimer: I did mention that he's a friend of
     mine, didn't I? I'm not, however, involved in any business with
     Sam, nor with VA Research --- which is also operated by friends and
     acquaintances and where Sam used to work, in fact).
     
   (?) Sincerely, Mikhail KRichman 
   
     (!) Hope this all helped.
            ____________________________________________________
   
(?) Remote Login as 'root'

   From Crown Magnetics, Inc on Fri, 20 Nov 1998 
   
   How can I find out how to make it possible on a Linux system to login
   as root at a location other than the console?
   
   (I'm used to Solaris Intel and there it's in /etc/default/login) but
   I'm not sure how to do this in Linux . . . 
   
   Thanks - Sheldon 
   
     (!) Most UNIX systems refuse to allow remote users (telnet) to
     login directly as root. This is intended to require that you login
     under your normal account and 'su' to 'root' as necessary.
     
     Overall I think this is an excellent policy to enforce. Actually I
     think it's still far too liberal. You really should consider
     installing 'ssh', STEL, or a similar secure and encrypted remote
     access system.
     
     If you really insist on being able to do this via 'telnet' or
     'rlogin' then you'll have to look in your man pages for the
     'telnetd', 'login' and 'in.rlogind' (or equivalent) programs. I'm
     not saying this to be churlish --- there are different suites of
     these utilities that are included with different distributions.
     
     Some distributions use the "Shadow Suite" (originally by J. Haugh
     III?). There is a file called '/etc/login.defs' (with a
     corresponding man page: login.defs(5)). It has a CONSOLE
     directive/option; read about it. Red Hat includes the PAM suite of
     these utilities. It's possible to remove the 'securetty' check from
     the specific PAM service configuration files by editing the files
     under the /etc/pam.d/ directory (more recent versions) or the one
     /etc/pam.conf file (obsolete).
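
     As a purely illustrative sketch of that PAM edit: the file
     contents, module names, and paths below are examples I've made up
     for demonstration, not a transcript of any particular
     distribution's configuration. The demo works on a scratch copy
     rather than a live /etc/pam.d/login:

```shell
# Create a scratch copy of an illustrative Red Hat-style PAM
# configuration for the 'login' service (contents are a sketch):
cat > /tmp/login.pam <<'EOF'
auth  required  pam_securetty.so
auth  required  pam_pwdb.so
EOF

# Commenting out the pam_securetty line is what removes the
# console-only check, permitting direct remote root logins:
sed -i 's/^auth  required  pam_securetty.so/#&/' /tmp/login.pam
grep securetty /tmp/login.pam
```

     On a real system you'd make the equivalent change (carefully, and
     with a backup) to the actual service file under /etc/pam.d/.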
     
     In some cases you may have to edit your /etc/inetd.conf file to add
     or remove options from the 'in.*' services listed therein. For
     example you have to add a -h to the in.rlogind entry if you want to
     force that command to respect a '.rhosts' file for the 'root' user.
     That man page notes that these flags are not used if PAM is enabled
     --- and directs you to use the /etc/pam.d/ configuration files
     instead.
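
     For reference, an inetd.conf entry of the sort described above
     might look like the following; the daemon paths and field layout
     are illustrative and vary by distribution:

```
# /etc/inetd.conf -- illustrative rlogin entry; the added -h flag
# tells in.rlogind to honor a .rhosts file for the 'root' user
# (and is ignored when PAM is in use, as noted above):
login  stream  tcp  nowait  root  /usr/sbin/tcpd  in.rlogind -h
```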
     
     Those couple of cases should handle the vast majority of Linux
     distributions. I realize that my answer is basically RTFM --- but I
     hope I've directed you to the appropriate FM's to R.
     
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                                Basic Emacs
                                      
                              By Paul Anderson
     _________________________________________________________________
   
   Emacs is, by nature, a very difficult program to use. Few people can
   even figure out how to exit it, let alone use it. I won't cover
   configuring emacs, as that is a whole art unto itself, one which I
   have yet to master.
   
   You probably already have emacs installed; I'll assume you do. At the
   command prompt, type:
   
emacs

   Emacs will start up with a scratch buffer, which isn't really meant
   for anything other than scratch notes. So, we must bring emacs up with
   a filename on the command line. Before we do that, we must exit emacs.
   Hit C-x C-c (hold down control, then press x, then press c), and it'll
   exit. Now, let's bring it up with a filename:
   
emacs bork.txt

   The screen will look something like this:
   
Buffers Files Tools Edit Search Mule Help






















----:---F1  bork.txt          (Text)--L1--All----------------------------------
-
(New file)

   Now, let's look at the bottom status line. It displays the filename
   we're working on, tells us that it's using Text mode (more on emacs
   modes later in this doc), that we're on line 1, and that all of the
   file is displayed. As an example of what it will display while
   editing a file with information in it, here's what's on the status
   bar on my screen:

----:**-F1  emacs.html        (HTML)--L59--70%---------------------------------
-

   The two asterisks show that the file has been changed since I last
   saved, I'm editing emacs.html, emacs is using its HTML mode, I'm on
   line 59 and 70% of the file is displayed on the screen. Now, type
   some text in. You'll notice the asterisks and line number change.
   Now, let's save your masterpiece! Hit C-x C-s (that's hold down
   control, press x then s), and at the bottom it will say:
   
Wrote /home/paul/bork.txt

   You've just saved your work! Let's exit emacs and bring it back up
   with our text file, and you can see for certain that the file has been
   saved. That covers the basics you need to get around with emacs, now
   on to....
     _________________________________________________________________
   
                                 Special Modes
                                       
   Emacs has a built-in LISP interpreter, making it so that emacs can be
   programmed to do various tasks. This allows it to handle HTML, SGML,
   shell scripts, C code, texinfo source, TeX source, etc. more
   appropriately. The classic thing to do with programmable calculators
   has always been to write games for them - guess what one of the
   classic things to do with a programmable text editor like emacs is.
   Emacs has a LISP-based version of the classic pseudo-AI program,
   Eliza. In this case, it's designed to act as a psychoanalyst. Now this
   part can get a bit tricky, as the official key used to run these modes
   is named 'meta'. PCs don't have a true-blue meta key, so it's often
   mapped to one of the alt keys or a control key. Hit M-x, trying first
   the left alt, then the right alt, then the same for the control keys;
   you'll know you've hit the right one when the bottom line displays
   M-x with the cursor beside it. Now, type doctor and hit enter. The
   following text will appear on your screen:
   
I am the psychotherapist.  Please, describe your problems.  Each time
you are finished talking, type RET twice.

   Go ahead, chatter with doc for a bit. It can be entertaining...
   
   Back so soon? Well, it does get a wee bit boring after a while... Now
   that you're back, we're gonna write some C code to show the benefit of
   using emacs. I want you to bring up emacs, and edit ~/.emacs
   
   Put the following in it:
   
     (add-hook 'c-mode-common-hook
               '(lambda () (c-toggle-auto-state 1)))

   This may, at first glance, look like gibberish. It's actually LISP
   code; on seeing it, you'll understand why some derisively state that
   LISP really stands for Lots of Irritating Superfluous Parentheses.
   Fortunately, you don't need to know LISP right now - though you will
   have to learn it to do much configuring with emacs. Save the file,
   and start emacs editing a file named foo.c
   
   Type the following:
   
#include <stdio.h>

main(){printf("\nHello.\n");}

   Doesn't look quite like what's printed here, does it? Notice how
   emacs automagically indents the code properly and indicates to you
   that the braces are matched? If you don't program in C, you won't
   realize just how neat this is. Believe me, if you do much coding,
   it's a godsend!
   
   Emacs has similar modes for HTML, SGML, even plaintext. It can read
   e-mail, usenet news and browse the web. Emacs includes everything,
   including the kitchen sink. Browse the docs, and use it, and with time
   you will begin to use emacs to its full capacity.
   
   May the source be with you,
   --Paul Anderson, paul@geeky1.ebtech.net
     _________________________________________________________________
   
                      Copyright  1998, Paul Anderson
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
               Creating A Linux Certification Program, Part 3
                                      
                                By Dan York
     _________________________________________________________________
   
   The Linux certification saga continues. In my October article, I
   outlined why I thought Linux needs a certification program and what I
   thought the major characteristics of such a program should be. In my
   November article, I described what efforts were already underway
   toward Linux certification, provided pointers to resources on the Web,
   and explained how people could become further involved. With this
   article, I would like to relay the current status of our discussions,
   and provide additional pointers to information and resources.
   
   Specific topics in this article are:
   
     * Status of "linux-cert" discussion list and web archive of
       discussions
     * NEW LIST - "linux-cert-announce"
     * Linux Training Alliance
     * Points of consensus that have emerged from discussions
     * Working groups close to being formed
       
   If you have any questions about this article or the other articles,
   please feel free to contact me by email at dyork@Lodestar2.com or
   visit my list of certification pointers at
   http://www.linuxtraining.org/cert/resources.html
   ______________________________________________________________________
   
Status of "linux-cert" discussion list

   In last month's article, I mentioned a "linux-cert" mailing list that
   was established to host further discussions on creating a Linux
   certification program. That list is operational and has had a strong
   volume throughout the last month. There has truly been too much
   discussion to adequately summarize, although the points of consensus
   mentioned below should give you a flavor of the list. People have been
   contributing from all around the world and it has been great to be a
   part of it all!
   
   If you would like to subscribe, send a message to:
          majordomo@linuxcare.com
      with the message:
          subscribe linux-cert
   Messages to the discussion list are sent to "linux-cert@linuxcare.com"
   
   The list is intended for people who *want* to build a certification
   program.  This is not another place to discuss whether or not a Linux
   certification program *should* exist... subscribers to the list agree
   that, yes, we want a Linux certification program - now let's discuss
   how best to build one.
   
   We now have two sites that are hosting web-based archives of the
   mailing list where you can view what has been discussed on the
   "linux-cert" list. Dave Sifry at Linuxcare set up our primary archive
   at his site. You can see every message from the beginning of the list
   at:
   
   http://www.linuxcare.com/linux-cert/archive/
   
   Or you can view just November's postings at:
   
   http://www.linuxcare.com/linux-cert/archive/9811/
   
   Bruce Dawson also set up a second site to see the messages (albeit
   over a slower connection) at:
   
   http://linux.codemeta.com/archives/linuxcert_archive
   
   Thanks are due to both Dave and Bruce for setting these archives up.
   
   Please visit the archives, see what we're up to, and join in our
   efforts.
   ______________________________________________________________________
   
New List - "linux-cert-announce"

   After we set up the "linux-cert" list, I had several people contact me
   and say that they were interested in staying up on what was going on
   with Linux certification, but didn't want to subscribe to a
   high-volume mailing list. To address this concern, we have now
   established a second list, "linux-cert-announce", which will be a very
   low volume list (probably only a few postings per month). We will only
   send occasional status reports and announcements to this "announce"
   list. It is a moderated list with a limited number of possible
   senders, so there will be no extra traffic or spam.
   
   If you would like to subscribe to this list, send a message to:
          majordomo@linuxcare.com
   with the message:
          subscribe linux-cert-announce
   
   in the message body. Thanks again to Dave Sifry at Linuxcare for
   setting up this second list.
   
   Note that if you subscribe to "linux-cert", you do not need to also
   subscribe to "linux-cert-announce". Any message sent to
   "linux-cert-announce" will automagically be sent to the "linux-cert"
   mailing list.
   
   So... subscribe to "linux-cert" if you want to be involved with the
   ongoing discussions and receive a strong volume of email, subscribe to
   "linux-cert-announce" if you only want to get occasional updates on
   the current status of certification discussions and plans.
   ______________________________________________________________________
   
Linux Training Alliance

   In order to promote the teaching of classes in Linux, I am organizing
   an alliance of training centers who either are currently or are
   planning to teach Linux classes. We now have a web site located at:
   
          http://www.linuxtraining.org/
   
   The goals of the organization and Web site include:
   
     * to provide a central place on the Internet where potential
       students can learn about available Linux training resources
     * to publicize the classes of those training centers currently
       offering Linux training
     * to be a resource for other training centers that want to start
       teaching Linux classes
     * to promote the ongoing efforts to create a Linux certification
       program
     * to prove to courseware publishers that if they create Linux
       courseware there would be centers who would potentially purchase
       their materials
     * to provide another way to combat the argument made against Linux
       that "there is no support"
       
   If you are interested in Linux training, please visit the site and let
   me know what you think (it's pretty basic so far).
   
   If you are affiliated with a training center (loosely defined as a
   corporate training center, college, university or basically anyone
   else currently teaching Linux) and would like to be listed on the site
   (and join the LTA), please contact me at dyork@Lodestar2.com.
   
   If you are a freelance/contract instructor who would be available to
   teach classes in Linux, or if you have developed courseware in Linux
   that would be available to other training centers, please contact me
   as I would like to publicize your contact information as well.
   ______________________________________________________________________
   
Points of Consensus from the "linux-cert" discussion list

   Our discussion on the "linux-cert" mailing list has been quite
   involved and detailed with numerous points being debated quite
   intensely at times (check out the archive mentioned above). In recent
   days, I have asked the list to approve a number of "Consensus Points"
   that I have summarized from the ongoing discussions. Realizing that we
   will not always be able to reach consensus on every issue, we are
   working out a method of voting. In the meantime, I have been trying to
   collect the points on which we do all agree. The process is continuing
   as this article is being written, but so far the following points have
   been agreed upon:
   
     * The cost of attaining Linux certification shall be as low as
       possible. Costs of exams shall be targeted at only that needed to
       cover delivery of the exam, with perhaps a slight portion helping
       to offset development of the exam.
     * Whatever mechanism we develop for delivering Linux certification
       must be global in scale. People in any nation must be able to take
       exams toward certification.
     * The Linux certification program will consist of multiple levels.
       For instance, after perhaps 1 or 2 exams, someone becomes a "Linux
       Certified Professional". After 2 or 3 more, one becomes a "Linux
       Certified Administrator", etc. (Note that we have NOT agreed upon
       names.)
       
   Additionally, the following points appear headed toward consensus (but
   have not, as of 11/25/98, been approved by the group):
   
     * The Linux certification program will employ standardized
       multiple-choice exams for at least the entry and perhaps middle
       certification levels. The highest certification level will involve
       either a hands-on or oral exam of the candidate. The exact
       mechanism for the upper level test will be determined by a working
       group.
     * Linux certification exams will initially be developed in the
       English language. Exams in other languages will be made available
       as soon as possible depending upon financial and conversion
       support.
     * The core Linux certification program will be distribution-neutral.
       Distribution differences will be addressed through a required
       distribution-specific exam or other mechanism developed by a
       working group.
       
   We did not reach consensus on another point, and there are a number of
   other items which we cannot yet agree upon.
   
   If you are interested in being part of this process, please join the
   "linux-cert" mailing list mentioned above and visit the web archives
   to see what has already been discussed.
   ______________________________________________________________________
   
Working groups close to formation

   In the process of debating these consensus points, several
   participants have suggested we form smaller "working groups" to refine
   specific subjects and report back to the larger group. It looks at
   this point that at least one group will be launched to develop some
   proposals for naming conventions (e.g. "Linux Certified
   Professional"? "Linux Certified Engineer"? etc.) and also to explore
   some possible
   options for the non-computer-based test for the highest level of
   certification. Other groups will also be launched as our efforts
   continue.
   
   If you are interested in being involved with this working group,
   please join the "linux-cert" mailing list mentioned above.
   ______________________________________________________________________
   
Final Thoughts

   These past few weeks on the mailing list have been quite interesting.
   The global scale of this project has brought in a wide variety of
   contributors and made for interesting discussions. It's been a great
   group of people to work with and I look forward to our evolving
   discussions and plans.
   
   Along the way, we also discovered another group coordinated by Evan
   Leibovitch from the Canadian Linux Users' Exchange (CLUE) that had
   been discussing Linux certification since earlier this year. Evan and
   I have now been working together to combine the expertise from both
   groups and it has been a great experience - look for more exciting
   news and opportunities to come soon!
   
   Please join us on the list(s) and let's make this happen!
   
   Dan York is a technical instructor and the training manager for a
   technology training company located in central New Hampshire. He has
   been working with the Internet and UNIX systems for 13 years. While
   his passion is with Linux, he has also spent the past two-and-a-half
   years working with Windows NT. He is both a Microsoft Certified System
   Engineer and Microsoft Certified Trainer and has also written a book
   for QUE on one of the MCSE certification exams. He is anxiously
   awaiting the day when he can start teaching Linux certification
   classes. He can be contacted electronically at dyork@Lodestar2.com.
   
                  Previous ``Linux Certification'' Columns
                                      
   Linux Certification Part #1, September 1998
   Linux Certification Part #2, October 1998
     _________________________________________________________________
   
                         Copyright  1998, Dan York
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
   A Linux Journal Preview: This article will appear in the January 1999
   issue of Linux Journal.
     _________________________________________________________________
   
                        1998 Editor's Choice Awards
                                      
                           By Marjorie Richardson
     _________________________________________________________________
   
   When the LJ staff decided to have Editor's Choice Awards this year in
   addition to the Readers' Choice, I agreed without truly realizing how
   difficult it would be to make decisions. So many fine products that
   support Linux are available today, and the number grows daily. This
   has indeed been a good year for Linux users, beginning with the
   announcement that Netscape would become open source and proceeding
   through the announcements of support for Linux by all the major
   database companies.
     _________________________________________________________________
   
   [INLINE]
   
   
   
  Product of the Year--Netscape Communicator
  
   
   
   I must admit this one wasn't a hard decision. It is my belief that
   Netscape's announcement that Communicator would be open source started
   it all. This announcement galvanized the world to find out about the
   Open Source movement and the Linux operating system that was
   responsible for its creation. Linux needed a big company in its corner
   in order for the word to spread, and Netscape provided just the
   initiative that was needed.
     _________________________________________________________________
   
  Most Promising Software Newcomers--GNOME and KDE
  
   This was probably the most difficult decision, so it ended in a tie.
   So many new products are available for Linux this year; finally, the
   flood of software applications we have all been waiting for is
   happening. However, the one thing everyone has always said Linux needs
   to become competitive with the commercial operating systems is a
   user-friendly desktop--both GNOME and KDE are filling this need.
     _________________________________________________________________
   
   [INLINE]
   
   
   
  Best New Gadget--Schlumberger Smart Card
  
   
   
   
   
   While I was given some interesting suggestions for this one, I never
   had any doubt that the Smart Card was the proper choice. A credit card
   with a Linux CPU on it is just too extraordinary. The computer chip
   embedded in the card stores not only mundane information about the
   card holder, but also biometric information that can be used for
   identification--talk about great security! The suggestion most people
   gave me was the PalmPilot, which is indeed a cool product, but even
   though Linux runs on it, the port was done by programmers outside
   3Com. According to Mr. Bob Ingols, a 3Com staff member, 3Com does not
   support Linux and does not plan to.
     _________________________________________________________________
   
   [INLINE]
   
   
   
  Best New Hardware--Corel NetWinder
  
   
   
   
   
   Corel Computer was the first company to declare Linux as its operating
   system of choice and sell computers with Linux pre-installed. With the
   continuing growth of Internet popularity, the network computer's day
   has come and the NetWinder is one of the best. It is small, powerful
   and easily configured. Best of all, it comes with Linux. Debian's
   recent port to the ARM architecture means that it too will run on the
   NetWinder. A close second was the Cobalt Qube Microserver--not only is
   it a great little server, it's cute too.
     _________________________________________________________________
   
   [INLINE]
   
   
   
  Best New Application--Informix
  
   Another tough one. My initial choice was the GIMP, but it's been
   around for some time (my first thoughts always seem to be free
   software). At any rate, a port of a major database to Linux has long
   been anticipated, and Informix made the breakthrough with other
   database companies following suit. With support from Informix, Linux
   can now enter the business ``big leagues''. A close second, in my
   mind, is Corel's WordPerfect 8 for Linux for the same reason--to be
   accepted in the workplace, Linux needs this product.
     _________________________________________________________________
   
   [INLINE]
   
   
   
  Best New Book--Samba: Integrating UNIX and Windows
  
   
   
   
   
   Some might call ``foul'' on this one, because it is published by SSC.
   However, this award is for the book and the author, John Blair, not
   for the publisher. Samba: Integrating UNIX and Windows was needed and
   its popularity has proved it. John has written a comprehensive book of
   interest to all who are running multi-OS shops. The book has been
   endorsed by the Samba Team, who has gone so far as to make John a
   member. If the award had been for ``best all-around book on Linux'', I
   would have given it to the ever-popular (with good reason) Running
   Linux by Matt Welsh, published by O'Reilly & Associates.
     _________________________________________________________________
   
  Best Business Solution--Linux Print System at Cisco
  
   In our October issue, we had a great article called ``Linux Print
   System at Cisco Systems, Inc.'' by Damian Ivereigh. In it, Damian
   described how Cisco was using Linux, Samba and Netatalk to manage
   approximately 1,600 printers worldwide in mission-critical
   environments. He also described how he did it and supplied the source
   code he used, so that others could also benefit from this solution--a
   wonderful way to contribute to the Linux community.
     _________________________________________________________________
   
  Most Desired Port--QuarkXPress
  
   Linux Journal uses Linux as its operating system of choice on all but
   one lone machine. For layout, we must have an MS Windows 95 machine in
   order to run QuarkXPress. Each month we hold our breath during the
   layout period hoping that when Windows crashes (it always does), it
   won't be at a critical juncture. Crashing for no apparent reason
   creates extra work for Lydia Kinata, our layout artist, and much
   stress for all of us each month. We are more than ready to be rid of
   this albatross and have a total Linux shop. Next, like everyone else,
   we'd like Adobe to port all its products to Linux.
     _________________________________________________________________
   
                   Copyright  1998, Marjorie Richardson
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
   [ TABLE OF CONTENTS ] [ FRONT PAGE ] Back Next 
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
   A Linux Journal Review: This article appeared first in the December
   1998 issue of Linux Journal.
     _________________________________________________________________
   
                   Product Review: Happy Hacking Keyboard
                                      
                              By Jeremy Dinsel
     _________________________________________________________________
   
     * Manufacturer: PFU America Inc.
     * E-mail: hhkb-support@pfuca.com
     * URL: http://www.pfuca.com/
     * Price: $139 US with one cable, $30 for extra cable
     * Reviewer: Jeremy Dinsel
       
   The Happy Hacking Keyboard is a cute and fuzzy streamlined keyboard
   designed specifically with programmers in mind. While not a single bit
   of fuzz is actually on the keyboard, its size makes it cute, if not
   disorienting, to people used to the standard IBM PC keyboard.
   
                                  [INLINE]
                                      
   According to PFU America, the keyboard's design makes it easier for
   programmers to reach the keys they want quickly and efficiently. They
   claim having fewer keys on the keyboard increases efficiency by
   preventing users from overextending their fingers on certain
   keystrokes.
   
  Installation
  
   The Happy Hacking Keyboard arrived in a tiny box shortly after I
   agreed to do a review of the product. Inside were the keyboard and
   three cables (for a PS/2, Macintosh and Sun computer) along with the
   usual manual and warranty information.
   
   PFU America recently changed the package, and lowered the price. The
   Happy Hacking Keyboard now comes with only one cable (of the
   customer's choice), but additional cables are available for $35.00
   each. The cables are expensive because they are handmade by the people
   at PFU America.
   
   The manual was fairly straightforward--after all, almost everyone
   knows how to hook up a keyboard. However, with the many cables that
   accompanied the keyboard, it was comforting to know that documentation
   was available should it be needed.
   
   After the computer was powered down, I said goodbye to my 101 Enhanced
   keyboard and hello to blissful days of Happy Hacking. Or so I
   thought--I had to grab a PS/2 to AT keyboard adapter first.
   
  Life is a Series of Adjustments
  
   The keyboard is streamlined, containing only 60 keys. A function key
   is included that can be used in combination with other keys; as a
   result, awkward finger positioning is sometimes required. My first
   days using the keyboard reminded me of playing Twister and trying to
   reach the red dot by squeezing my arm past two opponents while keeping
   my feet on the orange and blue dots on opposite sides of the mat. In
   fact, two weeks later, I was still finding myself reverting to my old
   PC keyboarding habits. Some complex key sequences were hard to
   complete correctly, as old habits die hard.
   
   Also, in the beginning, the backspace key didn't work; however, this
   turned out to be primarily my fault. Being lazy and excited to test
   out the new keyboard, I refrained from reading all the way through the
   manual to the final (third) page where a table and accompanying figure
   would have taught me how to program the keyboard using a slider
   switch. Eventually, I toggled the switch and had the backspace key
   working to my satisfaction.
   
   Since I started using Linux before Windows 95 was introduced (I
   stopped using MS products long before that), I did not miss the extra
   ``Windows'' keys found on most PC keyboards. I did, however, have to
   get used to console cruising with the new keyboard. Switching from X
   to the console requires a four finger/key combination (ctrl-alt-fn-f*,
   where fn is the function key), while cruising through consoles
   requires a three finger/key combination (alt-fn-arrow-key).
   
   Even in a non-vi-type editor without command mode movement keys, the
   Happy Hacking Keyboard makes the user adjust to finding the location
   of the arrow pad and remembering to hit the function key. In all
   fairness, it took me less than a week to become oriented with the key
   locations. (It does remain comical to watch others try to wander
   through the key selections for the first time.)
   
   Unlike on a laptop, the size and shape of the keys are the same as
   on a PC keyboard, which makes the adjustment easier. I never
   overreach the true location of the keys, and I have no trouble
   typing on other people's computers (which lack a Happy Hacking
   Keyboard). However, I am now known to complain about how ``weird''
   other keyboards are.
   
  Happy Hacking
  
   While the keyboard did not cure me of my sarcastic nature, I did find
   the escape key much easier to reach since it's located to the
   immediate left of the ``1'' key. In vi, I can quickly switch out of
   insert mode since I never have to look down to relocate the escape key
   or reposition my fingers afterwards; thus, cruising through vi has
   become even easier.
   
   For XEmacs programming, the control key is located in the ``right''
   place, directly left of the ``A'' key. This makes it easy to use
   without any odd movements or taking your fingers away from the home
   row. (Yes, I learned to type before I learned to program.)
   
   Both of these key locations, escape and control, have allowed me to
   quickly negotiate commands without having to reposition my fingers.
   This has the benefit of reducing the frustration of trying to return
   to the home keys after each command--my fingers never wind up in odd
   locations as they did on a typical PC keyboard.
   
  Disgruntled Gamer
  
   As a part-time game player (Linux Quake), I'm accustomed to using the
   keyboard for all player movements, such as turns and running. With
   this keyboard, I'd have to hold the function key down constantly (to
   select the arrow keys) or figure out how to use the mouse. Otherwise,
   keeping the function key depressed (two keys away from the arrow keys)
   and trying to fumble around with the arrows might increase the
   probability of developing carpal tunnel syndrome.
   
   After a few games of Quake, I think I'll be comfortable with the
   bizarre fingering required. Also, using the keyboard to program in
   XEmacs helped in the adjustment needed to get into the gaming world.
   
  Technical Support and On-line Documentation
  
   Documentation is also available on-line. While I haven't had to use
   their tech support e-mail, it is readily available--my contact at PFU
   America was quick to reply to any e-mail I sent. Furthermore, all of
   the information needed to install and hook up the keyboard can be
   found on-line. All of the information in the manual is included in
   their on-line documentation.
   
  In Closing
  
   Overall, I would be hard-pressed to sum up this review with anything
   but a positive remark. With the price tag recently dropping by $40,
   the keyboard is more affordable. I'm sure other hackers will be quite
   happy to own it.
   
   For someone who hasn't experienced the keyboard, it's hard to believe
   everything reported about the Happy Hacking Keyboard by PFU America.
   In fact, I was skeptical about the remarks I had heard before I became
   a Happy Hacking Keyboard user. Now, one month after laying my fingers
   on it, I can't imagine using any other keyboard. I wonder if PFU
   America makes a Happy Hacking tote bag.
     _________________________________________________________________
   
                      Copyright  1998, Jeremy Dinsel
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                         Getting Started with Linux
                                      
                          Version 1.0 November 98
                                      
                             By Prakash Advani
     _________________________________________________________________
   
   This document is written for people who have just installed Linux
   but don't know what to do next. Most of the commands discussed here
   should work on all distributions of Linux, but since I use Red Hat
   5.0, some of them may be specific to it. I have also used Caldera
   OpenLinux 1.3 and have included some Caldera-specific information.
   If you have any suggestions or ideas to improve this document, they
   are most welcome. All commands are in quotes, and you need to type
   them without the quotes: for example, if you see type "ls", then you
   just need to type ls. You will also have to press the ENTER key
   after typing each command. There are some useful commands in this
   document, but for a complete command reference you will need to
   refer to additional documents.
     _________________________________________________________________
   
   Let us begin with booting into Linux. When Linux boots, you will
   see a lot of messages scroll past. You need not understand them all
   right now, but if you get errors while booting, you may want to look
   at them: they help you understand what went wrong and do any
   troubleshooting required. The first thing you must do is log in to
   your Linux system. At the login prompt, type "root" (or whatever
   username you have created) and enter the password you selected at
   installation. If you installed Linux on your machine, then you are
   the root user and have supervisory access to the system. If you
   didn't choose a password, the system will not ask for one and will
   take you straight to the Linux prompt. The prompt will be a # if you
   are root, or a $ if you are some other user and have chosen the BASH
   shell. If you are new to Linux, you should use the BASH shell. Of
   the several shells under Linux, I prefer BASH because it is easy to
   use, and it is also the default on most Linux distributions. Your
   prompt may look something like this:
   
   [root@yoom.com /root]#
   
   If you need to logout just type "exit".
   
   Once you have logged in type "dmesg" to see the bootup messages. You
   will see something like:
   
   Serial driver version 4.13 with no serial options enabled
   tty00 at 0x03f8 (irq = 4) is a 16450
   tty01 at 0x02f8 (irq = 3) is a 16450
   Real Time Clock Driver v1.07
   hda: QUANTUM FIREBALL_TM2110A, 2014MB w/76kB Cache, CHS=1023/64/63
   hdc: CREATIVECD2421E, ATAPI CDROM drive
   ide0 at 0x1f0-0x1f7,0x3f6 on irq 14
   ide1 at 0x170-0x177,0x376 on irq 15
   Floppy drive(s): fd0 is 1.44M
   FDC 0 is a post-1991 82077
   md driver 0.35 MAX_MD_DEV=4, MAX_REAL=8
   raid0 personality registered
   DLCI driver v0.30, 12 Sep 1996, mike.mclagan@Linux.org.
   Partition check:
   hda: hda1 hda2 < hda5 hda6 hda7 >
   VFS: Mounted root (ext2 filesystem) readonly.
   Adding Swap: 16092k swap-space (priority -1)
   Soundblaster audio driver Copyright (C) by Hannu Savolainen 1993-1996
   SB 3.1 detected OK (220)
   sb: Interrupt test on IRQ5 failed - device disabled.
   YM3812 and OPL-3 driver Copyright (C) by Hannu Savolainen, Rob Hooft
   1993-1996
   sysctl: ip forwarding off
   Swansea University Computer Society IPX 0.34 for NET3.035
   IPX Portions Copyright (c) 1995 Caldera, Inc.
   
   You will notice that the messages scrolled past before you could
   read them. To see them page by page, type "dmesg | less" or
   "dmesg | more".
   
   The dmesg command provides valuable information about the hardware
   devices detected by Linux, and it helps you spot problems. For
   example, the line "sb: Interrupt test on IRQ5 failed - device
   disabled." means there was a problem setting up the Sound Blaster
   sound card at IRQ5. If you get such errors, some of your hardware
   may not be working correctly under Linux.
   
   The BASH shell offers many conveniences. If you work a lot on the
   command line, you will find it very easy. BASH lets you recall the
   previous command by pressing the up arrow key. You can also search
   through previous commands by pressing CTRL-R and typing a few words
   from them. To clear the screen, press CTRL-L or simply type
   "clear".
   
   Another important command is df. Just type "df" and you will see
   something like:
   
Filesystem      1024-blocks     Used    Available Capacity      Mounted on
/dev/hda6        388362         341804  26501   93%             /
/dev/hda5        614672         572176  42496   93%             /dosd

   This shows all your mounted hard disk partitions, the used and
   available space on each, and the directory at which each partition
   is mounted. The space is reported in 1024-byte blocks, i.e., in
   kilobytes. Where DOS and Windows assign partitions and devices drive
   letters such as C:, D: and E:, Linux mounts partitions and devices
   onto directories. For example, /dev/hda5 is mounted on /dosd.
   Normally /dosc and /dosd would be your mounted DOS partitions,
   though they could be named anything else. This means you can access
   your DOS files from Linux by going through these directories.
   
   Another useful command is ls. Type "ls" and you will see something
   like:
   
   bin/ dev/ etc/ lost+found/ proc/ tmp/
   boot/ dosc/ home/ mnt/ root/ usr/
   cdrom/ dosd/ lib/ opt/ sbin/ var/
   
   Type "ls -l" to see a more complete list. This shows the owners,
   permissions, file sizes, and the date and time each file was last
   modified. You will need to understand file permissions once you get
   the hang of basic Linux operations: permissions let you restrict or
   allow access to files and directories on a multiuser Linux system.
   
drwxr-xr-x 2    root root 2048  Sep 17  12:49   bin/
drwxr-xr-x 2    root root 1024  Oct 4   23:24   boot/
drwxr-xr-x 2    root root 1024  Sep 2   17:32   cdrom/
drwxr-xr-x 3    root root 21504 Oct 22  12:54   dev/
drwxr-xr-x 2    root root 1024  Oct 2   21:59   dosc/
drwxr-xr-x 13   root root 21504 Jan 1   1970    dosd/
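   The first column of the listing above is the permission string. As
   a small, hypothetical experiment (the file name is made up), you can
   change a file's permissions with the chmod command and watch that
   string change:

```shell
# Create a throwaway file and restrict it to its owner.
echo "secret" > notes.txt
chmod 600 notes.txt    # rw for the owner, nothing for group or others
ls -l notes.txt        # the first column now reads -rw-------
```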

   The cd command is used to change directories. Try typing "cd /" to
   go to the root directory. Type "cd -" to return to where you were;
   if you just type "cd", you will return to your home directory.

  Installing software and opening compressed files under Linux
   
   If you download documents, utilities, software or anything else for
   Linux, you will find that many of the files have extensions of .tgz
   or .tar.gz. In that case, type the following command to extract the
   files, replacing filename.tar.gz with the name of the file:
   
   gzip -dc filename.tar.gz | tar xvf -
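   As a quick, self-contained illustration (all file names here are
   made up), you can build a small .tar.gz yourself and unpack it with
   the same pipeline:

```shell
# Build a tiny archive, then extract it again.
mkdir -p demo && echo "hello" > demo/file.txt
tar cf - demo | gzip > demo.tar.gz   # pack the demo directory
rm -r demo                           # discard the original copy
gzip -dc demo.tar.gz | tar xvf -     # the pipeline from the text
# GNU tar can also do this in one step: tar xzvf demo.tar.gz
```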
   
   If you downloaded some Linux files under DOS, chances are that the
   file names may get truncated. In that case you will have to rename
   your files before extracting them under Linux. To rename files just
   type "mv oldfilename newfilename". Replace oldfilename with what the
   current file name is and replace newfilename with what you want the
   file name to be.
   
   Several files also come in the .rpm format. This format was created
   for the Red Hat and Caldera distributions, and it is used by other
   distributions as well. To install an rpm, type
   
   rpm -i filename.rpm
   
   If you are upgrading existing software, type
   
   rpm -U filename.rpm
   
   If your distribution does not support RPMs, you can add that
   support by installing the Red Hat Package Manager (RPM). Similarly,
   some distributions provide pkginstall to manage .tar.gz files.
   
   Man Man! What's man man? Man pages are the help pages, or manuals,
   for getting help on a specific command. To get help on man itself,
   type "man man". Similarly, to get help on rpm, type "man rpm"; for
   ls, type "man ls"; and so on. You can get help on any command using
   man. To begin with, get help on the commonly used commands that let
   you move around files and directories. Some commonly used commands
   are:
   
cat     To type the content of a file
cp      Copy files
du      To check the disk space used
pine    Email client
find    Find files on the linux system
grep    Search for keywords in a file or in a command's output
kill    To kill a process; use ps to see the process number
less    If you cat a file you can pipe it to less for page by page viewing
ln      Create or remove links between files or directories
lpr     Print files or output to a printer
ls      List files or directories
mkdir   To create a new directory
more    Similar to less but less is better than more!
mount   See the mounted devices or mount additional devices
umount  Unmount mounted volumes
mv      Move or rename a file
passwd  Change your password
ps      To see the processes running
rm      Remove files or directories
rmdir   Remove directories
useradd Add a user to the linux system
userdel Delete a user on the linux system
usermod Modify a user on the linux system
which   Find where a program is located
who     Displays the users logged in
zless   To see the content of a .gz file (compressed)

   Some more tips for bash users: if you know that a command begins
   with, say, the letter a but don't know the rest, type "a" and then
   press TAB twice; bash will show the list of possibilities. A single
   TAB completes the command when only one possibility remains, which
   saves a lot of typing time. For example, type "mou" and then press
   TAB, and bash will put mount on the command line.
   
   Pressing TAB twice shows all the Linux commands. It looks something
   like:
   
   There are 1212 possibilities. Do you really wish to see them all? (y
   or n)
   
   Type "y" and you will see all of them!
   
   Sometimes the output of a command scrolls by too fast for you to
   read, unless you are Superman. In that case, you can see the
   previous screen by pressing the Shift and Page-Up keys together.
   
   You can interrupt most commands by pressing CTRL-C or ESC. This may
   not work in man or less; there, just type "q". If you need to edit
   some files, try pico or joe, two easy-to-use editors: joe works much
   like WordStar, and pico is the editor used by Pine. Power users may
   try vi or emacs, which are very powerful editors but have a steep
   learning curve. For example, type "joe filename", replacing filename
   with the name of the file you wish to edit.
   
   Most distributions install the X Window System. To start it, type
   "startx". X is the graphical user interface for Linux, and several
   window managers are available that give it a different look and
   feel. To configure a Red Hat system, type "setup". If you are under
   Caldera, type "lisa". You can also configure the system through a
   GUI under X.
   
   Most users will want to use DOS floppies or partitions at some
   point. You can run DOS-style commands under Linux without mounting
   the devices at all; type "man mtools" to see the list of these
   commands. They all start with m: the DOS copy command, for example,
   becomes mcopy. There are several such commands, including mattrib,
   mcd, mcopy, mdel, mdeltree, mdir, mformat, mlabel, mmd, mrd, mmove,
   mren, mtype and mzip.

   For more Linux documentation, look under the following directories.
   If a file has a .gz extension, view it by typing "zless
   filename.gz", replacing filename with the name of the file.
   
   /usr/doc/FAQ
   /usr/doc/LDP/install-guide
   /usr/doc/mini
   /usr/doc/HOWTO
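   If you want to try zless without hunting through those directories
   for a compressed file, you can make one yourself first (the file
   name is made up):

```shell
# Create a small compressed file, then view its contents.
echo "pretend this is a HOWTO" | gzip > howto.txt.gz
zcat howto.txt.gz        # prints the whole text at once
# zless howto.txt.gz     # same text, but one page at a time
```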
   
   
   
   Prakash Advani is an Internet and systems consultant based in
   Mumbai, India. He is currently setting up a web site dedicated to
   free operating systems [www.FreeOS.com], including Linux. Any help
   would be greatly appreciated.
   
   
     _________________________________________________________________
   
                      Copyright  1998, Prakash Advani
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
     _________________________________________________________________
   
   A Linux Journal Preview: This article will appear in the February 1999
   issue of Linux Journal.
     _________________________________________________________________
   
                             The GNOME Project
                                      
                             By Miguel de Icaza
     _________________________________________________________________
   
   GNOME is an acronym for GNU's Network Object Model Environment. GNOME
   addresses a number of issues that have not previously been addressed
   in the UNIX world:
   
     * Providing a consistent user interface.
     * Providing user friendly tools and making them powerful by
       leveraging the UNIX foundation.
     * Creating a UNIX standard for component programming and component
       reuse.
     * Providing a consistent mechanism for printing.
       
   GNOME's main objective is to provide a user-friendly suite of
   applications and an easy-to-use desktop. As with most GNU programs,
   GNOME has been designed to run on almost all strains of UNIX-like
   operating systems.
   
  History of GNOME
  
   The GNU GNOME project was initially announced in August 1997; after
   just one year of development, approximately two hundred programmers
   worldwide are involved in the project.
   
   The original announcement called for developers in a number of
   forums that shaped the GNOME project: the GNU announce mailing
   lists, the Guile mailing list, and the GTK+ and GIMP mailing lists.
   The programmers and people who influenced the project were mainly
   free software enthusiasts with diverse areas of expertise, including
   graphics programming and language design.
   
   The GNOME team has been working steadily toward creating a foundation
   for future free software development. GNOME provides the toolkit and
   reusable component set to build the end-user free software the world
   so badly needs.
   
   Our recent releases of the GNU Network Object Model Environment
   have been: GNOME 0.20, the first version of GNOME that showed signs
   of integration, released in May 1998; the Drooling Macaque 0.25
   release, with more features; and finally our latest public release,
   GNOME 0.30, code-named Bouncing Bonobo.
   
   The GNOME 0.20 release was the first release included in a CD-ROM
   distribution: Red Hat 5.1 shipped with a technology preview of the
   GNOME desktop environment, and it was first demonstrated at the 1998
   Linux Expo in North Carolina.
   
   Before the Drooling Macaque release, the GNOME software releases were
   coordinated by two or three people on the team. This became a
   significant burden, as precious time was being used coordinating each
   release. We have been trying to make the release process more modular
   and have assigned different modules to package maintainers. Each
   package maintainer is responsible for packaging, testing and releasing
   their packages independently of the main distribution, which we
   consider to be the core libraries and the core desktop applications.
   So far, we have had some success, but there is still room for
   improvement. We will continue to polish the release process to make it
   simpler.
   
   The most recent GNOME release, Bouncing Bonobo, is the first to
   feature the GNOME spreadsheet, Gnumeric.
   
  Red Hat Advanced Development Labs
  
   In January 1998, Red Hat announced the creation of the Red Hat
   Advanced Development Laboratories (RHAD). The initial objective of
   Red Hat Labs was to help the GNOME effort by providing code and
   programmers and by helping us manage the project resources.
   
   All code contributed by Red Hat Advanced Laboratories to GNOME has
   been provided under the terms of the GNU GPL and the GNU LGPL
   licenses. Several GTK+ and GNOME developers have been hired by Red
   Hat, and they have rapidly provided the GNOME project with a number of
   important features.
   
   For example, Rasterman has implemented themes for GTK+; the GTK+
   themes allow the user to change the appearance of the widgets. This is
   done by abstracting the widget drawing routines from the toolkit, and
   putting those drawing routines in modules that can be loaded at
   runtime. Thus, the user can change the appearance of applications
   without shutting them down or restarting the desktop.
   
   GTK+ themes are fully working, and so far a number of theme front-ends
   have been written. At the time of this writing, the available themes
   include Motif, Windows95, Metal, native-GTK+ and a general purpose
   Bitmap-based engine (see Resources).
   
   Various important changes to the GTK+ toolkit required for the GNOME
   project, such as the menu keyboard navigation code and the enhanced
   ``Drag and Drop'' protocols (XDND and Motif DND), were written by Owen
   Taylor, a famous GTK+ hacker now working for Red Hat Labs.
   
   A number of applications were created, and are maintained today, by
   the GNOME team at RHAD as well: the Ghostscript front end (Jonathan
   Blandford), the GNOME Help Browser and the GNOME RPM interface (Marc
   Ewing and Michael Fullbright), the GNOME Calendar and GNOME Canvas
   (Federico Mena) and the ORBit CORBA 2.2 implementation (Elliot
   Lee).
   
  Other Donations
  
   The GNOME project received a monetary donation from the GNU/Linux
   Debian team in the early stages of the project, as well as an Alpha
   board from Quant-X Service and Consulting G.m.b.H. We are very
   grateful for their contributions.
   
  Some Key GNOME Features
  
   The GNOME libraries provide a framework to create consistent
   applications and to simplify the programmer's task. More of the
   features of the GNOME libraries are described later. Some of the most
   important current developments in the GNOME libraries are discussed
   here.
   
   Metadata
   
   One of the problems that a desktop environment faces is the fact that
   it is usually necessary to have a mechanism for storing information
   about a file's properties. For example, applications might want to
   bind an icon for a specific executable file or bind a small thumbnail
   image for a graphic produced by a graphics program. These icons should
   be semantically attached to the main file.
   
   The Macintosh OS, for example, provides a way to store this
   information in the file itself, as its ``resource fork''. This
   mechanism would be awkward at best to implement in a UNIX
   environment. The main problem is that non-metadata-aware
   applications can cause the metadata information to get out of sync.
   
   The GNOME metadata was implemented by Tom Tromey at Cygnus, given a
   number of design constraints and tradeoffs (described in detail on
   their web site). The following is a list of the GNOME metadata
   features:
   
     1. Binding the information on a per-file basis is a per-user
        setting, and each user keeps track of his or her own bindings.
        System defaults apply on top of these.
    2. Binding information by file content is done according to the type
       of the file using file signatures, similar to the UNIX file
       command.
     3. Binding information by a regular expression: for example, a
        default icon for GIF files would be provided by the regular
        expression .*\.gif$.
    4. The metadata system is optimized to provide a coherent GUI
       solution, rather than as a compromise or kludge to existing
       command line tools.
    5. Most ordinary uses of files will continue to work without
       metadata, just as they do now.
       
   A number of standard properties for file metadata are available in
   GNOME. For example, ``View'' stores the action for viewing the file
   contents; ``Open'' stores the analogous action for editing; ``Icon'',
   which contains the icon, is used for displaying the file on the
   desktop.
   
   Metadata types are MIME types.
   
   Canvas
   
   GNOME provides a Canvas widget, patterned after Tk's excellent canvas.
   This widget simplifies the programming of applications that need
   control over graphical components. The most noticeable feature of the
   GNOME Canvas is that it provides a flicker-free drawing area where
   high-level objects can be inserted and manipulated. Basic zoom and
   scroll facilities are also a part of the canvas.
   
   The high-level objects inserted into the canvas behave like regular
   widgets. They can receive X events, they can grab the focus, and they
   can grab the mouse just like a regular widget. As with their Tk
   counterparts, the GNOME Canvas items can have their properties changed
   at runtime with a Tk-like configuration mechanism.
   
   The GNOME Canvas ships with a number of items derived from the
   GnomeCanvasItem object: lines, rectangles, ellipses, arrows, polylines
   and a generic widget container to embed GTK+ widgets within a canvas.
   The Canvas framework is designed to be very extensible. As proof of
   this extensibility, the GNOME spreadsheet is implemented on top of the
   base canvas engine, with additional functionality provided by
   spreadsheet-specific CanvasItems.
   
   Note that the current Canvas uses Gdk primitives (a thin wrapper
   over Xlib primitives) to draw, so it is limited in the quality and
   range of special effects it can provide, which brings us to the next
   step in Canvas technology.
   
   Raph Levien is working on an advanced rendering engine for the Canvas.
   It was originally developed as a stand-alone widget within his Type1
   outline font editor, gfonted. As of the time of this writing, work on
   integrating the engine into the Canvas is getting underway.
   
   Features of this engine include:
   
     * Anti-aliased rendering of all items
     * Alpha transparency
     * Items for vector and bezier paths
     * Items for RGB and RGB plus alpha images
     * Vector operations, including clip (intersect), union, difference
       and stroke layout
     * PostScript Type1 font loading and rendering
       
   The engine's design goal is to support almost all of the PostScript
   imaging model with the addition of alpha transparency. As such, it is
   expected to be an excellent starting point for high-powered graphics
   applications.
   
   In spite of the ambitious goal of keeping the display up to date with
   entirely anti-aliased and alpha-composited items, performance is
   surprisingly good--comparable in fact to the Xlib-primitive-based
   canvas engine.
   
   His code is expected to be merged into the main Canvas sometime soon.
   
   Window Manager Independence
   
   GNOME does not have any dependency on a special window manager--any
   existing window manager will do. GNOME specifies window manager hints
   that can be implemented by the window manager to give the user better
   desktop integration, but they are optional. The E window manager
   implements all of the GNOME window manager hints and can be used as a
   reference implementation for people wishing to extend their window
   managers to be GNOME-compliant. The ICEWM manager is tracking those
   developments and is also considered a GNOME-compliant window
   manager, although at this time it is lagging a bit behind. People
   have shown interest in providing the WindowMaker and FVWM2
   maintainers with patches to make those window managers GNOME-aware.
   
   Component Programming
   
   Historically, one of the attractions of UNIX has been the philosophy
   of small tools that each do one thing well, and combining these tools,
   using pipes and simple shell scripts, to perform more complex tasks.
   This philosophy works very well when the data objects are represented
   as plaintext and the operations are effectively filters. However, this
   UNIX command-line philosophy does not scale well to today's world of
   multimedia objects.
   
   Thus, it would be nice to have a framework in GNOME that would provide
   software reuse and component plugging and interaction, i.e.,
   connecting small specialized tools to carry out complex tasks. With
   this infrastructure in place, GNOME applications can once again return
   to the UNIX roots of simple, well-specialized tools.
   
   An RPC system was then required for providing this sort of
   functionality, so we decided to use CORBA (the Common Object Request
   Broker Architecture) from the Object Management Group (OMG). CORBA can
   be thought of as an object-oriented RPC system, which happens to have
   standardized bindings for different languages.
   
   CORBA opened a range of applications for us. Component programming
   allowed us to package programs and shared libraries as program servers
   that each implement a specific interface.
   
   For example, the GNOME mail program, Balsa, implements the
   GNOME::MailMessage interface that enables any CORBA-aware program to
   remotely compose and customize the contents of a mail message and send
   it. It is thus possible to replace the mail program with any program
   that implements the GNOME::MailMessage interface. As far as the GNOME
   desktop is concerned, the process just implements the
   GNOME::MailMessage interface. This means, for example, that I will be
   able to continue using GNUS to read my mail and have GNUS completely
   integrated with the rest of my desktop. This also applies to the other
   components in the GNOME system: the address book, the file manager,
   the terminal emulation program, the help browser, the office
   applications and more.
   
   Besides providing the basic GNOME interfaces, applications can provide
   an interface to their implementation-specific features. This is done
   by using CORBA's interface inheritance. A specific interface would be
   derived from the more general interface. For example, GNUS would
   implement the GNOME::MailMessage interface and extend it with GNUS
   specific features in the GNOME::GnusMailMessage interface. This
   interface would hypothetically allow the user to customize GNUS at the
   Lisp level, something other mailers may not do. Another example would
   be a GNOME::MozillaMailMessage interface that would let the user
   configure the HTML rendering engine in Mozilla mail.
   
   Not only does CORBA address these issues, but CORBA can also be used
   as a general interprocess communication engine. Instead of inventing a
   new ad-hoc interprocess communication system each time two programs
   need to communicate, a CORBA interface can be used.
   
   Embedding documents into other documents has been popularized by
   Microsoft with their Object Linking and Embedding architecture. A
   document-embedding model similar in spirit is being designed for GNOME
   (the Baboon model), and all of the interprocess communication in this
   model is defined in terms of CORBA interfaces.
   
   Initially, we were very excited by the possibilities CORBA presented
   us, but we soon realized that using CORBA in the GNOME desktop was
   going to be more difficult than we expected.
   
   We tried using Xerox's ILU for our CORBA needs. The license at the
   time did not permit us to make modifications to the code and
   redistribute them, an important thing for the free software community,
   so we had to look for alternatives. Xerox has since changed the
   licensing policy.
   
   After evaluating various free CORBA implementations, we settled on
   MICO, as it was the most feature-complete free implementation. MICO
   was
   designed as a teaching tool for CORBA, with a primary focus on code
   clarity.
   
   Unfortunately, we soon found that MICO was not a production-quality
   tool suitable for the needs of GNOME. For one, we found that the
   rather indiscriminate use of C++ templates (both in MICO and in
   MICO-generated stubs) proved to be a resource hog. Compiling bits of
   GNOME required as much as 48MB of RAM for even the simplest uses of
   CORBA, and this was slowing down our development. Another problem was
   that MICO only supported the C++ CORBA bindings. Even though an
   initial attempt had been made at providing C bindings, they were
   incomplete and not well-maintained.
   
   To address these problems, Dick Porter at i2it and Elliot Lee at Red
   Hat labs wrote a C-based, thin and fast CORBA 2.2 implementation
   called ORBit. As soon as ORBit became stable, the use of CORBA
   throughout GNOME began, after a delay of almost eight months.
   
   With an efficient, production-quality CORBA implementation under our
   control, we ensure that CORBA-enabled interprocess communication is a
   valuable service for application programmers, rather than a source of
   overhead and bulk.
   
  Dissecting a GNOME desktop Application
  
   The toolkit
   
   GNOME desktop applications have been built on top of the
   object-oriented GTK+ toolkit originally designed as a GUI toolkit for
   the GNU Image Manipulation Program (GIMP).
   
   GTK+ has been implemented on top of a simple window and drawing API
   called Gdk (GTK Drawing Kit). The initial version of Gdk was a fairly
   thin wrapper around the Xlib libraries, but a port to Win32 and a port
   to the Y windowing system are presently in alpha stages.
   
   GTK+ implements an object system entirely in C. This object system is
   quite rich in functionality, including classical single inheritance,
   dynamic creation of new methods and classes, and a ``signal''
   mechanism for dynamically attaching handlers to the various events
   that occur in the user interface. One of GTK's great strengths is the
   availability of a wide range of language bindings, including C++,
   Objective-C, Perl, Python, Scheme and Tom. These language bindings
   provide access both to GTK+ objects and to new objects programmed in
   the language of choice.
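   The signal mechanism is easiest to see in a sketch. The following is
   not GTK+ code: it is a plain-Python analogy of what it means to
   attach handlers to a widget's events at run time, the way
   gtk_signal_connect() does in C.

```python
class Signal:
    """A list of handlers that can be invoked as a group."""
    def __init__(self):
        self.handlers = []

    def connect(self, handler):
        # Attach a new callback at run time.
        self.handlers.append(handler)

    def emit(self, *args):
        # Deliver the event to every attached handler, in order.
        for handler in self.handlers:
            handler(*args)

class Button:
    """A toy widget with a single 'clicked' signal."""
    def __init__(self):
        self.clicked = Signal()

    def press(self):
        # A user-interface event triggers the signal.
        self.clicked.emit(self)
```

   Each widget exposes named signal objects; any number of callbacks
   can be connected after the widget exists, and emitting the signal
   invokes them in order.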
   
   An additional feature of GNOME is Rasterman's Imlib library. This
   library is implemented alongside Gdk, and provides a fast yet flexible
   interface for loading and saving images, and rendering them on the
   screen. Applications using Imlib have quick and direct access to PNG,
   GIF, TIFF, JPEG and XPM files, as well as other formats available
   through external conversion filters.
   
   The Support Libraries
   
   C-based GNOME applications use the glib utility library. Glib provides
   the C programmer with a set of useful data structures: linked lists,
   doubly linked lists, hash tables (one-to-one maps), trees, string
   manipulation, memory-chunk reuse, debugging macros, assertion and
   logging facilities. Glib also includes a portable interface for a
   dynamic module facility.
   
   The GNOME libraries
   
   The GNOME libraries add the missing pieces to the toolkit to create
   full applications, dictate some policy, and help in the process of
   providing consistent user interfaces, as well as localizing the GNOME
   applications so they can be used in various countries.
   
   The current GNOME libraries are: GTK+-xmhtml, gnome-print, libgnome,
   libgnomeui, libgnorba, libgtop, gnome-dom and gnome-xml. Other
   libraries are used for specific applications: libPropList (soon to be
   replaced by a new configuration engine) and audiofile.
   
   The main non-graphical library is called libgnome. This provides
   functions to keep track of recently used documents, configuration
   information, metadata handling (see below), game score functions and
   command-line argument handling. This library does not depend on the
   use of a windowing system.
   
   As we use CORBA to achieve parts of our desktop integration, we have a
   special library that deals with various CORBA issues, called the
   libgnorba library. It provides GUI/CORBA integration (to let our GUI
   applications act as servers), authentication within the GNOME
   framework, and service activation.
   
   The gnomeui library, on the other hand, has the code that requires a
   window system to run. It contains the following components:
   
     * The GNOME session management support
      * Widgets, both straightforward extensions of GTK+ and widgets
        designed to depend on libgnome features
     * A set of standard dialog boxes otherwise not available on GTK+,
       well-integrated with other GNOME libraries
     * Standard property configuration dialog boxes
     * Standard top-level window handling
     * A multi-document interface (gnome-mdi)
     * Windowing hints
     * CORBA integration where required
       
    GTK+-XmHTML is a port of Koen D'Hondt's XmHTML widget for Motif,
   and it is used for our HTML display needs. Our changes are being
   folded back into the main distribution.
   
   The gtop library allows system applications to be easily ported to
   various operating systems; it provides system, process and file system
   information.
   
   gnome-xml provides XML file loading, parsing and saving for GNOME
   applications, and it is being used in the GNOME spreadsheet (Gnumeric)
   and in the GNOME word processor program. gnome-dom provides an
   implementation of the World Wide Web Consortium's Document Object
   Model for GNOME applications. By the time you read this article,
   gnome-dom will have been deployed widely in the GNOME office
   applications. Both gnome-xml and gnome-dom were developed by Daniel
   Veillard from the World Wide Web Consortium.
   
   gnome-print implements GNOME's printing architecture. It consists of a
   pluggable rendering engine, as well as a set of widgets and standard
   dialog boxes for selecting and configuring printers. In addition,
   gnome-print is responsible for managing outline fonts, and contains
   scripts that automatically find fonts already installed on the system.
   
   The GNOME print imaging model is modeled after PostScript. Basic
   operations include vector and bezier path construction, stroking,
   filling, clipping, text (using Type1 fonts, with TrueType to follow
   shortly) and images.
   
    At this time, gnome-print generates only PostScript output. However,
   the design of the imaging model is closely synchronized with the
   anti-aliased rendering engine for the Canvas, and it is expected that
   these two modules will be interoperating soon. In particular, it will
   be possible to ``print'' into a canvas, useful for providing a
   high-quality screen preview, and to print the contents of a canvas.
   This feature should simplify the design of applications that use the
   Canvas, as very little extra code will be needed to support printing.
   
   The same rendering engine will be used to render printed pages
   directly without going through a PostScript step. This path is
   especially exciting for providing high-quality, high-performance
   printing to color ink-jet printers, even of complex pages containing
   transparency, gradients and other elements considered ``tricky'' in
   the traditional PostScript imaging model.
   
   Bindings
   
   One explicit goal of GNOME was to support development in a wide range
   of languages, because no single language is ideal for every
   application. To this end, bindings for both GTK+ and the GNOME
   libraries exist for many popular programming languages, currently C,
   C++, Objective-C, Perl, Python, Scheme and Tom.
   
   The early involvement of Scheme, Tom and Perl hackers in both the GTK+
   and GNOME projects has helped in making the GTK+ and GNOME APIs easy
   to wrap up for various different languages. Multi-language support is
   ``baked in'' to the design of GTK+ and GNOME, rather than being added
   on as an afterthought.
   
  Development model
  
   GNOME is developed by a loosely coupled team of programmers around the
   world. Project coordination is done on the various GNOME mailing
   lists.
   
   The GNOME source code is kept on the GNOME CVS server
   (cvs:cvs.gnome.org:/cvs/gnome/). Access to the source code through
   Netscape's Bonsai and LXR tools is provided at http://cvs.gnome.org/,
   to help programmers get acquainted with the GNOME source code base.
   
   Most developers who have contributed code, major bug fixes and
   documentation to GNOME have CVS write access, fostering a very open
   atmosphere. GNOME developers come from a wide range of backgrounds and
   have diverse levels of skills and experience. Contributions from less
   experienced people have been surprisingly helpful, and the older,
   wiser coders have been happy to mentor the younger contributors on the
   team. The GNOME developer community values clean, maintainable code.
    Even programmers with many years of coding experience have noted how
    the GNOME project has helped them write better code.
   
  The GNOME Office Suite Applications
  
    As the GNOME foundation libraries have become more stable, the
    development of larger programming projects has become possible,
    allowing small teams of developers to put together the applications
    which will make up the GNOME office suite.
   
   As with other GNOME components, the GNOME office suite is currently
   catching up with commercial offerings. By providing an office suite
   which is solid, fast and component-based, the code written for the
   GNOME project might become the foundation for a new era of free
   software program development.
   
   The office suite leverages a lot of knowledge many of us have acquired
   during the past year while developing various GNOME components. Our
   coding standards are higher, the code quality is better, and the code
    is cleaner and more robust.
   
   The availability of these applications has provided us with the test
   bed we required to complete our document embedding interfaces (the
   Baboon model).
   
   There are two word processing projects going on for GNOME: one of them
   is GWP by Seth Alves at the Hungry Programmers and the other one is Go
   from Chris Lahey. GWP is currently more advanced and has printing
   working with the GNOME printing architecture.
   
   Gnumeric, the GNOME spreadsheet project, is aimed at providing a
   commercial quality spreadsheet with advanced features. It provides a
   comfortable and powerful user interface. As with other components in
   GNOME, we have worked toward providing a solid and extensible
   framework for future development.
   
    Recently, work has begun on Achtung, the GNOME presentations program.
   It is still in the early stages of development.
   
  Getting GNOME
  
   Tested source code releases of GNOME are available from GNOME's ftp
   site: ftp://ftp.gnome.org/.
   
   It is also possible to get the very latest GNOME developments from the
   Anonymous CVS servers. Check the GNOME web page for details on how to
   pull the latest version straight from the CVS servers.
   
   Breaking news about GNOME is posted to the GNOME web site in
   http://www.gnome.org/, along with documents to get you started on
   GNOME and developing GNOME applications.
     _________________________________________________________________
   
  Acknowledgments
  
   There is no way to thank all of the contributors to the GNOME project
    in this space. All of these contributions are greatly appreciated.
   
   I would like to especially thank Alan Cox, Nat Friedman, Raph Levien
   and Richard Stallman for reviewing the draft of this document.
     _________________________________________________________________
   
  Resources
  
   bonobo: The GNOME team has learned that the Bonobo, the primate
   closest to humans, is an endangered species. If you want to know more
   about how you can help save the Bonobos, check this web page:
   http://www.gsu.edu/~wwwbpf/bpf/
   
   GIMP: http://www.gimp.org/
   
   GNU: http://www.gnu.org/
   
   GTK+: http://www.gtk.org/
   
   GNOME: http://www.gnome.org/
   
   Gnumeric: http://www.gnome.org/gnumeric/
   
   gnome-print: http://www.levien.com/gnome/print-arch.html
   
   GWP: http://www.hungry.com/products/gwp/
   
   OMG: http://www.omg.org/
   
   ORBit: http://www.labs.redhat.com/orbit/
   
   RHAD: http://www.labs.redhat.com/
   
   Themes: http://www.labs.redhat.com/themes/
   
   Tom Tromey: http://www.cygnus.com/~tromey/gnome/metadata.html
   
   Y: http://www.hungry.com/
     _________________________________________________________________
   
                     Copyright  1998, Miguel de Icaza
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
   "Linux Gazette... making Linux just a little more fun!"
     _________________________________________________________________
   
                      IMAP on Linux: A Practical Guide
                                      
                                By David Jao
     _________________________________________________________________
   
     ABSTRACT: The Internet Mail Access Protocol, Version 4rev1
     (IMAP4rev1), allows users to access and maintain hierarchical
     collections of e-mail folders on a remote server over the Internet.
     The "client-server" nature of the IMAP paradigm allows e-mail
     programs to enjoy the same benefits of portability and network
     transparency that graphical programs have gained from the X11
     Windowing system. In this article, we describe how to set up client
     and server software on Linux to use IMAP for managing your mail. In
     addition, we explain the benefits and drawbacks of IMAP, and
     discuss when and under what situations it makes sense to use IMAP.
     
1. Why IMAP?

   How do you read your e-mail today? Most likely, you start up a program
   like pine or Netscape to read your mail. You probably have only one
   Inbox for each e-mail account you own. Since a few months' worth of
   accumulated e-mail is much too unwieldy for a single Inbox, your mail
   messages are almost certainly organized into separate mail folders for
   easy cataloging and maintenance. Unless you use IMAP already, these
   mail folders are sitting on your local disk (or in your home directory
   on a remote account). However, there are a number of problems with
   storing mail folders on a local disk:
     * With a text-based mail client, you have to log in to the account
       that holds your mail folders in order to check your mail. This is
       not so bad if you have one account, but it can be tough juggling
       multiple accounts this way. For example, try moving a large number
       of messages from a folder in one account to a folder in another
       account.
     * With a graphical mail client, it can be difficult or impossible to
       manage your mail over a low bandwidth link. This point merits
       consideration since many people prefer graphical clients and have
       low bandwidth links.
     * It's not very easy to switch clients when all your mail folders
       are formatted for one particular program. Many users are finding
       it increasingly useful to be able to tailor their choice of mail
       client to best suit their current situation.
       
   IMAP solves all these problems at once. The simple idea behind IMAP is
   that mail folders are stored on a central server and accessed via a
   commoditized, widely supported protocol. Using IMAP, you can:
     * access your mail folders from any machine, anywhere, as long as an
       IMAP client is installed,
     * manage multiple mail folders belonging to multiple e-mail accounts
       from a single client,
     * switch mail clients (Netscape, pine, Eudora) at will, and
       automatically carry all your mail folders with you.
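   As a sketch of this flexibility, Python's standard imaplib module is
   enough to enumerate the folders stored on a server from any machine.
   The host, account and directory names below are placeholders, and
   the LIST-reply parser is deliberately simplified for illustration.

```python
import imaplib
import re

# Matches one untagged LIST reply, e.g. (\HasChildren) "/" "Courses",
# capturing the flags, the hierarchy delimiter and the folder name.
LIST_RE = re.compile(r'\(([^)]*)\) "(.)" "?([^"]*)"?$')

def parse_list_item(item):
    if isinstance(item, bytes):
        item = item.decode("ascii")
    flags, delimiter, name = LIST_RE.match(item).groups()
    return flags.split(), delimiter, name

def list_folders(host, user, password, directory="Mail/"):
    """Return the folder names under `directory` on an IMAP server.
    Host, account and directory here are placeholders."""
    conn = imaplib.IMAP4(host)            # plaintext IMAP, port 143
    try:
        conn.login(user, password)
        status, items = conn.list(directory, "*")
        return [parse_list_item(item)[2] for item in items]
    finally:
        conn.logout()
```

   Because the folders themselves never leave the server, the same call
   works identically from any machine with network access to it.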
       
   The analogy to the X11 windowing protocol is helpful. In MS Windows, a
   graphical program running on a computer is inextricably bound to that
   computer's display. In contrast, under X, a program running on one
   machine can display itself on another machine through a well defined,
   commoditized protocol. The resulting network transparency is a
   critical advantage in today's highly interconnected world. IMAP offers
   the same kind of flexibility: your e-mail folders (that is, all the
   data you really care about) are stored on a central server, so that
   instead of being inextricably bound to one mail program on one
   machine, they can be transparently accessed over the network by any
   compliant program.
   
2. IMAP Server Installation

   So now you're psyched about IMAP and want to use it, right?
   
   The first step is to install an IMAP server. If your ISP already runs
   an IMAP server for you, then you might want to just use their server
   instead. An advantage of this route is that you can access your mail
   from anywhere without requiring your computer to be on. A disadvantage
   is that you have to dial in to your ISP to access your mail. In any
   case, most ISPs don't provide IMAP services, so you'll most likely
   have to run IMAP on your own computer anyway.
   
   Without further ado, here's a quick and dirty set of instructions for
   installing the University of Washington IMAP server.
   
   First, get and extract the latest version (4.4 as of this writing):
[root@localhost ~]# lynx ftp://ftp.cac.washington.edu/imap/imap-4.4.tar.Z
[root@localhost ~]# tar xzvf imap-4.4.tar.Z
[root@localhost ~]# cd imap-4.4

   Type one of "make lnx", "make sl5", "make slx". The first is for
   traditional systems, the second is for systems using libc5 and shadow
   passwords, and the third is for glibc-based systems that use shadow
   passwords.

[root@localhost imap-4.4]# make lnx

   Install the newly compiled file:
[root@localhost imap-4.4]# install -s -m 755 -o root -g mail imapd/imapd /usr/sbin

   Add the following line to your /etc/inetd.conf (it may already be
   there; if so, uncomment it):

imap    stream  tcp     nowait  root    /usr/sbin/tcpd  /usr/sbin/imapd

   Set up your hosts.allow and hosts.deny files to restrict IMAP access
   to authorized domains only. This step is highly recommended, as the
   University of Washington IMAP server has had some fairly serious
   security vulnerabilities in the past.
   
   In /etc/hosts.deny add the line
imapd: ALL

   In /etc/hosts.allow add the machines and domains that you want to
   allow to access your IMAP server:

imapd: your.local.host.com
imapd: .yourisp.com
imapd: .yourschool.edu

   Finally, restart inetd and your server is ready to go:
[root@localhost ~]# killall -HUP inetd

  2.1. Distribution-Specific Installation Instructions
  
   If you are running a Linux distribution that comes with a package
   manager, you can install a precompiled IMAP server if you want.
   
   RedHat 5.2 instructions:
lynx ftp://ftp.redhat.com/pub/redhat/redhat-5.2/i386/RedHat/RPMS/imap-4.4-2.i386.rpm
rpm -Uvh imap-4.4-2.i386.rpm

   Debian 2.0 instructions:
lynx ftp://ftp.debian.org/debian/dists/stable/main/binary-i386/mail/imap_4.2-1.deb
dpkg -i imap_4.2-1.deb

   After installing these packages, you'll still have to go back and edit
   /etc/inetd.conf, /etc/hosts.deny, and /etc/hosts.allow yourself as
   described above.
   
3. IMAP Client Configuration

   Once you've set up your server, configuring an IMAP client to use the
   server is a snap. The basic procedure is:
     * Pick a directory on the server system to hold all your mail
       folders. You need to have read and write access to this directory.
       I usually use $HOME/Mail. Create this directory if it doesn't
       exist.
     * Tell your IMAP client the name of your IMAP server, your username
       on that server, and the directory above where your mail folders
       live.
     * Now you can create and delete remote folders, and move messages to
       and from remote folders, just as if they were local folders, using
       the same techniques that you already use in your mail program to
       manipulate local folders.
       
   Here are three examples of programs that I actually use:
   
  3.1. Pine 4.05
  
   Pine is available from http://www.washington.edu/pine/. It is very
   popular in the Unix world. The 4.0x versions added support for online
   IMAP folder access. To configure pine, press S to enter Setup, L to
   configure your collection list, and then A to add a collection. Enter
   your server, username, and mail folder directory as described above.
   
   Simple, isn't it? Pine supports multiple IMAP collections, so you can
   add as many as you want and manage them all from one place.
   
                              Pine screenshot
                      Screenshot of pine configuration
                                      
  3.2. Netscape Communicator 4.07
  
   Netscape Communicator is an integrated web browser and Mail/News
   reader that is in fairly widespread use today. The 4.07 version is
   suitable for light mail processing, but it will crash if you give it a
   folder with well over 1000 messages (try it). Netscape Communicator is
   available from http://home.netscape.com/.
   
   To set Netscape up for IMAP, select Preferences under the "Edit" menu,
   expand the "Mail & News" tab, click on the "Mail Server" entry, and
   enter in your username and your IMAP server. Obviously, make sure the
   server type "IMAP4" is selected. Click on the "More Options" box and
   enter in the mail folder directory you selected above. Finally, make
   sure the "Move Deleted Messages to Trash" box is not checked; this
   feature is rather broken and IMAP already provides flags to deal with
   deleted messages.
   
   Netscape 4.0x does not support multiple IMAP collections, and it
   cannot automatically copy sent mail to a remote IMAP folder. Netscape
   4.5 does support these things, but I have found the IMAP client in
   Netscape 4.5 to be far too unstable for real work.
   
                            Netscape screenshot
                    Screenshot of Netscape configuration
                                      
  3.3. TkRat 1.2
  
   TkRat is my favorite graphical mail client right now. It also happens
   to be the only Open Source IMAP client I know (it's licensed under a
   BSD style license). It is available from
   http://www.dtek.chalmers.se/~maf/ratatosk/.
   
   In TkRat, select "New/Edit Folder" from the Admin menu. Then select
   "IMAP Folders" from the Import menu, and type in your username, IMAP
   server, and a wildcard matching the folders in your mail folder
   directory. Note that TkRat expects a wildcard rather than a directory.
   
                              TkRat screenshot
                     Screenshot of TkRat configuration
                                      
4. Important Usage Notes

   Here are some things about IMAP that are not obvious, but are very
   useful to know.
   
  4.1. Folder hierarchies
  
   Currently, a limitation of the UW IMAP server is that a folder cannot
   contain both messages and subfolders. That is, a folder can either
   contain subfolders, or messages, but not both. To specify a folder
   that contains subfolders, you need to add a / to the end of its name.
   
   Here are some examples:
     * Courses/ is a folder that can only contain subfolders.
     * Courses/Calculus is a subfolder of Courses/. It can only contain
       messages.
     * Courses/Languages/ is a subfolder of Courses/ that can only
       contain further subfolders.
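   The naming rule is mechanical enough to capture in a few lines of
   Python. This is only a sketch of the convention described above; the
   helper names are mine, not part of any IMAP API.

```python
import posixpath

def kind(name):
    # Under the UW server's rule, a trailing "/" marks a folder that
    # can hold only subfolders; any other name holds only messages.
    return "subfolders only" if name.endswith("/") else "messages only"

def parent(name):
    # Containing folder, e.g. Courses/Calculus -> Courses/.
    head = posixpath.dirname(name.rstrip("/"))
    return head + "/" if head else ""
```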
       
  4.2. The Inbox
  
   The folder name INBOX, Inbox, or any capitalization thereof, is
   reserved for your inbox. You can't create a folder of your own with
   this name.
   
5. Security Considerations

   Running an IMAP server adds another system daemon, and thus, another
   potential security vulnerability. If you're not going to make use of
   the capabilities of IMAP, you're probably better off not installing
   it.
   
   A separate issue is the use of plaintext passwords for logins and
   authentication. Like most services, IMAP sessions are sent as
   plaintext over the Internet. Many people feel that sending passwords
   over the Internet as plaintext is no big deal. These people tend to
   use telnet, ftp, POP3, etc. without reservations. However, if you
   don't like sending your password over the Internet unprotected, you
   have precious few options:
     * Use Netscape Messaging Server, which supports IMAP over SSL.
       Unfortunately, there's no Linux version available, and the
       software costs $1295 besides.
     * Compile the Cyrus IMAP Server with Kerberos authentication
       support.
     * Use Secure Shell to transmit your IMAP session over an encrypted
       tunnel.
     * Install imp (a web-to-IMAP gateway) on your machine and access it
       through an SSL web server.
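   The Secure Shell option is worth a sketch, since it needs no special
   server support: forward a local port to the server's IMAP port, then
   point the mail client at localhost. The host and user names below
   are placeholders.

```python
def tunnel_command(server, user, local_port=1430, imap_port=143):
    """argv for an SSH port forward that carries an IMAP session
    inside the encrypted connection, e.g.

        ssh -L 1430:localhost:143 user@mail.example.com

    Afterwards, an IMAP client on this machine connects to localhost
    at `local_port` instead of talking to the server directly."""
    forward = "%d:localhost:%d" % (local_port, imap_port)
    return ["ssh", "-L", forward, "%s@%s" % (user, server)]
```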
       
   Unfortunately, all of these techniques are beyond the scope of this
   article. The fact of the matter is, most of the data on the Internet
   is transmitted as plaintext these days. If it were easy to conceal
   this data, people would be doing it already.
   
6. Conclusion

   Fewer and fewer people are able to handle their daily volume of e-mail
   from one client on one machine all the time. While many are dealing
   with the e-mail mobility problem using the existing infrastructure of
   telnet, remote X displays, and distributed file systems, IMAP alone
   offers a comprehensive, application level solution tailored
   specifically for this need. By offering network transparency without
   sacrificing functionality, IMAP promises to revolutionize mobile mail
   access and change the way we read our mail for the better. I expect
   that user demand will soon force IMAP support to be a required feature
   on all mail clients.
   
   In short, if you're really happy with the way you read your mail now,
   then you don't need to bother with IMAP, but if you're itching for
   some additional flexibility in managing your mail, you should
   definitely consider adopting IMAP.
   
7. Additional References

   IMAP4rev1 RFC
   
   A paper comparing IMAP and POP
   
   A long list of products supporting IMAP
     _________________________________________________________________
   
                        Copyright  1998, David Jao
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
   "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                     Linux Installation Primer, Part 4
                                      
                               By Ron Jenkins
     _________________________________________________________________
   
   
   
   Copyright  1998 by Ron Jenkins. This work is provided on an "as is"
   basis. The author provides no warranty whatsoever, either express or
   implied, regarding the work, including warranties with respect to its
   merchantability or fitness for any particular purpose.
   
   
   
   The author welcomes corrections and suggestions. He can be reached by
   electronic mail at rjenkins@qni.com, or at his personal homepage:
   http://www.qni.com/~rjenkins/. Corrections, as well as updated
   versions of all of the author's works may be found at the URL listed
   above.
   
   
   
   NOTE: As you can see, I am moving to a new ISP. My old one changed to
   metered access, which makes the information superhighway a toll road.
   Please bear with me as I get everything in working order. The e-mail
   address is functional; the website will be operational hopefully
   around mid December or early January.
   
   
   
   MESSAGE TO MY READERS:
   
   
   
   I would like to thank you all for your kind comments, and constructive
   criticisms concerning my articles. I would also like to thank the
   staff of the Linux Gazette, Marjorie in particular, for giving an
   unskilled goofball like me a chance to publish my scribbling. Keep
   those e-mails and questions coming!
   
   
   
    CHANGE IN THE SEQUENCE OF UPCOMING ARTICLES:
   
   
   
   To preclude a flood of e-mail on the subject, I have decided to change
   the order in which my columns will run. I had originally intended to
   do the IP_Masq/Internet Gateway piece this month, but then it occurred
   to me - what good is an Internet gateway without a network?
   
   
   
   So, the new sequence for the next few months will be:
   
   
   
    This column: Planning a home network.
   
   Deploying a home network.
   
   IP_Masq/Internet Gateway.
   
   
   
   If you can't wait that long, and have a need for the Internet Gateway
   stuff, just drop me an e-mail.
   
   
   
   Part Five: Planning a Home Network
   
   In this installment, we will address some of the issues necessary to
   plan a home network. We will cover most of the issues that you will
   encounter, and perhaps a few you had not thought of. Finally I will
   walk you through the steps to creating an effective and optimal
   Network Plan. As with each installment of this series, there will be
   some operations required by each distribution that may or may not be
   different in another. I will diverge from the generalized information
   when necessary, as always.
   
   
   
   In this installment, I will cover the following topics:
   
   
   
    1. Do I need a home network or not?
    2. Some background theory on Ethernet and TCP/IP.
    3. Choosing a Topology.
    4. Choosing a NIC.
    5. IP issues - Reserved or Proper IP addresses.
    6. WAN connection issues.
    7. Planning the network - Physical vs. Logical layout.
    8. Planning ahead for easy administration.
    9. Deciding what services you require.
   10. Disaster Recovery and Fault Tolerance issues.
   11. Bringing it all together.
   12. References.
   13. Resources for further information.
   14. About the Author.
       
   
   
   Do I need a home network or not?
   
   This is a relatively easy question to answer. If you have more than
   one computer, you can certainly benefit by networking your boxes
   together. If you have a SOHO or small business, you can benefit as
   well.
   
   
   
   You might ask, "Why do I need a network?"
   
   
   
   Some possible answers include:
   
   Integration of common services such as file sharing, so that your
   documents are stored on a single machine to which all or some of your
   users have access.
   
   
   
   Consolidation of all documents and data, eliminating the "Who's got
   the latest version of this freaking spreadsheet or document?" problem.
   
   
   
   The ability to create internal discussion forums, as well as access to
   newsgroups relevant to your business or personal interests, either in
   real time or off line.
   
   
   
   Consolidated Internet access for everyone where only one modem is
   required.
   
   
   
   Fax and scanner access from all your workstations.
   
   
   
   The desire to learn more about networking in general and Unix
   networking in particular, providing you with new marketable skills.
   
   
   
   Some background theory on Ethernet and TCP/IP.
   
   For an overview of TCP/IP and networking, see my article in last
   month's issue.
   
   
   
   Briefly, to network two or more computers, three things are required:
   
    1. A Network Interface Card (NIC) which is installed in the computer,
       and provides the physical as well as the logical (more on this
       later) connection to the network.
    2. A medium to exchange information from machine to machine. See
       Topology below.
    3. A common protocol to transport the data from machine to machine.
       In our case, TCP/IP.
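   
   On a running Linux box, all three requirements can be checked quickly
   from the shell. This is an illustrative sketch, not part of the setup
   proper; it assumes only that the /proc filesystem is mounted, which is
   true on any stock kernel.

```shell
# 1. NIC and driver: every interface the kernel recognizes is listed here.
cat /proc/net/dev

# 2. Medium: count the interfaces the kernel actually sees.
interfaces=$(grep -c ':' /proc/net/dev)
echo "interfaces seen by the kernel: $interfaces"

# 3. Protocol: TCP/IP support shows up as the loopback device "lo",
#    which exists even before any real NIC is configured.
grep -q 'lo:' /proc/net/dev && echo "TCP/IP loopback present"
```

   If a freshly installed card does not show up in /proc/net/dev, the
   kernel did not detect it, and the driver (or the card's IO and IRQ
   settings) is the first place to look.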
       
   
   
   Choosing a Topology.
   
   Crucial to the proper performance of your network is the topology you
   choose. There are many different topologies available, but for the
   purpose of your installation, I will confine the choices to the two
   most common topologies - 10BASET and 10BASE2, or more appropriately a
   star network versus a bus network, respectively.
   
   
   
   Pros and cons of the two different topologies:
   
   10BASET:
   
   Pros:
   
   Uses unshielded twisted pair (UTP) wiring. It is a point-to-point
   topology, meaning that if any node (computer) on the network goes
   down, the rest are unaffected.
   
   
   
   Cons:
   
   Requires the use of a hub as a common connection point. Wiring is more
   difficult, since each node (computer) requires a separate connection
   to the central hub. More expensive than 10BASE2.
   
   
   
   10BASE2:
   
   Pros:
   
   Uses cheap, easily available coaxial cable, forming a "bus" that
   connects all nodes. No hub or extra equipment is required. It is easy
   and simple to wire, and costs significantly less than a 10BASET
   topology.
   
   
   
   Cons:
   
   If the bus goes down, the entire network goes with it. Requires proper
   termination at both ends of the bus (basically two fancy 50-Ohm
   resistors). A termination problem can bring down the whole network.
   
   
   
   Finally, another point to consider: mixed topologies are often used to
   accomplish different objectives. For instance, say you have an office
   set up in the basement that contains many workstations physically
   close together, while upstairs you have three computers used by your
   family in disparate locations. The solution: downstairs you use a star
   (10BASET), which provides better fault tolerance for your business
   machines; upstairs you use a bus (10BASE2) to simplify the wiring. To
   tie it all together, you run a 10BASE2 cable downstairs, extending the
   bus, and hook it up to the hub. You can then access your "office" from
   upstairs to get your work done, and the business machines can contact
   you, e-mailing you until they feel happy. Voila!
   
   
   
   NOTE:
   
   When determining the length of coaxial cable, remember that the cable
   will run from machine to machine, not in one long piece.
   
   
   
   If you are going with UTP, then depending on the size of your
   installation and the amount of cable required, you may want to look
   into purchasing the cable in bulk, along with some RJ-45 plugs and a
   crimping tool, and doing it yourself.
   
   
   
   Choosing a NIC.
   
   This can be a tricky one. Almost everyone is tempted to buy the cheap
   clone cards; sometimes that works, and sometimes it does not. At the
   very least, ask specifically whether the card's plug-n-pray features
   can be disabled, as you may need to set the IO address and IRQ
   explicitly.
   
   
   
   This mostly applies to the ISA based cards. Most PCI cards can be
   autoprobed if you are using kernel 2.0.34 or later.
   
   
   
   I like the 3Com products. They cost a little more, but it's worth it
   in the long run. For an ISA bus, I like the 509B. For a PCI bus, I
   like the 905 series. The PCI NE2000 clones are also known to work.
   Keep in mind that the type of NIC you buy is largely determined by
   your topology choice. I recommend getting a "combo" card, which
   contains both a 10BASET and a 10BASE2 interface. This lets you connect
   to either topology, and is a prudent measure.
   
   
   
   As you will soon see, networks are never a finished product, but
   rather a constantly changing, ever-evolving project. Getting a combo
   card will give you maximum flexibility as your network changes. And it
   will.
   
   
   
   A final note: NICs are classified by the width of the data path they
   use to transfer data. Most Ethernet cards are 8, 16, or 32 bits wide,
   and the higher the number, the better. The 8 and 16 bit cards are
   usually ISA cards; the 32 bit cards are PCI.
   
   
   
   IP issues - Reserved or Proper IP addresses.
   
   The next thing you will need to determine is the addressing scheme you
   will use on your network. I always tell my clients that getting proper
   IP addresses (a block of IPs purchased from your ISP) is the best way
   to go, but it does cost more. This is usually referred to as a
   dedicated connection, and costs more than a regular dialup account.
   
   
   
   A dedicated connection means your ISP will set aside one of their
   modems for your personal use. This, along with the IP addresses
   reserved for you, accounts for the higher pricing.
   
   
   
   Also, a dedicated connection allows you to have as many e-mail
   addresses as you want, put up your own website or sites, and, for
   $74.00, register your own domain on the Internet. This gives friends,
   clients, or browsers a permanent way of contacting you, obtaining
   information on your products or services, or a virtual gathering place
   for your family to keep in touch. As you and your family exchange more
   and more information, it can ultimately become the central point for
   family news, organizing events, and keeping current on things without
   those $50.00 phone calls everyone makes around Thanksgiving and
   Christmas.
   
   
   
   More commonly, people want to use reserved IPs - certain subnets set
   aside for this sort of use, which are not routable unless they pass
   through a gateway machine, or proxy. The gateway effectively hides the
   interior network (usually 192.168.x.x) from the outside world, making
   all your machines appear to the outside world as the gateway machine.
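   
   The reserved ranges themselves come from RFC 1918: 10.x.x.x, 172.16.x.x
   through 172.31.x.x, and 192.168.x.x. As a purely illustrative sketch
   (the addresses tested below are examples, not recommendations), a few
   lines of shell can tell you whether a dotted-quad address is reserved
   or routable:

```shell
# Print "private" for an RFC 1918 reserved address, "routable" otherwise.
# Only the first two octets need checking.
is_private() {
    set -- $(echo "$1" | tr '.' ' ')   # split a.b.c.d into four words
    if [ "$1" -eq 10 ]; then echo private; return; fi
    if [ "$1" -eq 172 ] && [ "$2" -ge 16 ] && [ "$2" -le 31 ]; then
        echo private; return
    fi
    if [ "$1" -eq 192 ] && [ "$2" -eq 168 ]; then echo private; return; fi
    echo routable
}

is_private 192.168.1.5     # private - safe for an interior network
is_private 172.20.0.1      # private - the lesser-known 172.16-31 block
is_private 198.186.203.1   # routable - a "proper" Internet address
```

   Anything "routable" must be obtained from your ISP; anything "private"
   is yours to assign freely behind the gateway.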
   
   
   
   The downside to this scheme is that you will have only one e-mail
   address, the one you got at sign-up time. However, many ISPs offer
   dialup accounts with more than one e-mail address, and some even allow
   concurrent connections (meaning you can have more than one modem
   connected at the same time.) Check around in your area for this kind
   of service. It will probably cost more, but not as much as the
   dedicated connection option.
   
   
   
   Finally, try to get a "static IP" address instead of a "dynamic" one.
   This will allow you to put up a webserver for personal use, or to
   advertise your business. Without a static IP, it is very difficult to
   do much more than pull from the Internet; you will not be able to push
   much more than e-mail.
   
   
   
   Before I get bombed with e-mail about dynamic IP hacks, scripts that
   can post your current IP, and so on: please keep in mind that the
   purpose of this series is to provide new users of the Linux operating
   system with as many services and options as possible, while keeping
   configuration and deployment as easy as possible.
   
   
   
   As the series progresses and our skill levels improve, I will begin to
   go a little deeper into the details of tuning and tweaking.
   
   
   
   WAN connection issues.
   
   This is primarily a budgeting issue. Briefly, you have two dialup
   choices, and for dedicated connections, you have three. Outlined below
   you will find the various choices compared and contrasted, along with
   my recommendations and what I usually choose.
   
   
   
   Dialup Choices:
   
    1. A standard modem, 33.6Kbps or less. (What about 56K? I have not
       seen any so far that are not just a telephone interface and an
       impedance transformer, with all the "modem" work being done by
       your CPU. This is like P-N-P on steroids. If anyone has
       successfully used one, I would love to hear about it.) This option
       is suitable for small networks, <=5 users, who will be using the
       Internet sporadically. It costs the least. Requires a computer to
       function.
    2. An ISDN modem and an ISDN line. This option is best for networks
       of <=25 users, or power users who are on the net most all of the
       time and doing many tasks simultaneously. I can and have soaked
       one of these all by myself. But then again, I have nowhere to go
       and all day to get there. :-) This option will give you a true
       steady throughput of slightly less than 128Kbps. It will require
       an additional ISDN line to be purchased; in my area, it runs
       $112.00 per month for unlimited time. There are metered usage
       plans that can run as low as $40.00 per month. This might make
       sense if you and your network will be sporadic users, but be
       warned - speed is addictive, and you may find your sporadic use
       goes way up. Additionally, your ISP connection charge may or may
       not be more. Requires a computer to function.
       
   
   
   Dedicated Choices:
   
   Here you have both of the options above, and an additional one
   described below.
   
   
   
   A dedicated router. This device takes care of the connection to your
   ISP, automatically redials if the link fails, and offers firewall and
   many other security features. It is an independent device, so no
   computer is required; all you need is the router and the ISDN line.
   Costs range from ~$100.00 - $800.00. I use the Ascend Pipeline 50,
   which as I recall cost about $600.00 when I bought it three years ago.
   This is the best choice for people with a dedicated connection who
   plan to do business on the web as well as provide Internet access to
   their end users; otherwise, it's probably overkill. It is the easiest,
   quickest, most reliable way to manage your connection, and can be set
   to dial on demand, from your network out as well as from the Internet
   in. This may save you some money if you are on a metered usage plan.
   Your ISP charges will definitely be higher; in my area, a dedicated
   ISDN account ranges from ~$150.00 - $300.00 per month.
   
   
   
   Planning the network - Physical vs. Logical layout.
   
   There are two things to consider when planning a network: the physical
   layout (where the machines are, where and how the cable will be
   installed, which machines will provide which services, etc.) and the
   logical layout (how the data actually flows, and how each machine
   interacts with the network, usually expressed in a hierarchical
   manner.)
   
   
   
   For instance, say you have a network consisting of four workstations,
   two on each side of another three machines, a fileserver, an Internet
   gateway, and a DNS server, all connected to each other by a bus
   (10BASE2) architecture.
   
   
   
   Physically, you have two workstations, the file server, gateway, and
   DNS, and two more workstations. Logically, you have four levels to your
   network - at the top you have your bus (since any interaction requires
   the bus to operate,) at the second tier, you have the Internet gateway
   and the DNS machines (since all machines require DNS to "find" each
   other, and DNS needs the gateway for name requests it cannot resolve,)
   at the third tier, you have the fileserver (since all the workstations
   need access to this machine, but it should not interact with the
   outside world for security reasons,) and finally at the fourth level,
   you have your workstations.
   
   
   
   Planning both the physical and logical layout of your network is
   crucial to its effectiveness and performance. On the physical side,
   you need to plan where your cabling will be, and pay particular
   attention to how it is placed. Your plan will need to include entry
   and exit points where necessary, how you can best arrange the cables
   to run together, and how you will bundle and anchor them. You will
   also need to consider the placement of any other network devices, such
   as hubs or routers, keeping them close to the machines that will
   connect to them so that you can use the shortest lengths of cabling
   possible.
   
   
   
   On the logical side, check and recheck your logical layout to make
   sure you are placing your machines in the proper logical positions
   that will provide maximum performance and minimum interaction
   problems. Looking at your network logically may point out some
   problems not apparent in the physical layout.
   
   
   
   Planning ahead for easy administration.
   
   Now we come to one of the two things most people do not or will not
   do, but which are crucial to effective management of your network. You
   will need to do a thorough and complete inventory of all your
   hardware. At a bare minimum, you should collect the following
   information about every computer that will be connected to your
   network:
   
   
   
    1. Make, model number, and manufacturer of the computer.
    2. Type and speed of your CPU.
    3. Amount of RAM.
    4. Bus type.
    5. Number and type of slots used/available.
    6. The make, model, and manufacturer of each device inside your
       computer.
    7. The IO and IRQ for each of the above devices.
    8. Make, model, and manufacturer of your video card, including the
       amount of RAM onboard.
    9. Make, model, and manufacturer of your monitor.
   10. What resolutions your monitor is capable of.
   11. Type and size of floppy drive(s).
   12. Type and size of hard disk drive(s).
   13. Type, make and model of your mouse.
   14. Make, model and manufacturer of any external devices.
   15. Type and version of operating system(s).
   16. Make, model, manufacturer and interface of your printer (if
       needed).
   17. Make, model, manufacturer and interface of your backup device (if
       needed).
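   
   Much of this inventory can be pulled straight out of /proc on each
   Linux box. The sketch below assumes a typical /proc layout; the
   fallbacks cover kernels where a particular file is absent.

```shell
# Gather a rough hardware inventory from what the kernel already knows.
echo "=== CPU ==="
cat /proc/cpuinfo 2>/dev/null

echo "=== Memory ==="
cat /proc/meminfo 2>/dev/null

echo "=== IO ports in use ==="
cat /proc/ioports 2>/dev/null || echo "(unavailable on this kernel)"

echo "=== IRQs in use ==="
cat /proc/interrupts 2>/dev/null || echo "(unavailable on this kernel)"

echo "=== PCI devices ==="
cat /proc/pci 2>/dev/null || echo "(no /proc/pci on this kernel)"
```

   Run it on every machine and file the output with your inventory sheet;
   the IO port and IRQ listings are exactly what you will need when
   setting up ISA NICs later.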
       
   
   
   Ideally, you should record everything, all the way down to the
   chipsets, but you can start with the above. I can hear everyone
   yelling "What good will this do me?"
   
   
   
   Well, consider this - if your computer has only 4 MB RAM, and is
   running some flavor of windows, you will need to add more RAM.
   Similarly, if some of your workstations contain only ISA slots, while
   others have both PCI and ISA slots, now is the time to find out. Not
   after you get back from the store with a bunch of PCI NIC's.
   
   
   
   The type and version of the operating system is very important. If you
   have any Novell boxes, they will require additional configuration and
   translation services. The same applies to some Mac's.
   
   
   
   Additionally, this time and effort will pay off in the long run when,
   not if, one of your machines starts misbehaving.
   
   
   
   Deciding what services you require.
   
   This is important as well, because the services you need will somewhat
   dictate how your network is set up. Some of the more popular things
   are listed below. You may or may not have additional requirements.
   
   
   
    1. File Server - this will most likely be the first thing to think
       about. Consolidating access to your information was one of the
       reasons networks were invented.
    2. Internet access - this is the second most common service required,
       and will allow all workstations to connect to the Internet.
       Depending on the type of connection, you may or may not be able to
       offer e-mail, ftp, and web services to the outside world as well
       as internally. This will require either a router or a computer
       dedicated to the purpose. If you are using a computer to provide
       access, some additional configuration and software may be
       necessary.
    3. Name Resolution - some type of name resolution is required on any
       TCP/IP network. For smaller networks, you can simply use a hosts
       file to take care of this. If you have a dedicated connection, DNS
       is required. You must have two DNS machines to maintain your
       network information and when necessary, update the Internet root
       servers. Finally, if you are connecting through a dial up
       connection, you should probably consider running a caching
       nameserver from which all your network nodes obtain information,
       and in turn you instruct this machine to use your ISP's DNS
       servers. This will speed up things a bit on slower connections.
    4. If you are in an all Unix shop, or a cross platform environment,
       you will probably want to use NFS and possibly Samba. The former
       can be used by Unix machines by default, and on windows boxes with
       additional software. The latter is used exclusively by windows
       clients, making the Linux machine appear as just another computer
       in your Network Neighborhood, and allows you to transfer files by
       simply dragging and dropping, just like copying files from one
       disk to another.
    5. Sometimes it is advantageous to be able to execute programs on a
       remote machine, and have the results display on another
       workstation. Using telnet, you can execute any character mode
       programs, but often you will need and/or desire to run remote
       programs that require the X windowing system to function.
       Instructions for this can be found in the September issue of the
       Linux Gazette.
    6. Another handy thing to run is a time server. This allows all your
       machines to synchronize to the National Institute of Standards and
       Technology (NIST) atomic clock in Ft. Collins, Colorado. Many
       Internet applications and services are very sensitive to time
       disparities, and you want your servers to be right on, for
       examining the logs for problems or unauthorized use.
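   
   For item 3, on a small reserved-address network the hosts file really
   is all you need. A minimal sketch (the names and addresses are
   invented, and the example writes to a scratch file rather than the
   real /etc/hosts):

```shell
# Build a sample hosts file: one line per machine --
# IP address, fully qualified name, then any aliases.
cat > /tmp/hosts.sample <<'EOF'
127.0.0.1      localhost
192.168.1.1    gateway.home.lan     gateway
192.168.1.2    fileserver.home.lan  fileserver
192.168.1.3    ws1.home.lan         ws1
192.168.1.4    ws2.home.lan         ws2
EOF
cat /tmp/hosts.sample
```

   Copy the same file to every machine on the network (the windows boxes
   keep an equivalent hosts file of their own), and every node can find
   every other by name.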
       
   Disaster Recovery and Fault Tolerance issues.
   
   I know I keep harping on this subject throughout my columns, but it is
   crucial. You WILL need a backup device. Ideally, you should have a
   backup device on every workstation and server on your network.
   Practically, you can get by with one backup device, usually on the
   file server, or a machine dedicated to this function.
   
   
   
   When you purchase a backup device, make sure it is supported by Linux.
   Otherwise what you end up with is a very expensive bookend. This
   machine should have sufficient disk space to handle the spooling of
   your windows and Mac clients. Your Unix machines should be able to
   access the backup device remotely.
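   
   The classic Unix idiom for that remote access is to stream a tar
   archive across the network to the machine that owns the drive. The
   host name "backuphost" and the device /dev/tape below are assumptions
   for illustration; the demonstration at the end runs the same pipe
   against a plain file instead of a real tape.

```shell
# Remote form (illustrative -- "backuphost" and /dev/tape are assumptions):
#   tar cf - /home | rsh backuphost dd of=/dev/tape
#
# The same pipe, demonstrated locally against a plain file:
mkdir -p /tmp/demo && echo "important data" > /tmp/demo/file.txt
tar cf - /tmp/demo 2>/dev/null | dd of=/tmp/demo.tar 2>/dev/null
tar tf /tmp/demo.tar    # list the archive to confirm the file arrived
```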
   
   
   
   Also, you need to define a backup schedule for both your end users and
   the servers. At a minimum, you should have enough tapes (or whatever
   media your backup device uses) to perform daily backups Mon. - Fri. as
   well as a weekly backup Sat. or Sun., for two weeks. This will at
   least allow you to go back two weeks when, not if, you or one of your
   end users finds out they need a file they deleted "Uhh, sometime last
   week."
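   
   Under cron, a rotation like the one described above takes only two
   entries. A sketch, assuming the tape drive is /dev/tape and that /home
   and /etc are what you want preserved (both are assumptions); the
   example writes the entries to a scratch file for review rather than
   installing them:

```shell
# Write the proposed crontab entries to a scratch file for review.
cat > /tmp/backup.crontab <<'EOF'
# Daily backup, Monday-Friday at 2 am, onto that day's tape:
0 2 * * 1-5  tar cf /dev/tape /home /etc
# Weekly backup, Sunday at 3 am, alternating week-A and week-B tapes:
0 3 * * 0    tar cf /dev/tape /home /etc /usr/local
EOF
cat /tmp/backup.crontab
```

   When the plan looks right, running crontab /tmp/backup.crontab as root
   on the backup machine installs it; the operator's only daily job is
   swapping tapes.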
   
   
   
   Bringing it all together.
   
   You have chosen your topology, picked your NICs, decided on the type
   of IP addresses you will use, decided on the type and speed of your
   Internet connection (if needed,) looked at your proposed network from
   both a physical and logical point of view, completed your hardware and
   software inventory, determined what services you will require, and,
   last, developed a backup schedule and decided on a backup device (if
   needed.)
   
   
   
   "What do I do now?"
   
   
   
   You check everything over and over. You want to make all your mistakes
   at the planning stage, not the deployment stage.
   
   
   
   Once you are satisfied with your plan, write it all down - what you
   need to purchase, as well as the things mentioned in this article.
   Then check it one more time.
   
   
   
   Finally, you can start shopping around for the best price on the
   things you will need. Here are a few general guidelines: when
   purchasing coaxial cable, don't buy it at a computer store. The kind
   of cable they sell is crap, and noisy as all get-out. Go to a ham
   (amateur) radio shop, and tell them you want RG-58A/U coax with BNC
   connectors on each end, in the lengths you require. If a ham shop is
   not available, go to Radio Shack and look there; I believe they offer
   6, 8, 12, and 50 foot lengths.
   
   
   
   When purchasing your NIC's, look into bulk discounts. If you are
   buying at least four or five, there is often a price break.
   
   
   
   Stay tuned - next month we are going to actually install and
   configure the network!
   
   
   
   References:
   
   The System Administrators Guide
   
   The Network Administrator's Guide
   
   The NET-3 HOW-TO
   
   The Ethernet HOW-TO
   
   The IP_Masq mini HOW-TO
   
   The Hardware HOW-TO
   
   
   
   Resources for further information:
   
   http://sunsite.unc.edu/LDP/
   
   http://www.ssc.com/
   
   http://www.lantronix.com/
     _________________________________________________________________
   
               Previous ``Linux Installation Primer'' Columns
                                      
   Linux Installation Primer #1, September 1998
   Linux Installation Primer #2, October 1998
   Linux Installation Primer #3, November 1998
     _________________________________________________________________
   
                       Copyright  1998, Ron Jenkins
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
   "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                    Linux - the Darling of Comdex 1998?
                                      
                          By Norman M. Jacobowitz
     _________________________________________________________________
   
   Once again the mega-computer show known as Comdex
   (http://www.comdex.com/) took over Las Vegas, Nevada, this past
   November 15 through 20th. On hand to represent the Linux community
   were 12 vendors who made up this year's Linux Pavilion: Linux Journal,
   Red Hat, S.u.S.E., Caldera, VA Research, Linux Hardware Solutions,
   Linux International, InfoMagic, Enhanced Software Technologies, Turbo
   Linux, Interix and ApplixWare. Special Linux-related events included
   the presentation of the first annual Linux Journal Editor's Choice
   Awards by our esteemed editor, Marjorie Richardson.
   
   As usual, there were throngs of corporate buyers, sellers and
   interested onlookers from nearly every nation on hand for the event.
   Hundreds of exhibits, from small, quiet displays of software to a real
   high-wire balancing act performed above the crowd, entertained and
   informed visitors.
   
   But there are several factors that set the recent Comdex apart from
   years past. Number one, there was a noticeable drop-off in business
   attendance. Several major corporations, including Netscape and Intel,
   either did not show up at all or rented small meeting spaces rather
   than building booths. Mirroring the corporate no-shows was the
   precipitous decline in individual attendance. The missing visitors
   were readily noticed -- taxi lines were shorter, hotel rooms were
   easily secured, etc.
   
   What makes Comdex 1998 stand out even more is the dramatic increase in
   the amount of attention Linux received. Not only was the
   Linux Pavilion packed from the opening on Monday until the close on
   Friday, but other exhibitors had more to say about Linux during the
   course of the show.
   
   Evidence was everywhere that Linux is reaching past the IT departments
   at major corporations and getting the attention of management and
   other non-technical decision makers. This in turn meant that press
   attention was focused on Linux as never before. Several vendors in the
   Linux Pavilion were interviewed for a local TV news segment, while
   most major computer oriented print outlets made at least some mention
   of the Linux presence at Comdex.
   
   Even more impressive were the numbers of average computer users who
   approached vendors at the Linux Pavilion with an open mind and lots of
   questions ... and then walked away with a distribution CD! Linux
   International was distributing several different CD-ROMs and asking
   for a $1 donation. They "sold out" of CDs quite quickly, and were
   eventually rescued by the generosity of S.u.S.E. As a result of the
   efforts of LI and the rest of the Linux Pavilion, there are now
   perhaps as many as several thousand new Linux users.
   
   So, what does Comdex 1998 mean for the future of Linux? Well, based on
   my experience there and the people I spoke to, I believe we can expect
   several of the following events, if not all, to occur between now and
   the turn of the century:
     * Far more major companies will be porting their software to Linux.
       At Comdex, I was approached by many programmers and marketing
       types alike who were sent to the Pavilion to assess the potential
       for porting their wares to Linux. Look for a few surprises to come
       up in the next year; rumors were flying about various vendors
       currently alpha testing their products for Linux.
     * Even more hardware will be sold with native Linux drivers
       available, especially in the field of RAID controllers, now that
       Oracle is ported to Linux. Again, I spoke to many programmers sent
       by hardware vendors to seek out counsel and advice on writing
       drivers for Linux.
     * Linux will continue to grow in appeal to "end-user" types who are
       fed up with the inadequacies of proprietary, closed-source
       Operating Systems. Many a newcomer was exposed to Linux at Comdex;
       many of them will wind up long-term users. Look for this to emerge
       as a trend.
     * Major vendors will consider their Linux ventures to be a major
       strategic business move, not merely a sideline venture. At Comdex,
       many of Oracle's major announcements centered around their support
       for Linux and the role of Linux in its future. Look for more
       companies to expand into Linux in some capacity and proudly
       advertise and publicize those moves, rather than burying the Linux
       news under other announcements.
     * Linux-specific skills will become a hot resume item for
       programmers, system administrators, and other techie types in the
       job market. Many professionals from several different
       organizations asked me for personal assistance in helping them
       locate Linux-savvy professionals for their personnel pools; one of
       my friends now has a lucrative job with someone I met briefly at
       Comdex.
     * Linux will continue to improve, while certain major Operating
       Systems will see no such improvement, even as major new releases
       are published. Yes, this is an opinionated prognostication, but
       there is evidence to support such an assertion: few vendors at
       Comdex 1998 had anything "new" or dramatically improved to show.
       Their plight is not going to change overnight, no matter what kind
       of marketing hype surrounds their upcoming releases.
       
   In all fairness, there are some negative interpretations of the
   attention Linux received at Comdex. For one, the strong press
   attention could be partly explained by the very point made in the
   previous paragraph: because big vendors like Microsoft didn't have any
   new mind-boggling toys to show off, the press had to look for news
   where it could find it, and Linux was the biggest new thing to talk
   about. Plus, with attendance off and fewer vendors on hand, visitors
   had to look harder to find anything interesting to see at the show;
   it's possible they might not have come down to see us had there been
   more going on at the other Comdex venues.
   
   Yet the reception received by Linux vendors and enthusiasts at Comdex
   1998 can only be described as overwhelmingly positive. As a final bit
   of evidence to support that claim, let me relate the following
   personal anecdote ...
   
   On the flight down to Las Vegas from Seattle, it was my pleasure to
   sit next to a Vice President from Microsoft. This gentleman was a
   pleasure to speak with about Microsoft, Open Source software and
   Linux. He was filled not with judgment and disdain, but genuine
   interest and thoughtful questions about what free software and Linux
   mean for the future of computing. Not only that, but he did assert
   that while companies like Microsoft are in business to make money, he
   himself is very interested in learning more about Linux and other free
   software. He said that many of his colleagues and contemporaries all
   over the business spectrum are equally intrigued. Something tells me
   his attitude is not unique ... Linux and Free/Open Source Software are
   poised to take a remarkable position in the future of computing and
   technology.
   
   With all of these facts taken into consideration, there is one logical
   conclusion: Comdex 1998 was one more step on Linux's way to complete
   world domination.
     _________________________________________________________________
   
                   Copyright  1998, Norman M. Jacobowitz
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
    "Linux Gazette...making Linux just a little more fun!"
     _________________________________________________________________
   
                        My Hard Disk's Resurrection
                                      
                               By Ren Tavera
     _________________________________________________________________
   
   I'm not a hacker or a computer genius; I'm just a common mortal who
   has a computer at home, works with computers at the office, and
   usually does the same things mortals do on their home and work
   computers. Well, almost the same things; I really don't much like
   playing computer games.
   
   Like many others, I began using computers with commercial software
   (all right, let's admit it: I've been using the wintel type platform
   for years) and I thought there was nothing better. I can't say
   anything about a Mac - maybe some day I'll try one - but a few months
   ago, reading some messages on a BBS, I found someone saying "...and
   here I am, happy with Linux...".
   
   -Linux? What's that?- I thought. -Maybe another game...- I wasn't
   really interested, so I forgot all about it... almost. One fine day
   I asked him about that "Linux" thing, and the answer was a short
   explanation of the OS and some of the advantages of using it.
   
   My first impression was that UNIX was for supercomputers (I own an
   AMD K6/166 with 32Mb of RAM), for strange scientific applications
   or for big companies. I thought I didn't need it; after all, most
   people say it's really hard to learn. I'm not a programmer or a
   scientist, so no, I didn't need that. But my curiosity kept
   growing; I asked more people about Linux and was surprised by how
   many were using it (I even found a Linux Users Group in my home
   city). The answers and stories were amazing and exciting; maybe I
   could give it a try.
   
   I bought a book that included a Slackware distribution CD. After
   reading the installation section, backing up my important data,
   learning (from the same book) how to use the fdisk program (I never
   thought I would have to re-partition my hard disk, so I had never
   bothered to learn how), and with a lot of prayer and courage, I
   installed the new OS in a second partition. Everything went really
   well, and I've been learning about Linux and system administration
   since that day. I have installed and re-installed different
   distributions several times (Slackware, Red Hat and S.u.S.E.),
   succeeding with some things but having to read more documentation
   and other sources to make others work. Sometimes I was frustrated
   (I couldn't use my CD-ROM drive, graphical user interface, sound,
   printer, modem, floppy, etc.); I had to read a lot and ask the
   Users Group questions. In the end, the results were really worth
   the effort.
   
   Frankly, I didn't abandon the wintel side. I kept using the
   well-known commercial office suite to make documents, along with
   commercial graphics programs, sound applications, my printer and so
   on; the Linux side was only for navigating the net. But two months
   ago my computer refused to start Windows; I couldn't even start
   DOS. I booted from a floppy, looked for the C:\ drive and found it.
   What had happened? I ran the scandisk program and got a message
   about sector damage and that the program couldn't repair the disk.
   Oh no! My hard disk was dead...
   
   -I can still use the guarantee, and the vendor will repair or
   replace the hard disk...- Well, the damage was caused by a power
   cut during a storm, and the guarantee doesn't cover that. The
   proposed solution was a low-level format, but that could leave the
   disk completely useless, so I took the computer back home, thinking
   I would have to wait until I had the money to buy a new disk.
   
   -Hey, wait a minute! I haven't tried Linux. It may work!- And it
   really did, perfectly. So I had to decide, and I did: I saved my
   information to floppy disks and gave the whole disk (2.1Gb) to
   Linux.
   
   Maybe some superior mind was trying to lead me to the light. I've
   been learning more about using the OS and its applications. Now I
   can print in full color on my Epson Stylus 400 (ghostscript,
   ghostview, apsfilter), play sounds, MIDI files and CD music on my
   PnP Yamaha sound card (the OSS/Linux sound driver), work with my
   .doc, .xls, .wks, .dbf, ... documents (StarOffice 4.0), manipulate,
   use and print all sorts of graphics files (the GIMP), and, of
   course, connect to the world, sending and receiving e-mail, faxes,
   files, etc. (Netscape, efax, ncftp). I even play some games.
   
   I can change the look and feel of the X Window environment whenever
   I want, keep my secret and important information safe from
   intruders (kids), render some strange 3D scene while I compile a
   new program for my system, update my share portfolio in the
   spreadsheet with data taken directly from the Internet, and
   download another new application from an FTP site.
   
   Next is learning about TV cards; I have one, and I want to watch my
   Wall Street news (MTV, Baywatch and The Nanny too) on my home
   computer again, but using Linux. I also want to learn how to set up
   a little home LAN using TCP/IP, for experimental purposes and maybe
   later for a small business.
   
   I don't worry about prices (all the applications I use came with
   the distributions or I got them from the net) or about the legal
   stuff; almost everything I have installed is free software (mostly
   under the GNU General Public License).
   
   Now I can solve a lot of problems at the office and have nice talks
   with the systems guys; I can understand everything they say (as I
   said, I'm not a programmer or a hacker: I buy and sell shares,
   screaming and pushing people all day, but we use computers to take
   the orders and register the trades). As the post on the BBS said,
   "...and here I am, happy, with Linux...".
   
   I'm still saving for another hard disk (and a UPS to prevent
   surprise power cuts)... I admit it: I'm (maybe) going to install a
   Win95 partition on that disk (it's easier for my wife to use, for
   now), but I can take my time, because my good old HDD was dead,
   and now it's alive again.
     _________________________________________________________________
   
                        Copyright  1998, René Tavera
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
                       Gaby and Notes-Mode Revisited
                                      
                   Two Small Personal Databases for Linux
                                      
                               By Larry Ayers
     _________________________________________________________________
   
                                Introduction
                                      
   Though many full-fledged SQL database systems exist for Linux, both
   commercial and Open Source, these large client-server applications are
   overkill for managing a single user's personal data. Personal
   information managers such as Lotus Organizer have long been popular
   with users of mainstream OS's, while Preston Brown's Korganizer (a
   QT-based Organizer clone) and Ical (a Tcl/Tk calendar application) are
   popular with many Linux users. These applications for the most part
   combine a PIM with calendar and scheduling features. In my case, I
   have little need for the calendar, etc., but I do have quite a bit of
   information which I would like to make more manageable. In keeping
   with the unix tradition of small, specialized tools designed for
   specific tasks, this article concerns two applications which can help
   a Linux user organize and make more accessible personal data.
   
                                    Gaby
                                      
   Gaby, written by Frederic Peters, started as a simple address-book
   application written with the GTK toolkit. The name is an acronym,
   originally standing for Gaby Address Book of Yesterday; after further
   evolution of the program the author decided that the acronym could be
   generalized to Generic Astute Base of Yesterday. The further
   development was a result of the author's realization that he had
   created a simple database framework which could be used for other
   types of data. The "of yesterday" in the acronyms I take to be an
   acknowledgement that Gaby uses semicolon-delimited ASCII text as its
   data-storage format rather than the more complex, less portable, and
   often binary formats common in the big database systems. ASCII text as
   a data format has been around for quite a few years, but still can be
   useful for even quite large databases; see issue 34 of the Gazette for
   an article about NoSQL, which uses tab-delimited ASCII text as its
   format.
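   One advantage of a delimiter-separated ASCII format is that any
   standard text tool can read it. As a quick sketch (the field layout
   below is invented for illustration and is not Gaby's actual
   schema), a semicolon-delimited record is trivially pulled apart
   with awk:

```shell
# A hypothetical semicolon-delimited address record; the field
# layout is invented for illustration, not Gaby's real schema.
record='Doe;John;555-1234;jdoe@example.com'

# Extract the third field (phone) using ';' as the field separator.
phone=$(echo "$record" | awk -F';' '{ print $3 }')
echo "$phone"
```

   The same one-liner works on a whole file of records, which is
   exactly the portability argument made above.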
   
   As installed, the executable gaby is symlinked to gbc. Invoking gbc
   starts up Gaby as a bookshelf organizer rather than as the default
   address-book. Gaby can display two different views of the user's data
   files, which are stored in the directory ~/.gaby.
   
   In the most recent version of Gaby (0.2.3 as of late November of 1998)
   a user can create any sort of database with whatever fields are
   appropriate. This is a new, not completely implemented feature and the
   documentation is scanty at this point, so I'll present a quick
   overview of how it can be done.
   
   Begin by creating a new empty directory called /etc/gaby. In this
   example I'm creating a database of prairie plants native to my
   area. In the Gaby source distribution is a sample template file
   named desc.gtest. Copy this file to /etc/gaby, then rename it so
   that the suffix relates in a mnemonic fashion to the subject-matter
   of your trial database. In this example I renamed the file with the
   command mv desc.gtest desc.plants. Edit this desc.[whatever] file,
   changing the field names to reflect the nature of your data.
   
   Next create a symbolic link in the /usr/local/bin directory (which is
   where Gaby is installed by default), linking gaby to plants (or
   whatever suffix you chose) with the command ln -s gaby plants. Now you
   can start Gaby by typing the name of your symlink and a customized
   Gaby window will appear with your new field names ready to be filled
   in.
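   The steps above can be collected into a small script. This sketch
   runs in a scratch directory so it needs no root privileges; on a
   real system the directories would be /etc/gaby and /usr/local/bin,
   and desc.gtest comes from the Gaby source distribution:

```shell
# Rehearse the custom-database setup in a temporary directory
# standing in for /etc/gaby and /usr/local/bin.
DEMO=$(mktemp -d)
mkdir "$DEMO/gaby" "$DEMO/bin"

# Stand-in for the sample template shipped in the Gaby sources.
printf 'name;latin_name;habitat\n' > "$DEMO/gaby/desc.gtest"

# Rename the template so the suffix names the new database.
mv "$DEMO/gaby/desc.gtest" "$DEMO/gaby/desc.plants"

# Stand-in for the installed gaby executable.
touch "$DEMO/bin/gaby"

# Symlink gaby to the chosen suffix; invoking 'plants' would then
# start Gaby with the custom field names.
ln -s gaby "$DEMO/bin/plants"
```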
   
   The default view is the Form window, which shows the first entry in
   the address or book data-file:
   
   Gaby Form Window
   
   Any of the entries can be viewed in this window by means of the icons
   or menu-items, and new items can be added. In the menu-bar of this
   window is a List menu-item, which allows the user to sort the various
   items alphabetically according to any of the fields. Another menu-item
   provides the ability to export a list to either LaTeX or HTML tabular
   format.
   
   The other window available is the List view, which is an overview or
   index of all entries in the file:
   
   Gaby List Window
   
   Gaby is a good example of a free software project which is
   beginning to gain momentum as users contribute enhancements and
   provide feedback. This naturally stimulates the developer to
   further augment the program. Gaby appeals to me because rather than
   being a fixed-function program, it can be extended by its users so
   that it can be used in ways not imagined by the author.
   
   The current release of Gaby can be obtained from the Gaby web-site.
     _________________________________________________________________
   
                            Notes-mode Revisited
                                      
   In issue 22 of the Gazette I reviewed an add-on mode for GNU Emacs
   called notes-mode. This useful editor extension was written by John
   Heidemann in an effort to bring order to his collections of academic
   notes. The core of this mode is a collection of Perl scripts, some of
   which are intended to be run automatically as a daily cron job (these
   index the files and establish internal links), while others time-stamp
   and initialize a new note.
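   A nightly cron entry is all the automation required; the script
   path below is a placeholder, since the actual name and location of
   the indexing script depend on where and how notes-mode was
   installed:

```
# Hypothetical crontab entry: rebuild the notes index every night
# at 4:02 am. The script path is a placeholder -- substitute the
# indexing script supplied with your notes-mode installation.
2 4 * * *   /home/user/notes-mode/index-notes
```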
   
   While I was impressed at the time of my initial review with
   notes-mode's capabilities, I didn't succeed in making it work with
   XEmacs, which is my preferred editor. Recently John Heidemann
   released version 1.16, which (thanks to contributions by Ramesh
   Govindan) now functions well with XEmacs. I've been using the mode
   extensively since then, and have found it to be useful beyond its
   intended purpose.
   
   Notes-mode was developed to help organize academic notes, but it
   serves me well as an organizer for notes on various subjects. Every
   day a new file can be initialized including whatever user-defined
   categories are desired. The system allows easy keyboard navigation
   between the various days' category entries, and a temporary buffer can
   be summoned composed of all entries under a selected heading. The
   effect is similar to using links in an HTML file, with the advantage
   that entries are devoid of mark-up tags and don't require a browser
   for viewing. Another HTML-like feature is external file-linking. Using
   code adapted from Bill Perry's W3 Emacs web-browser, an entry such as
   file:///home/layers/xxx.txt can be selected with the mouse or a
   keystroke, causing the file to be loaded into the Emacs session. PGP
   encryption of individual entries is also supported (using the
   MailCrypt Emacs/PGP interface).
   
   In a sense, Notes-mode is another sort of personal database optimized
   for subject- and date-based navigation. Its capabilities are
   orthogonal to those of Gaby. Notes-mode has the limitation of being
   fully useful only for users of Emacs or XEmacs, while Gaby can be run
   by anyone, though only in an X session. They both are ASCII-text
   based, ensuring that the data is fully portable and accessible by any
   editor or text-processing utility. Either or both of these programs
   can be invaluable to anyone needing to impose some order upon
   collections of information.
   
   Version 1.16 of Notes-mode can be downloaded from this WWW site.
   Complete documentation is included with the archive in several
   formats.
     _________________________________________________________________
   
   Last modified: Sun 29 Nov 1998
     _________________________________________________________________
   
                       Copyright  1998, Larry Ayers
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
                    Product Review: Partition Magic 4.0
                                      
                              By Ray Marshall
     _________________________________________________________________
   
   I recently used Partition Magic 4.0, and was quite impressed, although
   I did run into some interesting glitches.
   
                                Background:
                                      
   My machine was (and still is) partitioned like this:
   
     * FAT32, containing Win95
     * Extended partition
          + swap partition for Win95
          + swap partition for Linux
          + /dev/hda7 - /
          + /dev/hda8 - /usr
          + /dev/hda9 - /usr/src
          + /dev/hda10 - /usr/local
          + /dev/hda11 - /home
          + /dev/hda12 - /wrk
      * I also had some unused space above /dev/hda12 that I had
        previously been unable to utilize. Not much; less than 100K.
       
   I very rarely run Win95; I use Linux for everything I do at home.
   Professionally, I am a software/knowledge engineer, using several
   different flavors of UNIX every day.
   
                               Documentation:
                                      
   There was nothing in the PartitionMagic User Guide that was of any use
   to me. I opened it once, looking for references to either Linux or
   ext2 -- nothing in the Table of Contents -- nothing in the Index! I
   did find a few terse references, like "Ext2 is only used by Linux".
   
   While writing this, I decided to go through the PartitionMagic User
   Guide page-by-page to see what I could find. Besides those few
   references, I found in Chapter 3: Completing Hard Disk Operations,
   under Creating Partitions / Scenarios, a section titled Creating
   Linux Logical Partitions. Although this might be of some limited
   use to a neophyte, it might also lead them down a somewhat limiting
   path -- only a swap partition and one other Linux partition. But
   that's a judgment call, and beyond the scope of this article.
   
   Pasted onto the cover of the PartitionMagic User Guide was a
   sticker that said: "UPGRADE - PREVIOUS INSTALLATION REQUIRED". So I
   figured that PM would remove much of the old version, replacing it
   with the new one. I subsequently forgot about V3.0 until many hours
   later.
   
                               Installation:
                                      
   I booted Win95, and started the PM4.0 installation.
   
   The installation went smoothly enough. Running it, however, yielded a
   few surprises.
   
                                 Execution:
                                      
   First off, I was very pleasantly surprised and very impressed by
   the new GUI. There are several ways to select a partition and to
   manipulate it. I particularly LOVE the way one can just move the
   whole partition (within the available space) back and forth. It is
   very intuitive. I give PowerQuest five stars (*****) for the GUI!
   
   With the GUI up, I merrily proceeded to make all of my desired
   adjustments, asked PM to Analyze them, and was given the go-ahead to
   implement them.
   
   But, to my surprise, when all was said and done (including an
   auto-reboot and some complaints from my virus checker), only my
   Win95 C: partition was altered. :-( It was not very nice of PM to
   tell me that everything was OK and then make only ONE of my
   changes. It was also fortunate that I had decided to check the
   results with PM before rebooting to Linux. <heavy sigh>
   
   I proceeded to make all of the adjustments in the Extended
   partition. Notice that I said ALL adjustments: that meant changing
   the sizes and locations of every remaining partition. I only
   realized after the next (unexpected) reboot that I had again wasted
   more time -- only Win95's swap partition had actually been
   adjusted. :-(
   
   This time, though, I modified just my Linux swap and root
   partitions. When it was done, no reboot. <a BIG smile, this time>
   
   I then adjusted all of the rest of my Linux partitions! (Remember,
   this was the third time I had done them.) But my tests of patience
   were not over. While it was chunking away, I got several 120? error
   popups (I forget the last digit, maybe 4). This error is NOT in the
   User Guide. So I prayed that it wasn't serious, and clicked on OK.
   
   [Subsequently I have looked for that error on their web site. So far,
   I have not been able to find it.]
   
   About two thirds of the way through the implementation of my changes,
   all activity on the status window stopped, right in the middle of
   processing the /usr partition, where the bulk of Linux lives.
   Rebooting at that point would have been disastrous!
   
   Hoping that this was not one of those frequent Win95 unrecoverable
   hangs, I decided to go to the store -- I needed some groceries,
   anyway. And, I needed some fresh, cold, night air, in order to relax.
   
   I returned about 45 minutes later, only to find the status window
   exactly as I had left it. <What to do... What to do... Don't panic...
   Don't press that button...>
   
   I suddenly noticed that the "NUM LOCK" light was on, and since I never
   leave it that way, I automatically pressed the Num Lock key to turn it
   off. And, to my surprise, and extreme pleasure, the status started to
   change. <My neighbors might have heard THAT sigh of relief.>
   
   <More of those 1204 errors. Just press OK, and pray.>
   
   Finally it completed! It looked good. Now I had room in /usr to
   upgrade to RedHat 5.2. So I rebooted to Linux.
   
                            Rebooting to Linux:
                                      
   WHOOPS! Linux didn't come up! At the point where I should have seen a
   "LILO boot:" prompt, I only saw "LI", and everything stopped.
   Everything except the fans, of course. I tried another lilo diskette.
   Same thing.
   
   I tried the RedHat Boot Diskette (Release 5.1). It said that it
   didn't support the rescue operation and that I needed the diskettes
   I had created when I installed 5.1. I was sure glad I had made
   them, even though I had never had to use them before now.
   
   After a brief search, I finally found those diskettes. I tried the
   "Boot image" disk first. No good. I tried the "Primary Boot Disk"
   next, and cheers abounded! Linux was now up (and maybe my
   neighbors, too), although on a kernel with reduced functionality.
   But I was then able to rebuild my lilo diskette and reboot
   normally, with everything working as expected.
   
                             Additional notes:
                                      
   Remember my previous reference to "UPGRADE"? Well, I examined the
   /win partition from Linux, and I found that PM3.0 was still in the
   "Start Menu" and still used up 4.92 Meg of disk space in
   /win/pqmagic, i.e., it was still there. So the "upgrade" was
   actually an "install", and now I have 4.92 Meg of space wasted on
   my C: partition. I hope I remember to remove 3.0 when I reboot back
   to Win95 in another month or six.
   
   I also mounted the CD under Linux, and discovered that there is a
   LINUX directory. I wonder why I wasn't told about that before.
   
   Examining its contents, I discovered files named PQINST.SH and
   PQREADME.NOW. Reading them, I saw problems with both files.
   
      * In PQREADME.NOW it stated, "Please remember linux is case
        sensitive." And yet it refers to items on the CD using the
        wrong case. Just a couple of examples (one from each file):

    cp /pqtemp.ins/cdrom/linux/bootflpy.dat /dev/fd0
       should be

    cp /pqtemp.ins/cdrom/LINUX/BOOTFLPY.DAT /dev/fd0
       and

    cp /pqtemp/linux/bootflpy.dat /dev/fd0
       should be

    cp /pqtemp/LINUX/BOOTFLPY.DAT /dev/fd0
       
   I manually performed the cp commands (with the correct case). I
   then booted from the resulting diskettes to see what would happen.
   
                 Experience with the Linux boot diskettes:
                                      
   When I booted the "Boot Diskette", it turned out to be a form of DOS
   from Caldera.
   
   This experience was less than optimal. Before the GUI came up, it
   appeared to stop loading, and there was a sound coming out of my
   PC, something like a horse running in the distance. There was also
   a black rectangle in the middle of my screen. I suppose there was
   text in that rectangle, but it, too, must have been black.
   
   I pressed <return>, and there was a very brief pause in the sound, and
   the black rectangle flickered. So I pressed it many times, and
   eventually a slightly abbreviated form of the GUI appeared.
   
   Although most of the GUI was there, the helpers at the bottom were
   not. I guess that made sense, since there was no mouse pointer
   either. The lack of a mouse made it a bit cumbersome to use, i.e.,
   usable but not optimal -- especially without the ability to have it
   analyze my proposed changes.
   
   That strange sound, combined with the black rectangle, occurred
   several other times, while I was trying various features. Again, I
   pressed <return> and prayed, until the black rectangle went away.
   
   Since I had no idea what was happening when I just pressed <return>, I
   elected to just quit, and boot back to Linux without implementing my
   changes.
   
                              Trial with Wine:
                                      
   Wine is a Linux program within which one can run a lot of Win95
   programs. It is still under development, so many programs do not
   yet work, or they function with aberrant behavior.
   
   It took me a while to discover that PM's executable is:
   
    /win/Program Files/PowerQuest/PartitionMagic4/Win9X/Pm409x.exe

   When I tried it under Wine, it didn't run at all; quite literally,
   it crashed with a segfault. I suspect the problem is in Wine, or in
   something very unusual that PM does.
   
                                Conclusion:
                                      
   In spite of the problems I encountered, I still consider
   PartitionMagic4 an invaluable tool for the Linux community.
   
   For the average "user", i.e., those who just use the system as a
   tool and don't want anything to do with changing its
   configuration: it seems to me that they MIGHT need to use
   PartitionMagic just once, IF they didn't allocate their partitions
   adequately to begin with. But after that, they may never need it
   again. So, for them, I cannot in good conscience recommend the
   $69.95 (plus $6 shipping) expenditure. Besides, they might have
   much more difficulty getting rebooted back to Linux.
   
   But, for the hundreds (or maybe even thousands) of us who actually get
   into the system, move stuff around, and generally push the envelope of
   Linux, $69.95 is not really that much to pay, for the ease with which
   PartitionMagic allows one to adjust disk partition tables to meet
   changing needs.
   
   Since I had purchased version 3.0 almost two years ago, and therefore
   was able to upgrade for only $29.95 (plus $6 shipping), it was much
   easier to justify the expenditure.
   
   One final note: on the 8th of November (almost 3 weeks ago) I sent
   much of what I've documented above to Customer Service at
   PowerQuest, informing them that I was going to submit this to the
   Linux Gazette. I have yet to receive any reply.
     _________________________________________________________________
   
                       Copyright  1998, Ray Marshall
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
                      Quick and Dirty RAID Information
                                      
                            By Eugene Blanchard
     _________________________________________________________________
   
   If you are thinking about implementing Linux software RAID, here is
   the most important link that you should investigate before you
   start:
   
   Linas Vepstas' RAID page: http://linas.org/linux/raid.html
   
   The date of this posting is Oct 29/98, and the present RAID
   documentation is incomplete and confusing. This posting aims to
   clear up problems that you will encounter implementing raid0 and
   raid1.
   
   I wanted to implement mirroring over striping. Striping gives good
   read/write performance increases, and mirroring gives redundancy
   and read performance increases.
   
   I started with kernel 2.0.30 and implemented raid0 (striping). Then
   I upgraded my kernel to 2.0.35, and the fun began. After struggling
   to get raid0 working with 2.0.35, I tackled raid1. Well, guess
   what: throw everything you learned about RAID out the window and
   start from scratch! A good idea is to start simple: get raid0 up
   and running, then add raid1. The story begins:
   
  Raid0 (striping) with kernel 2.0.30
  
   Linear and raid0 (striping) modes have been in the kernel since
   2.x. You have to recompile your kernel with multiple-device (md)
   support. I recommend compiling it into the kernel to start; you
   will have enough problems without implementing it as a module.
   
   To check whether you have multiple-device support installed, run
   dmesg | more and look to see whether the md driver is loaded and
   raid0 is registered (I can't remember the exact phrase -- late at
   night ;-( ).
   
   Or type cat /proc/mdstat to see the status of your md devices. You
   should see /dev/md0 to /dev/md3 listed as inactive.
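   These checks can be scripted defensively. A sketch: on a system
   without md support, /proc/mdstat won't exist, so the script reports
   that instead of failing:

```shell
# Report the md driver status without failing on systems that
# lack md support entirely.
if [ -r /proc/mdstat ]; then
    MD_STATUS=$(cat /proc/mdstat)
else
    MD_STATUS="no /proc/mdstat: md support is not compiled in"
fi
echo "$MD_STATUS"

# Boot messages can also be searched for the md driver and the
# registered raid personalities (dmesg may need root, hence the guard).
dmesg 2>/dev/null | grep -i -e ' md' -e raid || true
```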
   
   Strangely, the md tools (mdtools-0.35) are not usually supplied
   with distributions. These are the tools required to set up the RAID
   array and to start and stop it.
   
   You can find them in the Slackware distribution (23k in size) at:
   
   http://sunsite.unc.edu/pub/Linux/distributions/slackware/slakware/ap1/
   md.tgz
   
   Download to /usr/local/src then:

cd /
tar -zxvf /usr/local/src/md.tgz

   It will put the files in the correct place.
sbin/mdadd
sbin/mdcreate
usr/etc/mdtab
install/doinst.sh
usr/man/man5/mdtab.5.gz
usr/man/man8/mdadd.8.gz
usr/man/man8/mdcreate.8.gz
usr/doc/md/COPYING
usr/doc/md/ChangeLog
usr/doc/md/README
usr/doc/md/md_FAQ

   Read through the README file (ignoring the warnings, of course).
   The documentation is quite good for kernel 2.0.30 and linear/raid0
   mode. The Linux Journal (June or July 1998) has an excellent
   article on how to implement raid0 (striping); it is what piqued my
   interest.
   
   The Linux Gazette has another article that helps:
   http://www.ssc.com/lg/issue17/raid.html
   
   You should start the RAID array before fsck -a, usually located in
   /etc/rc.d/rc.S for Slackware distributions, and should stop the
   RAID array in both /etc/rc.d/rc.0 and rc.6. (BTW, since they are
   identical files in Slackware, can't we just make one a soft link to
   the other and modify only one?)
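   In outline, the boot-time ordering might look like the following
   rc.S fragment. This is a sketch: mdadd and mdstop are the mdtools
   commands, and the -a flags shown here (operate on every array
   listed in /etc/mdtab) should be checked against the mdtools README
   for your version:

```
# Fragment for /etc/rc.d/rc.S -- activate md devices BEFORE fsck:
/sbin/mdadd -ar          # add and run every array in /etc/mdtab

fsck -a                  # ...then the normal filesystem check

# And in /etc/rc.d/rc.0 and rc.6, stop the arrays before halting:
/sbin/mdstop -a
```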
   
   To check whether it is working, type cat /proc/mdstat; it should
   indicate what state each md device is in (e.g., /dev/md0 running
   raid0 using /dev/sda1 and /dev/sdb1).
   
   Test, test, test your raid. Shutdown, power-up, see if it is working
   like you expected.
   
   I did some fancy copying, using cp -rap to copy complete directory
   structures to the RAID arrays, then modified /etc/fstab to include
   the new devices.
   
   Swap partitions do not need to be striped by md: the kernel
   automatically stripes them if the same swap priority is assigned to
   each one. Check the Software-RAID mini-HOWTO and its Bonehead
   question section for details. It is amazingly simple.
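   The mechanism is the pri= swap option in /etc/fstab: when two swap
   partitions on different disks share the same priority, the kernel
   interleaves pages across them, which amounts to striping with no md
   device involved. A sketch (the device names are examples):

```
# /etc/fstab fragment: two equal-priority swap partitions on
# different disks are striped automatically by the kernel.
/dev/sda2   none   swap   sw,pri=1   0   0
/dev/sdb2   none   swap   sw,pri=1   0   0
```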
   
  Implement UPS NOW!
  
   If you lose power (the AC line), you can lose your RAID array and
   any data on it! You should install a UPS backup power supply. The
   purpose of the UPS is to keep your system running for a short
   period of time during brownouts and power failures. The UPS should
   inform your system through a serial port that the power has failed.
   A daemon running in the background monitors the serial port; when
   it is informed that there is a power failure, it waits a preset
   period of time (usually 5 minutes) and then performs a system
   shutdown. The idea is that after 5 minutes without power, the power
   will probably be down for a long time.
   
   Most Linux distributions come with the basic UPS daemon, powerd;
   check "man powerd" for more info. It is a simple daemon hooked into
   /etc/inittab under the entries for what happens when the power
   fails. Basically, a dumb UPS just closes a relay contact connected
   to the serial port. powerd watches for the contact to close; if it
   does, powerd warns users, can send e-mail to root, and shuts down
   the PC after a predetermined time.
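   powerd is hooked in through init's powerfail and powerokwait
   actions in /etc/inittab; a typical pair of entries (the five-minute
   delay matches the scheme described above) looks like:

```
# /etc/inittab fragment: run on power failure, and cancel the
# shutdown if power returns before the deadline.
pf::powerfail:/sbin/shutdown -h +5 "Power failure -- shutting down in 5 minutes"
pg::powerokwait:/sbin/shutdown -c "Power restored -- shutdown cancelled"
```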
   
   I used an APC Smart-UPS that communicates through the serial port.
   There is an excellent daemon called apcupsd that works like a
   charm; it is located at the URL below. Please read the notice and
   sympathize with the author -- he has done an excellent job (kudos
   to him!). The installation is painless and the documentation is
   excellent.
   
   http://www.dyer.vanderbilt.edu/server/apcupsd/
   
  RAID0 and 2.0.31 to 2.0.34
  
   I don't have a clue; I upgraded straight from 2.0.30 to 2.0.35,
   because 2.0.35 is the latest stable release.
   
  RAID0 and Kernel 2.0.35
  
   The mdtools compiled perfectly on my home machine (testbed running
   2.0.30) but would not compile on my work machine (upgraded to
   2.0.35). I kept getting an error about MD_Version (can't remember
   the exact name) not being defined. After a lot of head scratching
   and searching, I found that /usr/src/include/md.h contains the
   version number of the md driver. With kernel 2.0.30 it was version
   0.35; with 2.0.35 it is version 0.36. Running "mdadd -V" will
   indicate the version of md that mdadd will work with. So I had the
   wrong mdtools version. Here is the location of the correct version:
   
   ftp://ftp.kernel.org/pub/linux/daemons/raid/raidtools-0.41.tar.gz
   
   Download to /usr/local/src then

tar -zxvf raidtools-0.41.tar.gz

   A new directory will be made /usr/local/src/raidtools-0.41
   
   Change to the new directory and read the INSTALL file, then
./configure

   I can't remember if I had to run make and make install after this,
   and I can't duplicate it now that I've upgraded to a new raid patch.
   
   You should now have new mkraid and mdadd binaries. Type mdadd -V to
   check that your binaries are updated; it should respond with
   something like "mdadd 0.3d compiled for raidtools-0.41". Then read
   QuickStart.RAID for the latest info. For raid0, not much has changed
   from the previous versions.
   
  RAID1 and Kernel 2.0.35
  
   You must patch the kernel to enable RAID 1, 4 and 5. The patch is
   located at
   
   ftp://ftp.kernel.org/pub/linux/daemons/raid/alpha/raid0145-19981005-c-
   2.0.35.tz
   
   Copy to /usr/src directory and uncompress the patch:
tar -zxvf raid0145-19981005-c-2.0.35.tz

   Note that the patch expects a /usr/src/linux-2.0.35 directory. If
   you have your 2.0.35 source installed as /usr/src/linux, move it and
   soft link /usr/src/linux to the new location:

mv /usr/src/linux /usr/src/linux-2.0.35
ln -s /usr/src/linux-2.0.35 /usr/src/linux
   
   To apply the patch, in /usr/src:
patch -p0 <raid0145-19981005-C-2.0.35

   (Somewhere along the way the lowercase c in the filename got changed
   to an uppercase C on my system, maybe after tar?)
   
   You now get to recompile the kernel. When you select multiple devices,
   you will see options for raid 1, 4 and 5 available. So the steps are

make menuconfig (or config or xconfig)
make clean
make dep
make zImage
make modules            (if you are using modules)
make modules_install

   Copy the new kernel to wherever your distribution looks for it (/ or
   /boot). I suggest that you keep a base kernel that works without
   raid as well as a raid kernel. You can modify lilo.conf to let you
   select which kernel you want to boot. It's not difficult at all, but
   at first glance it looks terrifying. Check /usr/lib/lilo for good
   examples and documentation.
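   A minimal lilo.conf sketch with two selectable kernels might look
   like this. The kernel file names and root device are assumptions;
   adjust them for your system, and remember to rerun lilo after
   editing:

```
# /etc/lilo.conf excerpt - two kernels, chosen at the boot prompt
boot=/dev/sda
prompt
timeout=50
image=/boot/vmlinuz          # base kernel without raid
        label=linux
        root=/dev/sda1
        read-only
image=/boot/vmlinuz-raid     # kernel built with the raid patch
        label=raid
        root=/dev/sda1
        read-only
```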
   
   Check dmesg | more to see if the md drivers are loaded and raid0 and
   raid1 are registered. Type cat /proc/mdstat to see if you have the
   new md driver; you should see 16 md devices instead of 4.
   
   You will have to upgrade your raidtools. mdadd, /etc/mdtab and
   mdcreate are obsolete as well as a bunch of others. The new tools are
   raidstart, /etc/raidtab and mkraid. At this point the documentation is
   well out of date.
   
   ftp://ftp.kernel.org/pub/linux/daemons/raid/alpha/raidtools-19981005-B
   -0.90.tar.gz
   
   Download to /usr/local/src then

tar -zxvf raidtools-19981005-B-0.90.tar.gz

   This will make a new directory /usr/local/src/raidtools-0.90. Change
   to it and

./configure

   Again, I can't remember if I had to run make and make install after
   this.
   
  New Simpler Method for RAID0 and Kernel 2.0.35
  
   Steps to make a raid0 array /dev/md0 using two scsi drives /dev/sda1
   and /dev/sdb1:
    1. Partition /dev/sda1 and /dev/sdb1 so that they have identical
       block sizes.
     2. Set the partition type to 0xfd. This is used by the new kernel
        to autodetect the raid on startup.
    3. Modify the /etc/raidtab file as per this example (the examples
       supplied with the raidtools are missing some important info):

        # Striping example
        # /dev/md0 using /dev/sda1 and /dev/sdb1
        
        raiddev /dev/md0
                raid-level              0
                nr-raid-disks           2
                persistent-superblocks  1
                nr-spare-disks          0
                chunk-size              32
                device                  /dev/sda1
                raid-disk               0
                device                  /dev/sdb1
                raid-disk               1
    4. Type mkraid -f /dev/md0 IMPORTANT - Read the error message and
       FOLLOW the directions explicitly!
    5. cat /proc/mdstat to see if the md device was made correctly
    6. Format the new raid device by mke2fs -c /dev/md0
    7. Make a directory to mount to (like /raidtest) just to test if it
       works.
    8. mount /dev/md0 /raidtest
    9. Copy a file to /raidtest to see if you can. If you have individual
       LEDs on your hard-drives, you should see both drives working.
    10. Reboot and see if the kernel automatically shuts down the raid
        device md0. You should see some messages scroll past the
        screen. (Anyone know how to read these shutdown messages, like
        "dmesg"?)
    11. Check that on rebooting, the kernel autodetects the raid
        devices and /dev/md0 comes up as a raid0 array. If not, check
        that each of the previous steps has been followed, especially
        steps 2 and 4.
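   The command half of the steps above can be sketched as a short
   script. This is a dry run that only prints each command rather than
   executing it (a sketch, not a tested script); remove the run wrapper
   to perform the real, destructive steps as root:

```shell
#!/bin/sh
# Dry-run sketch of steps 4-8 above: each command is printed, not executed.
run() { echo "# would run: $*"; }

run mkraid -f /dev/md0         # step 4: build the array (read any error message!)
run cat /proc/mdstat           # step 5: confirm the md device exists
run mke2fs -c /dev/md0         # step 6: format, checking for bad blocks
run mkdir /raidtest            # step 7: a mount point just for testing
run mount /dev/md0 /raidtest   # step 8: mount it and try copying a file
```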
       
  New Method for RAID1 and Kernel 2.0.35
  
   Steps to make a raid1 array /dev/md2 using two striped pairs
   /dev/md0 (/dev/sda1 + /dev/sdb1) and /dev/md1 (/dev/sdc1 +
   /dev/sdd1):
    1. Follow the steps above for /dev/md0 and duplicate for /dev/md1.
       IMPORTANT - You don't mount or mke2fs /dev/md0 and /dev/md1. This
       was only to test if the raid0 worked!
    2. Modify the /etc/raidtab file as per this example (the examples
       supplied with the raidtools are missing some important info):

        # Striping example
        # /dev/md0 using /dev/sda1 and /dev/sdb1
        
        raiddev /dev/md0
                raid-level              0
                nr-raid-disks           2
                persistent-superblocks  1
                nr-spare-disks          0
                chunk-size              32
                device                  /dev/sda1
                raid-disk               0
                device                  /dev/sdb1
                raid-disk               1

        # /dev/md1 using /dev/sdc1 and /dev/sdd1
        
        raiddev /dev/md1
                raid-level              0
                nr-raid-disks           2
                persistent-superblocks  1
                nr-spare-disks          0
                chunk-size              32
                device                  /dev/sdc1
                raid-disk               0
                device                  /dev/sdd1
                raid-disk               1

        # Mirror example
        # /dev/md2 using /dev/md0 and /dev/md1
        
        raiddev /dev/md2
                raid-level              1
                nr-raid-disks           2
                persistent-superblocks  1
                nr-spare-disks          0
                chunk-size              32
                device                  /dev/md0
                raid-disk               0
                device                  /dev/md1
                raid-disk               1
     3. Type "mkraid -f /dev/md2" IMPORTANT - Read the error message
        and FOLLOW the directions explicitly! This step will take a
        while as the disks are synced together (over 30 minutes).
     4. cat /proc/mdstat to see if the md devices were made correctly
     5. Format the new raid device by mke2fs -c /dev/md2
     6. Make a directory to mount to (like /raidtest_mirror)
     7. mount /dev/md2 /raidtest_mirror
     8. Copy a file to /raidtest_mirror to see if you can. If you have
        individual LEDs on your hard-drives, you should see all drives
        working.
    9. Add raidstart /dev/md2 to your /etc/rc.d/rc.s file just before
       fsck -a. A good place is right after swapon -a. At this time, the
       kernel does not autodetect raid1. This will be added to the next
       patch.
    10. Modify /etc/fstab to mount /dev/md2 as /raidtest_mirror.

        /dev/md2        /raidtest_mirror        ext2    defaults        1       1
    11. Reboot and see if the kernel automatically shuts down the raid
        devices md0, md1 and md2. You should see some messages scroll
        past the screen. (Anyone know how to read these shutdown
        messages, like "dmesg"?)
    12. Check that on rebooting, the kernel autodetects the raid
        devices and that /dev/md0 and /dev/md1 both come up as raid0
        arrays. Check that /dev/md2 is detected as a raid1 array.
   13. cat /proc/mdstat to see if the md devices were made correctly.
       
   You should now have a raid1-over-raid0 array running.
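   Step 9's edit to the boot script amounts to something like the
   following excerpt. The file name and the /sbin paths are
   illustrative and vary by distribution; this is not a drop-in file:

```
# /etc/rc.d/rc.S excerpt - start the mirror before any filesystem checks
/sbin/swapon -a              # swap comes up first
/sbin/raidstart /dev/md2     # bring up the raid1 array by hand
/sbin/fsck -a                # then the usual filesystem checks
```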
   
   Other resources that you may want to look at if you run into trouble:
    1. The linux raid archives:
       http://www.linuxhq.com/lnxlists/linux-raid/
    2. Post a news message to comp.os.linux.setup
    3. Search www.dejanews.com - archive site of the past 5 years of news
       postings
    4. Absolutely last if you are really stuck, e-mail the Linux RAID
       Mailing List. To send an enquiry, e-mail
       linux-raid@vger.rutgers.edu
       To join the kernel RAID list, e-mail majordomo@vger.rutgers.edu
       and put in the body of the message subscribe linux-raid
    5. Don't e-mail me, everything I know is recorded here!
     _________________________________________________________________
   
                     Copyright  1998, Eugene Blanchard
           Published in Issue 35 of Linux Gazette, December 1998
     _________________________________________________________________
   
     _________________________________________________________________
   
                          Linux Gazette Back Page
                                      
           Copyright  1998 Specialized Systems Consultants, Inc.
For information regarding copying and distribution of this material see the
                              Copying License.
     _________________________________________________________________
   
  Contents:
  
     * About This Month's Authors
     * Not Linux
     _________________________________________________________________
   
                         About This Month's Authors
     _________________________________________________________________
   
    Larry Ayers
    
   Larry lives on a small farm in northern Missouri, where he is
   currently engaged in building a timber-frame house for his family. He
   operates a portable band-saw mill, does general woodworking, plays the
   fiddle and searches for rare prairie plants, as well as growing
   shiitake mushrooms. He is also struggling with configuring a Usenet
   news server for his local ISP.
   
    Eugene Blanchard
    
   Eugene is an Instructor at the Southern Alberta Institute of
   Technology in Calgary, Alberta, Canada where he teaches electronics,
   digital, microprocessors, data communications, and operating
   systems/networking in the Novell, Windows and UNIX worlds. When he is
   not spending quality time with his wonderful wife and 18 month old
   daughter, he can be found in front of his Linux box. His hobbies are
   hiking, backpacking, bicycling and chess.
   
    Jim Dennis
    
   Jim is the proprietor of Starshine Technical Services. His
   professional experience includes work in the technical support,
   quality assurance, and information services (MIS) departments of
   software companies like Quarterdeck, Symantec/ Peter Norton Group, and
   McAfee Associates -- as well as positions (field service rep) with
   smaller VAR's. He's been using Linux since version 0.99p10 and is an
   active participant on an ever-changing list of mailing lists and
   newsgroups. He's just started collaborating on the 2nd Edition for a
   book on Unix systems administration. Jim is an avid science fiction
   fan -- and was married at the World Science Fiction Convention in
   Anaheim.
   
    Jeremy Dinsel
    
   Jeremy is an almost-graduate of California University of Pennsylvania,
   where he studies computer science and operates the Math and Computer
   Science Linux server. He welcomes questions and comments and
   encourages western Pennsylvanians to join WPLUG--a Linux organization
   (http://sighsy.cup.edu/~dinselj/wplug/).
   
    Miguel de Icaza
    
   Miguel is one of the GNU Midnight Commander authors as well as a
   developer of GNOME. He also worked on the Linux/SPARC kernel port. He
   can be reached via e-mail at miguel@gnu.ai.mit.edu.
   
    David Jao
    
   David is a gradual student working on his Ph.D. in Mathematics at
   Harvard University. With no practical programming experience, he has
   nevertheless managed to use Linux as his primary operating system for
   two years already. When he's not thinking about math, he is busy
   devising a master plan to rid Harvard of its institutional dependence
   on unstable computers running crashy proprietary programs.
   
    Ron Jenkins
    
   Ron has over 20 years experience in RF design, satellite systems, and
   UNIX/NT administration. He currently resides in Central Missouri where
   he will be spending the next 6 to 8 months recovering from knee
   surgery and looking for some telecommuting work. Ron is married and
   has two stepchildren.
   
    Ray Marshall
    
   Ray is a Software/Knowledge Engineer at NORTEL, working in RTP North
   Carolina, or telecommuting from his home in New Hampshire. He has over
   30 years of experience with software design, development and
   maintenance, having started out writing hardware diagnostics in the
   mid '60s. He has only worked with PCs and Linux for a little over 2
   years, and now enjoys repaying the assistance he received, by
   assisting others. He can sometimes be found on Undernet #LinuxHelp.
   Beyond his professional interests he also enjoys various philosophical
   pursuits, and singing, and can be seen on the Inner Voices web page.
   
    René Tavera
    
   René is a shares trader at Value Casa de Bolsa, México, D. F. When not
   working or learning about computer systems, he spends time with his
   wife or playing the guitar.
     _________________________________________________________________
   
                                 Not Linux
     _________________________________________________________________
   
   Thanks to all our authors, not just the ones above, but also those who
   wrote giving us their tips and tricks and making suggestions. Thanks
   also to our new mirror sites.
   
   I've had a fun and family-filled Thanksgiving, so have much to be
   thankful for this year. Hope this is true for all of you reading this
   too.
   
   Seattle is cold and rainy today and I am feeling a bit blue. Think
   that means it is time to shut it down and go home for awhile.
   
   Have fun!
     _________________________________________________________________
   
   Marjorie L. Richardson
   Editor, Linux Gazette, gazette@ssc.com
     _________________________________________________________________
   
     _________________________________________________________________
   
   Linux Gazette Issue 35, December 1998, http://www.linuxgazette.com
   This page written and maintained by the Editor of Linux Gazette,
   gazette@ssc.com
