It is a hard errand to pick a solid certification questions/answers resource with regard to review, reputation and validity, because individuals get scammed by picking the wrong service. Killexams.com guarantees to serve its customers best with regard to exam dump updates and validity. Most other companies' scam-report complaints drive customers to us, and those customers pass their exams cheerfully and effortlessly. We never compromise on our review, reputation and quality, because the killexams review, killexams reputation and killexams customer confidence are important to us. In particular we take care of the killexams.com review, killexams.com reputation, killexams.com scam-report grievances, killexams.com trust, killexams.com validity, killexams.com report and killexams.com scam. If you see any false report posted by our rivals under a name like "killexams scam report grievance web," "killexams.com scam report," "killexams.com scam," "killexams.com protestation" or something like this, simply remember that there are always bad individuals damaging the reputation of good services for their own advantage. There are a great many satisfied clients that pass their exams using killexams.com brain dumps, killexams PDF questions, killexams questions and the killexams exam simulator. Visit killexams.com, try our sample questions and test brain dumps and our exam simulator, and you will see that killexams.com is the best brain dumps site.
VCS-256 Dumps and Practice Software with Real Questions
We are well aware that a major issue in the IT industry is the lack of quality study materials. Our exam prep material gives you everything you need to take a certification exam. Our Veritas VCS-256 exam will give you exam questions with verified answers that reflect the real exam — high caliber and value for the VCS-256 exam. We at killexams.com are determined to enable you to pass your VCS-256 exam with high scores.
We keep our specialists working continuously on gathering real test questions for VCS-256. All the pass4sure questions and answers for VCS-256 gathered by our crew are verified and updated by our Veritas-certified team. We keep in touch with candidates who have appeared in the VCS-256 exam to get their reviews of it; we collect VCS-256 exam tips and tricks, their experience of the techniques used in the actual VCS-256 exam and the errors they made in the actual test, and then improve our braindumps accordingly.
Once you go through our pass4sure questions and answers, you will feel confident about all the topics of the test and find that your knowledge has been greatly improved. These are not merely practice questions; they are real test questions and answers, sufficient to pass the VCS-256 exam on the first attempt.
killexams.com Discount Coupons and Promo Codes are as under:
WC2017 : 60% Discount Coupon for all exams on website
PROF17 : 10% Discount Coupon for Orders greater than $69
DEAL17 : 15% Discount Coupon for Orders over $99
SEPSPECIAL : 10% Special Discount Coupon for all Orders
The only way to succeed in the Veritas VCS-256 exam is to obtain reliable preparatory materials. We guarantee that killexams.com is the most direct pathway toward the Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux certificate. You will succeed with full confidence. You can view free questions at killexams.com before you buy the VCS-256 exam products. Our simulated tests are multiple-choice, the same as the real exam pattern. The questions and answers are created by certified professionals, and they give you the experience of taking the real test. 100% guarantee to pass the VCS-256 actual test.
killexams.com Veritas certification study guides are prepared by IT professionals. Many students have complained that there are too many questions in so many practice exams and study guides and that they are simply too tired to afford any more. killexams.com experts have worked out this comprehensive version while still guaranteeing that all the knowledge is covered after deep research and analysis — everything to make things convenient for candidates on their road to certification.
We have tested and approved VCS-256 exams. killexams.com provides the most accurate and latest IT exam materials, covering almost all knowledge points. With the aid of our VCS-256 study materials, you don't need to waste your time reading piles of reference books; you just need to spend 10-20 hours to master our VCS-256 real questions and answers. We provide both a PDF version and a software version of the exam questions and answers. The software version lets candidates simulate the Veritas VCS-256 exam in a realistic environment.
We provide free updates. Within the validity period, if the VCS-256 exam materials you have purchased are updated, we will inform you by email to download the latest version. If you don't pass your Veritas Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux exam, we will give you a full refund: you need to send us the scanned copy of your VCS-256 exam report card, and after confirming it we will quickly issue a full refund.
If you prepare for the Veritas VCS-256 exam using our testing engine, it is easy to succeed in all certifications on the first attempt. You don't have to deal with all the dumps or any free torrent/rapidshare stuff. We offer a free demo of each IT certification dump, so you can check the interface, question quality and usability of our practice exams before you decide to buy.
Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux
Pass4sure VCS-256 dumps | Killexams.com VCS-256 real questions | https://www.textbookw.com/
January 24, 2000Web posted at: 12:11 p.m. EST (1711 GMT)
by John Bass and James Robinson, Network World Test Alliance
(IDG) -- It all boils down to what you're looking for in a network operating system (NOS).
Do you want it lean and flexible so you can install it any way you please? Perhaps administration bells and management whistles are what you need so you can deploy several hundred servers. Or maybe you want an operating system that's robust enough that you sleep like a baby at night?
The good news is that there is a NOS waiting just for you. After the rash of recent software revisions, we took an in-depth look at four of the major NOSes on the market: Microsoft's Windows 2000 Advanced Server, Novell's NetWare 5.1, Red Hat Software's Linux 6.1 and The Santa Cruz Operation's (SCO) UnixWare 7.1.1. Sun declined our invitation to submit Solaris because the company says it's working on a new version.
Microsoft's Windows 2000 edges out NetWare for the Network World Blue Ribbon Award. Windows 2000 tops the field with its management interface, server monitoring tools, storage management facilities and security measures.
However, if it's performance you're after, no product came close to Novell's NetWare 5.1's numbers in our exhaustive file service and network benchmarks. With its lightning-fast engine and Novell's directory-based administration, NetWare offers a great basis for an enterprise network.
We found the latest release of Red Hat's commercial Linux bundle led the list for flexibility because its modular design lets you pare down the operating system to suit the task at hand. Additionally, you can create scripts out of multiple Linux commands to automate tasks across a distributed environment.
While SCO's UnixWare seemed to lag behind the pack in terms of file service performance and NOS-based administration features, its scalability features make it a strong candidate for running enterprise applications.
The numbers are in
Regardless of the job you saddle your server with, it has to perform well at reading and writing files and sending them across the network. We designed two benchmark suites to measure each NOS in these two categories. To reflect the real world, our benchmark tests cover a wide range of server conditions.
NetWare was the hands-down leader in our performance benchmarking, taking first place in two-thirds of the file tests and earning top billing in the network tests.
Red Hat Linux followed NetWare in file performance overall and even outpaced the leader in file tests where the read/write loads were small. However, Linux did not perform well handling large loads - those tests in which there were more than 100 users. Under heavier user loads, Linux had a tendency to stop servicing file requests for a short period and then start up again.
Windows 2000 demonstrated poor write performance across all our file tests. In fact, we found that its write performance was about 10% of its read performance. After consulting with both Microsoft and Client/Server Solutions, the author of the Benchmark Factory testing tool we used, we determined that the poor write performance could be due to two factors. One, which we were unable to verify, might be a possible performance problem with the SCSI driver for the hardware we used.
More significant, though, was an issue with our test software. Benchmark Factory sends a write-through flag in each of its write requests that is supposed to cause the server to update cache, if appropriate, and then force a write to disk. When the write to disk occurs, the write call is released and the next request can be sent.
At first glance, it appeared as if Windows 2000 was the only operating system to honor this write-through flag because its write performance was so poor. Therefore, we ran a second round of write tests with the flag turned off.
With the flag turned off, NetWare's write performance increased by 30%. This test proved that Novell does indeed honor the write-through flag and will write to disk for each write request when that flag is set. But when the write-through flag is disabled, NetWare writes to disk in a more efficient manner by batching together contiguous blocks of data in the cache and writing all those blocks to disk at once.
Likewise, Red Hat Linux's performance increased by 10% to 15% when the write-through flag was turned off. When we examined the Samba file system code, we found that it too honors the write-through flag. The Samba code then finds an optimum time during the read/write sequence to write to disk.
This second round of file testing proves that Windows 2000 is dependent on its file system cache to optimize write performance. The results of the testing with the write-through flag off were much higher - as much as 20 times faster. However, Windows 2000 still fell behind both NetWare and Red Hat Linux in the file write tests when the write-through flag was off.
SCO honors the write-through flag by default, since its journaling file system is constructed to maximize data integrity by writing to disk for all write requests. The results in the write tests with the write-through flag on were very similar to the test results with the write-through flag turned off.
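The cost of write-through behavior is easy to demonstrate on any POSIX system: forcing each write to reach the disk (for example with the O_SYNC open flag) bypasses the benefit of the file system cache. This is a minimal sketch of the effect, not part of the original benchmark suite; the block count and sizes are arbitrary:

```python
import os
import tempfile
import time

def timed_writes(sync: bool, n: int = 100, size: int = 4096) -> float:
    """Write n blocks of `size` bytes and return the elapsed time.
    With sync=True, O_SYNC makes each os.write() a write-through:
    the call returns only after the data reaches stable storage."""
    fd, path = tempfile.mkstemp()
    os.close(fd)
    flags = os.O_WRONLY | (os.O_SYNC if sync else 0)
    fd = os.open(path, flags)
    block = b"x" * size
    start = time.perf_counter()
    for _ in range(n):
        os.write(fd, block)
    os.close(fd)
    elapsed = time.perf_counter() - start
    os.unlink(path)
    return elapsed

if __name__ == "__main__":
    # On most hardware the write-through run is dramatically slower,
    # mirroring the gap the benchmark saw with the flag on vs. off.
    print(f"buffered:      {timed_writes(sync=False):.4f}s")
    print(f"write-through: {timed_writes(sync=True):.4f}s")
```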
For the network benchmark, we developed two tests. Our long TCP transaction test measured the bandwidth each server can sustain, while our short TCP transaction test measured each server's ability to handle large numbers of network sessions with small file transactions.
Despite a poor showing in the file benchmark, Windows 2000 came out on top in the long TCP transaction test. Windows 2000 is the only NOS with a multithreaded IP stack, which allows it to handle network requests with multiple processors. Novell and Red Hat say they are working on integrating this capability into their products.
NetWare and Linux also registered strong long TCP test results, coming in second and third, respectively.
In the short TCP transaction test, NetWare came out the clear winner. Linux earned second place despite its lack of support for abortive TCP closes, a method by which an operating system can quickly tear down TCP connections. Our testing software, Ganymede Software's Chariot, uses abortive closes in its TCP tests.
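An abortive close is conventionally requested through the SO_LINGER socket option with a zero timeout, which makes close() send an RST instead of walking through the normal FIN handshake. A small sketch of the mechanism on a loopback connection (this illustrates the socket-level technique, not Chariot's actual implementation):

```python
import socket
import struct

def abortive_close_demo() -> str:
    """Abortively close one end of a TCP connection and observe the
    reset on the peer."""
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    # l_onoff=1, l_linger=0: close() discards unsent data and sends RST
    conn.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    conn.close()
    try:
        cli.recv(1)
        outcome = "graceful close"
    except ConnectionResetError:
        outcome = "connection reset"
    cli.close()
    srv.close()
    return outcome
```

The payoff is that the closing side skips the TIME_WAIT state entirely, which is why a benchmark opening thousands of short-lived sessions relies on it.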
Moving into management
As enterprise networks grow to require more servers and support more end users, NOS management tools become crucial elements in keeping networks under control. We looked at the management interfaces of each product and drilled down into how each handled server monitoring, client administration, file and print management, and storage management.
We found Windows 2000 and NetWare provide equally useful management interfaces.
Microsoft Management Console (MMC) is the glue that holds most of the Windows 2000 management functionality together. This configurable graphical user interface (GUI) lets you snap in Microsoft and third-party applets that customize its functionality. It's a two-paned interface, much like Windows Explorer, with a nested list on the left and selection details on the right. The console is easy to use and lets you configure many local server elements, including users, disks, and system settings such as time and date.
MMC also lets you implement management policies for groups of users and computers using Active Directory, Microsoft's new directory service. From the Active Directory management tool inside MMC, you can configure users and change policies.
The network configuration tools are found in a separate application that opens when you click on the Network Places icon on the desktop. Each network interface is listed inside this window. You can add and change protocols and configure, enable and disable interfaces from here without rebooting.
NetWare offers several interfaces for server configuration and management. These tools offer duplicate functionality, but each is useful depending on where you are trying to manage the system from. The System Console offers a number of tools for server configuration. One of the most useful is NWConfig, which lets you change start-up files, install system modules and configure the storage subsystem. NWConfig is simple, intuitive and predictable.
ConsoleOne is a Java-based interface with a few graphical tools for managing and configuring NetWare. Third-party administration tools can plug into ConsoleOne and let you manage multiple services. We think ConsoleOne's interface is a bit unsophisticated, but it works well enough for those who must have a Windows-based manager.
Novell also offers a Web-accessible management application called NetWare Management Portal, which lets you manage NetWare servers remotely from a browser, and NWAdmin32, a relatively simple client-side tool for administering Novell Directory Services (NDS) from a Windows 95, 98 or NT client.
Red Hat's overall systems management interface is called LinuxConf and can run as a graphical or text-based application. The graphical interface, which resembles that of MMC, works well but has some layout issues that make it difficult to use at times. For example, when you run a setup application that takes up a lot of the screen, the system resizes the application larger than the desktop size.
Still, you can manage pretty much anything on the server from LinuxConf, and you can use it locally or remotely over the Web or via telnet. You can configure system parameters such as network addresses; file system settings and user accounts; and set up add-on services such as Samba - which is a service that lets Windows clients get at files residing on a Linux server - and FTP and Web servers. You can apply changes without rebooting the system.
Overall, Red Hat's interface is useful and the underlying tools are powerful and flexible, but LinuxConf lacks the polish of the other vendors' tools.
SCO Admin is a GUI-based front end for about 50 SCO UnixWare configuration and management tools in one window. When you click on a tool, it brings up the application to manage that item in a separate window.
Some of SCO's tools are GUI-based while others are text-based. The server required a reboot to apply many of the changes. On the plus side, you can manage multiple UnixWare servers from SCOAdmin.
SCO also offers a useful Java-based remote administration tool called WebTop that works from your browser.
An eye on the servers and clients
One primary administration task is monitoring the server itself. Microsoft leads the pack in how well you can keep an eye on your server's internals.
The Windows 2000 System Monitor lets you view a real-time, running graph of system operations, such as CPU and network utilization, and memory and disk usage. We used these tools extensively to determine the effect of our benchmark tests on the operating system. Another tool called Network Monitor has a basic network packet analyzer that lets you see the types of packets coming into the server. Together, these Microsoft utilities can be used to compare performance and capacity across multiple Windows 2000 servers.
NetWare's Monitor utility displays processor utilization, memory usage and buffer utilization on a local server. If you know what to look for, it can be a powerful tool for diagnosing bottlenecks in the system. Learning the meaning of each of the monitored parameters is a bit of a challenge, though.
If you want to look at performance statistics across multiple servers, you can tap into Novell's Web Management Portal.
Red Hat offers the standard Linux command-line tools for monitoring the server, such as iostat and vmstat. It has no graphical monitoring tools.
As with any Unix operating system, you can write scripts to automate these tools across Linux servers. However, these tools are typically cryptic and require a high level of proficiency to use effectively. A suite of graphical monitoring tools would be a great addition to Red Hat's Linux distribution.
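Wrapping these cryptic tools in scripts is straightforward; for example, a sketch that turns one vmstat report into a dictionary keyed by column name, which a monitoring script could collect from each server. The sample output below is illustrative, not captured from the test systems:

```python
def parse_vmstat(output: str) -> dict:
    """Parse a single vmstat report: the second line holds column
    names, the third line the corresponding values."""
    lines = output.strip().splitlines()
    headers = lines[1].split()
    values = [int(v) for v in lines[2].split()]
    return dict(zip(headers, values))

SAMPLE = """\
procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 1  0      0 804312  12344 403128    0    0     5     9   30   55  2  1 97  0  0
"""

stats = parse_vmstat(SAMPLE)
# stats["free"], stats["us"] and stats["id"] now give free memory
# and the user/idle CPU split for this sample.
```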
UnixWare also offers a number of monitoring tools. System Monitor is UnixWare's simple but limited GUI for monitoring processor and memory utilization. The sar and rtpm command-line tools together list real-time system utilization of buffer, CPUs and disks. Together, these tools give you a good overall idea of the load on the server.
Along with managing the server, you must manage its users. It's no surprise that the two NOSes that ship with an integrated directory service topped the field in client administration tools.
We were able to configure user permissions via Microsoft's Active Directory and the directory administration tool in MMC. You can group users and computers into organizational units and apply policies to them.
You can manage Novell's NDS and NetWare clients with ConsoleOne, NWAdmin or NetWare Management Portal. Each can create users, manage file space, and set permissions and rights. Additionally, NetWare ships with a five-user version of Novell's ZENworks tool, which offers desktop administration services such as hardware and software inventory, software distribution and remote control services.
Red Hat Linux doesn't offer much in the way of client administration features. You must control local users through Unix permission configuration mechanisms.
UnixWare is similar to Red Hat Linux in terms of client administration, but SCO provides some Windows binaries on the server to remotely set file and directory permissions from a Windows client, as well as create and change users and their settings. SCO and Red Hat offer support for the Unix-based Network Information Service (NIS). NIS is a store for network information like logon names, passwords and home directories. This integration helps with client administration.
Handling the staples: File and print
A NOS is nothing without the ability to share file storage and printers. Novell and Microsoft collected top honors in these areas.
You can easily add and maintain printers in Windows 2000 using the print administration wizard, and you can add file shares using Active Directory management tools. Windows 2000 also offers Distributed File Services, which let you combine files on more than one server into a single share.
Novell Distributed Print Services (NDPS) let you quickly incorporate printers into the network. When NDPS senses a new printer on the network, it defines a Printer Agent that runs on the printer and communicates with NDS. You then use NDS to define the policies for the new printer.
You define NetWare file services by creating and then mounting a disk volume, which also manages volume policies.
Red Hat includes Linux's printtool utility for setting up server-connected and network printers. You can also use this GUI to create printcap entries to define printer access.
Linux has a set of command-line file system configuration tools for mounting and unmounting partitions. Samba ships with the product and provides some integration for Windows clients. You can configure Samba only through a cryptic configuration ASCII file - a serious drawback.
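The configuration file in question is smb.conf. A share definition is only a few lines, but there is no GUI to generate it; everything is edited by hand. An illustrative entry (the share name and path are made up for this example):

```ini
[engineering]
   comment = Engineering file share
   path = /export/engineering
   read only = no
   browseable = yes
```

Windows clients then see the server's shares in Network Neighborhood once the Samba daemons are running.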
UnixWare provides a flexible GUI-based printer setup tool called Printer SetUp Manager. For file and volume management, SCO offers a tool called VisionFS for interoperability with Windows clients. We used VisionFS to allow our NT clients to access the UnixWare server. This service was easy to configure and use.
Windows 2000 provides the best tools for storage management. Its graphical Manage Disks tool for local disk configuration includes software RAID management; you can dynamically add disks to a volume set without having to reboot the system. Additionally, a signature is written to each of the disks in an array so that they can be moved to another 2000 server without having to configure the volume on the new server. The new server recognizes the drives as members of a RAID set and adds the volume to the file system dynamically.
NetWare's volume management tool, NWConfig, is easy to use, but it can be a little confusing to set up a RAID volume. Once we knew what we were doing, we had no problems formatting drives and creating a RAID volume. The tool looks a little primitive, but we give it high marks for functionality and ease of use.
Red Hat Linux offers no graphical RAID configuration tools, but its command line tools made RAID configuration easy.
To configure disks on the UnixWare server, we used the Veritas Volume Manager graphical disk and volume administration tool that ships with UnixWare. We had some problems initially getting the tool to recognize the drives so they could be formatted. We managed to work around the disk configuration problem using an assortment of command line tools, after which Volume Manager worked well.
While we did not probe these NOSes extensively to expose any security weaknesses, we did look at what they offered in security features.
Microsoft has made significant strides with Windows 2000 security. Windows 2000 supports Kerberos public key certificates as its primary authentication mechanism within a domain, and allows additional authentication with smart cards. Microsoft provides a Security Configuration tool that integrates with MMC for easy management of security objects in the Active Directory Services system, and a new Encrypting File System that lets you designate volumes on which files are automatically stored using encryption.
Novell added support for a public-key infrastructure into NetWare 5 using a public certificate schema developed by RSA Security that lets you tap into NDS to generate certificates.
Red Hat offers a basic Kerberos authentication mechanism. With Red Hat Linux, as with most Unix operating systems, the network services can be individually controlled to increase security. Red Hat offers Pluggable Authentication Modules as a way of allowing you to set authentication policies across programs running on the server. Passwords are protected with a shadow file. Red Hat also bundles firewall and VPN services.
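PAM policies live in per-service files under /etc/pam.d/; each line names a module type, a control flag and the module that enforces it, so the same policy stack can be applied to login, FTP or any other program. A simplified example of the kind of stack Red Hat ships (the exact module list varies by release):

```
auth      required  pam_unix.so
account   required  pam_unix.so
password  required  pam_unix.so shadow
session   required  pam_unix.so
```

The `shadow` argument on the password line is what routes password hashes into the protected shadow file mentioned above.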
UnixWare has a set of security tools called Security Manager that lets you set up varying degrees of intrusion protection across your network services, from no restriction to turning all network services off. It's a good management time-saver, though you could manually modify the services to achieve the same result.
Stability and fault tolerance
The most feature-rich NOS is of little value if it can't keep a server up and running. Windows 2000 offers software RAID 0, 1 and 5 configurations to provide fault tolerance for onboard disk drives, and has a built-in network load-balancing feature that allows a group of servers to look like one server and share the same network name and IP address. The group decides which server will service each request. This not only distributes the network load across several servers, it also provides fault tolerance in case a server goes down. On a lesser scale, you can use Microsoft's Failover Clustering to provide basic failover services between two servers.
As with NT 4.0, Windows 2000 provides memory protection, which means that each process runs in its own segment.
There are also backup and restore capabilities bundled with Windows 2000.
Novell has an add-on product for NetWare called Novell Cluster Services that allows you to cluster as many as eight servers, all managed from one location using ConsoleOne, NetWare Management Portal or NWAdmin32. But Novell presently offers no clustering products to provide load balancing for applications or file services. NetWare has an elaborate memory protection scheme to segregate the memory used for the kernel and applications, and a Storage Management Services module to provide a highly flexible backup and restore facility. Backups can be all-inclusive, cover parts of a volume or store a differential snapshot.
Red Hat provides a load-balancing product called Piranha with its Linux. This package provides TCP load balancing between servers in a cluster. There is no hard limit to the number of servers you can configure in a cluster. Red Hat Linux also provides software RAID support through command line tools, has memory protection capabilities and provides a rudimentary backup facility.
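The scheduling at the heart of such a TCP load balancer can be as simple as round-robin over the servers in the cluster. A minimal sketch of the idea (server names are hypothetical, and real balancers add health checking and connection counting):

```python
from itertools import cycle

class RoundRobinBalancer:
    """Hand out back-end servers in strict rotation, so each new TCP
    connection lands on the next server in the cluster."""

    def __init__(self, servers):
        if not servers:
            raise ValueError("cluster must contain at least one server")
        self._servers = cycle(servers)

    def pick(self) -> str:
        return next(self._servers)

lb = RoundRobinBalancer(["web1", "web2", "web3"])
assignments = [lb.pick() for _ in range(6)]
# With three servers, every server receives every third connection.
```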
SCO provides an optional feature to cluster several servers in a load-balancing environment with Non-Stop Clustering for a high level of fault tolerance. Currently, Non-Stop Clustering supports six servers in a cluster. UnixWare provides software RAID support that is managed using SCO's On-Line Data Manager feature. All the standard RAID levels are supported. Computer Associates' bundled ArcServeIT 6.6 provides backup and restore capabilities. UnixWare has memory protection capabilities.
Because our testing was conducted before Windows 2000's general availability ship date, we were not able to evaluate its hard-copy documentation. The online documentation provided on a CD is extensive, useful and well-organized, although a Web interface would be much easier to use if it gave more than a couple of sentences at a time for a particular help topic.
NetWare 5 comes with two manuals: a detailed manual for installing and configuring the NOS with good explanations of concepts and features along with an overview of how to configure them, and a small spiral-bound booklet of quick start cards. Novell's online documentation is very helpful.
Red Hat Linux comes with three manuals - an installation guide, a getting-started guide and a reference manual - all of which are easy to follow.
Despite being the most difficult product to install, UnixWare offers the best documentation. It comes with two manuals: a system handbook and a getting-started guide. The system handbook is a reference for conducting the installation of the operating system. It does a good job of reflecting this painful experience. The getting-started guide is well-written and well-organized. It covers many of the tools needed to configure and maintain the operating system. SCO's online documentation looks nice and is easy to follow.
The bottom line is that these NOSes offer a wide range of characteristics and provide enterprise customers with a great deal of choice regarding how each can be used in any given corporate network.
If you want a good, general-purpose NOS that can deliver enterprise-class services with all the bells and whistles imaginable, then Windows 2000 is the strongest contender. However, for high-performance, enterprise file and print services, our tests show that Novell leads the pack. If you're willing to pay a higher price for scalability and reliability, SCO UnixWare would be a safe bet. But if you need an inexpensive alternative that will give you bare-bones network services with decent performance, Red Hat Linux can certainly fit the bill.
The choice is yours.
Bass is the technical director and Robinson is a senior technical staff member at Centennial Networking Labs (CNL) at North Carolina State University in Raleigh. CNL focuses on performance, capacity and features of networking and server technologies and equipment.
ActiveBatch Gets Blackberry Functionality
Administrators have long been able to receive pages when servers go down, but now they can restart servers with their pagers. Advanced Systems Concepts Inc. has added the Blackberry line of pagers to its list of clients for the ActiveBatch Job Scheduling and Management System.
The ActiveBatch Wireless Client is a module for the management software that enables administrators to monitor systems and initiate processes from the Blackberry. ActiveBatch Job Scheduling and Management System allows users to set up calendars to initiate processes such as backups or printing, or initiate processes from remote clients.
Ben Rosenberg, CEO of Advanced Systems, says the company chose to support the Blackberry first because it was the handheld best suited for round-the-clock monitoring. "The battery life is three weeks, and it's always on," he says. Advanced Systems supports both the pager-sized and PDA-sized Blackberries.
If a system sends out an SNMP alert, administrators can configure the system to forward an e-mail to a Blackberry, alerting the administrator. The e-mail gives the administrator the option to initiate processes, such as rebooting a server, through the Blackberry. "With the Blackberry, e-mails are always actionable by you," Rosenberg says.
Rosenberg sees two advantages to system management through wireless devices. First, it obviates the need to give instructions over the phone to a less experienced operator. Second, high-level administrators who travel can keep an eye on the system. "If you're on the road, you're able to know if something is wrong," he says. With both advantages, administrators will be better able to guarantee uptime, with less impact on their lives.
In addition to the three levels of encryption standards on Blackberry devices, ActiveBatch provides additional security features, such as a password login to the system. This keeps random users, including thieves, from wreaking havoc on corporate systems. "Use of ActiveBatch is always secure," Rosenberg says.
ActiveBatch can manage Windows, OpenVMS and Unix-based systems with an agent on each server. The agent sends information to a central Windows console. The software integrates with Windows Management Instrumentation, which also serves as an SNMP provider. ActiveBatch provides three plug-ins for remote clients: e-mail, browser and now the Blackberry.
Rosenberg says Advanced Systems is working to bring ActiveBatch to PocketPC handhelds. He says that although users can already use them with the browser-based system, the company will adapt the system to better meet the needs and limitations of the PocketPC platform.
Contact: Advanced Systems Concepts Inc., www.advsyscon.com, (201) 798-6400
SafeStone Provides iSeries Support to RSA Security
Security management provider SafeStone Technologies plc. has added iSeries 400 features to an existing partnership with RSA Security Inc. Under the enhanced agreement, SafeStone is making RSA’s SecurID authentication appliance usable on an iSeries 400 platform.
Using its DetectIT Agent 400 interface, SafeStone is enabling two-factor authentication. Two-factor authentication requires an individual to be verified twice before access to systems is allowed.
DetectIT is an offering designed by SafeStone to protect iSeries 400 exit points from unauthorized user access to confidential data, applications and resources within an open-connectivity environment.
Through DetectIT, RSA's iSeries-based users will be able to leverage software solutions for auditing, data and system management, e-business security, and application and access control for single or multiple networked iSeries 400s.
As part of its agreement with RSA, SafeStone will act as RSA's IBM iSeries business partner, handling all sales and support responsibility for DetectIT. In this role, SafeStone, which is also an IBM partner for systems management and development, will offer DetectIT to RSA's customers as either a standalone or fully integrated offering.
Contact: RSA Security, Inc., www.rsa.com, (781) 301-5000
SafeStone Technologies plc, www.safestone.com
Vendors Get Linux Itanium-Ready
With Intel Corp.'s May release of its 64-bit Itanium processor, Linux vendors are lining up to support the new architecture. Red Hat Inc., TurboLinux Inc., SuSE AG and Caldera International Inc. have all formally released distributions for Itanium.
To coincide with the announcement, TurboLinux released its Operating System 7 for the Itanium processor. "It's production-ready," says Thrane Jensen, product manager for Itanium. However, Jensen admits that many users will use early Itanium machines for testing and development rather than in production environments for now.
Bill Claybrook, research director for Linux and open source at the Aberdeen Group, confirms that "most people are waiting for McKinley." He believes that users will wait for Intel to release McKinley, its second-generation IA-64 processor, before they integrate IA-64 into their environments. "They're being a little bit leery of it [in] a production environment," he says. Jensen says TurboLinux is already working on its McKinley version of Linux.
Jensen says that porting Linux to the IA-64 processor had its challenges. The 64-bit nature of the processor created challenging issues for moving applications over to the new chip. "Dependencies on 32-bit create problems," he says. Some applications relied on specific 32-bit features that did not exist in Itanium. For the most part, applications could be recompiled for the chip. "In general, it's along the same code line," he says, "but the kernel has [a lot of] different stuff."
In addition to the core operating system, Jensen says many popular Linux applications are also ready for prime time. Apache and other commonly used applications are production-ready, but "ISVs are going to be doing more application development," he says.
Red Hat released its Red Hat Linux 7.1 for the Itanium processor in mid-June. Using the 2.4 kernel, Red Hat positions the new release as a platform for testing 64-bit applications ported from 32-bit and RISC machines. The distribution is also suited to enterprise server needs; it runs on up to eight processors and offers new configuration tools for BIND, Apache and printing.
At the same time, Linux vendor SuSE released an Itanium-specific distribution. SuSE Linux 7.2 for IA-64 uses six CD-ROMs to carry over 1,500 applications for the emerging platform. Like Red Hat, the company bills the package as a solution for evaluating and deploying Itanium-based servers.
Although a preview version was already available from the Caldera FTP site at ftp.caldera.com/ia64, Caldera released two new versions in May, accompanied by a public announcement. The final production version of OpenLinux Server 64 should be available late in the third quarter.
Biff Traber, senior vice president and general manager of the server business line at Caldera, says Caldera has little to lose by waiting to release a production version. Customers will look to the distribution for evaluation purposes, so a beta release meets their needs. "It's a combination of testing, development and prototyping," he says.
The Trillian project, which initiated development of a Linux kernel for the Itanium processor, first released a kernel in February 2000, predating Itanium's general availability by over a year. Intel was aggressive in getting prototype chips to developers to ensure a market, providing hardware, remote servers and emulators so that open source developers could have Linux ready for the release date.
The project later changed its name to the more formal-sounding IA-64 Linux Project and worked to further the development of Linux on Itanium. Itanium is not the first 64-bit platform to run Linux; there were already flavors of Linux for Sun Microsystems Inc.'s Sparc processor and Compaq Computer Corp.'s Alpha. In addition to the distributors, the IA-64 Linux Project also boasted hardware vendors Hewlett-Packard Co., IBM Corp., Silicon Graphics Inc., VA Linux Systems Inc. and NEC Corp., as well as Intel and the Swiss research laboratory CERN.
This chapter is from the book
If we consider filesystems as a mechanism for both storing and locating data, then the two key elements for any filesystem are the items being stored and the list of where those items are. The deeper details of how a given filesystem manipulates its data and meta-information go beyond the scope of this chapter but are addressed further in Appendix B, "Anatomy of a Filesystem."
Filesystem Components That the Admin Needs to Know About
As always, we need to get a handle on the vocabulary before we can understand how the elements of a filesystem work together. The next three sections describe the basic components with which you, as a sysadmin, need to be familiar.
The most intuitively obvious components of a filesystem are, of course, its files. Because everything in UNIX is a file, special functions are differentiated by file type. There are fewer file types than you might imagine, as Table 3.2 shows.
Table 3.2 File Types and Purposes, with Examples

Directory
    Purpose: maintains information for directory structure

Block special device
    Purpose: buffered device file

Character special device
    Purpose: raw device file

UNIX domain socket
    Purpose: interprocess communication (IPC)
    Example: see output of commands for files; Linux: netstat -x, Solaris: netstat -f unix

Named pipe special (FIFO device)
    Purpose: first-in, first-out IPC mechanism, invoked by name
    Example: Linux: /dev/initctl; Solaris: /etc/utmppipe, /etc/cron.d/FIFO

Symbolic link
    Purpose: pointer to another file (any type)
    Example: /usr/tmp -> ../var/tmp

Regular file
    Purpose: all other files; holds data of all other types
    Example: text files, object files, database files, executables/binaries
Notice that directories are a type of file. The key is that they have a specific format and contents (see Appendix B for more details). A directory holds the filenames and index numbers (see the following section, "Inodes") of all its constituent files, including subdirectories.
Directory files are not flat (or regular) files, but are indexed (like a database), so that you can still locate a file quickly when you have a large number of files in the same directory.13
Even though file handling is generally transparent, it is important to remember that a file's data blocks14 may not be stored sequentially (or even in the same general disk region). When data blocks are widely scattered in an uncoordinated manner, access times can suffer and I/O overhead increases.
Meta-information about files is stored in structures called index nodes, or inodes. Their contents vary based on the particular filesystem in use, but all inodes contain the following information about the file they index:15
Inode identification number
Owners: user and group
ctime: last file status change time
mtime: last data modification time16
atime: last access time
Physical location information for data blocks
Notice that the filename is not stored in the inode, but as an entry in the file's closest parent directory.
All other information about a file that ls displays is stored in an inode somewhere. With a few handy options, you can pull out lots of useful information. Let's say that you want to know the inode number of the Solaris kernel.17 You just give the -i option, and voilà:
[sun:10 ~]ls -i /kernel/genunix
Of course, ls -l is an old friend, telling you most everything that you want to know. Looking at the Solaris kernel again, you get the output in Figure 3.4.
Figure 3.4 Diagrammed Output of ls on a File
Notice that the timestamp shown by default in a long listing is mtime. You can pass various options to ls to view ctime and atime instead. For other nifty permutations, see the ls man page.
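To see the three timestamps side by side, you can run ls with different flags against a throwaway file. This sketch assumes the GNU coreutils versions of ls found on Red Hat; the scratch file is purely illustrative:

```shell
# Compare the three inode timestamps ls can display.
f=$(mktemp)               # throwaway file for demonstration
ls -l  "$f"               # default long listing shows mtime
ls -lc "$f"               # -c: show ctime (last status change) instead
ls -lu "$f"               # -u: show atime (last access) instead
rm -f "$f"
```

On a freshly created file all three timestamps match; edit or read the file and run the listings again to watch them diverge.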
File Permissions and Ownership Refresher
Because UNIX was designed to support many users, the question naturally arises of who can see which files. The first and simplest answer is simply to permit users to examine only their own files. This, of course, would make it difficult, if not impossible, to share, creating great difficulties in collaborative environments and causing a string of other problems: "Why can't I run ls? Because the system created it, not you" is only the most obvious example of such problems.
Users and Groups
UNIX uses a three-part system to determine file access: there's what you, as the file owner, are allowed to do; there's what the group is allowed to do; and there's what other people are allowed to do. Let's see what Elvis's permissions look like:
[ elvis@frogbog elvis ]$ ls -l
drwxr-xr-x 5 elvis users 4096 Dec 9 21:55 Desktop
drwxr-xr-x 2 elvis users 4096 Dec 9 22:00 Mail
-rw-r--r-- 1 elvis users 36 Dec 9 22:00 README
-rw-r--r-- 1 elvis users 22 Dec 9 21:59 ThisFile
drwxr-xr-x 2 elvis users 4096 Dec 12 19:57 arc
drwxr-xr-x 2 elvis users 4096 Dec 10 00:40 songs
-rw-r--r-- 1 elvis users 46 Dec 12 19:52 tao.txt
-rw-r--r-- 1 elvis users 21 Dec 9 21:59 thisfile
-rw-r--r-- 1 elvis users 45 Dec 12 19:52 west.txt
As long as we're here, let's break down exactly what's being displayed. First, we have a 10-character string of letters and hyphens. This is the representation of permissions, which I'll break down in a minute. The second item is a number, usually a single digit: the number of hard links to that file. I'll discuss this later in this chapter. The third thing is the username of the file owner, and the fourth is the name of the file's group. The fifth column is a number representing the size of the file, in bytes. The sixth contains the date and time of last modification for the file, and the final column shows the filename.
Every user on the system has a username and a number that is associated with that user. This number generally is referred to as the UID, short for user ID. If a user has been deleted but, for some reason, his files remain, the username is replaced with that user's UID. Similarly, if a group is deleted but still owns files, the GID (group number) shows up instead of a name in the group field. There are also other circumstances in which the system can't correlate the name and the number, but these should be relatively rare occurrences.
As a user, you can't change the owner of your files: this would open up some serious security holes on the system. Only root can chown files, but if he makes a mistake, you can now ask root to chown the files to you. As a user, you can chgrp a file to a different group of which you are a member. That is, if Elvis is a member of a group named users and a group named elvis, he can chgrp elvis west.txt or chgrp users west.txt, but because he's not a member of the group beatles, he can't chgrp beatles west.txt. A user can belong to any number of groups. Generally (although this varies more or less by flavor), newly created files belong to the group to which the directory belongs. On most modern UNIX variants, the group that owns new files is whatever group is listed as your primary group by the system in the /etc/passwd file and can be changed via the newgrp command. On these systems, Elvis can newgrp users if he wants his files to belong to the users group, or he can newgrp elvis if he wants his files to belong to the elvis group.
So, what were those silly strings of letters and hyphens at the beginning of each long directory listing? I already said that they represented the permissions of the file, but that's not especially helpful. The 10 characters of that string represent the permission bits for each file. The first character stands alone, and the last nine are three very similar groups of three characters. I'll explain each of these in turn.
If you look back at Elvis's long listing of his directory, you'll see that most of the files simply have a hyphen as the first character, whereas several have a d in this field. The more astute reader might note that the files with a d in that first field all happen to be directories. There's a good reason for this: the first permissions character denotes whether that file is a special file of one sort or another.
What's a special file? It's either something that isn't really a file (in the sense of a sequential stream of bytes on a disk) but that UNIX treats as a file, such as a disk or a video display, or something that is really a file but that is treated differently. A directory, by necessity, is a stream of bytes on disk, but that d means that it's treated differently.
The next three characters represent what the user who owns the file can do with it. From left to right, these permissions are read, write, and execute. Read permission is just that: the ability to see the contents of a file. Write permission implies not only the right to change the contents of a file, but also the right to delete it. If I do not have write permission to a file, rm not_permitted.txt fails.
Execute permission determines whether the file is also a command that can be run on the system. Because UNIX sees everything as a file, all commands are stored in files that can be created, modified, and deleted like any other file. The computer then needs a way to tell what can and can't be run. The execute bit does this.
Another important reason to care about whether a file is executable is that some programs are designed to be run only by the system administrator: these programs can modify the computer's configuration or can be dangerous in some other way. Because UNIX enables you to specify permissions for the owner, the group, and other users, the execute bit enables the administrator to restrict the use of dangerous programs.
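The effect of the execute bit is easy to demonstrate with a throwaway script. This is a minimal sketch; the filename is whatever mktemp hands back:

```shell
# A file becomes runnable only once its execute bit is set.
script=$(mktemp)
printf '#!/bin/sh\necho hello\n' > "$script"
chmod u+x "$script"       # grant execute permission to the owner
"$script"                 # prints: hello
rm -f "$script"
```

Before the chmod, attempting to run the script would fail with a "Permission denied" error even though its contents are perfectly readable.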
Directories handle the execute permission differently. If a directory does not grant execute permission to a user (or group, or other users on the system), that user can't cd into the directory and can't look at information about the files in it. (You usually can still find the names of the files, however.) Even if you have permissions for the files in that directory, you generally can't look at them. (This varies more or less by platform.)
The second set of three characters is the group permissions (read, write, and execute, in that order), and the final set of three characters is what other users on the system are permitted to do with that file. Because of security concerns (whether due to other users on your system or to pervasive networks such as the Internet), giving write access to other users is highly discouraged.
Great, you can now read the permissions in the directory listing, but what can you do with them? Let's say that Elvis wants to make his directory readable only by himself. He can chmod go-rwx ~/songs: that means remove the read, write, and execute permissions for the group and others on the system. If Elvis decides to let Nashville artists take a look at his material but not change it (and if there's a group nashville on the system), he can first chgrp nashville songs and then chmod g+r songs.
If Elvis does this, however, he'll find that (at least on some platforms) members of group nashville still can't look at the files. Oops! With a simple chmod g+x songs, the problem is solved:
[ elvis@frogbog elvis ]$ ls -l
drwxr-xr-x 5 elvis users 4096 Dec 9 21:55 Desktop
drwxr-xr-x 2 elvis users 4096 Dec 9 22:00 Mail
-rw-r--r-- 1 elvis users 36 Dec 9 22:00 README
-rw-r--r-- 1 elvis users 22 Dec 9 21:59 ThisFile
drwxr-xr-x 2 elvis users 4096 Dec 12 19:57 arc
drwxr-x--- 2 elvis nashvill 4096 Dec 15 14:21 songs
-rw-r--r-- 1 elvis users 46 Dec 12 19:52 tao.txt
-rw-r--r-- 1 elvis users 21 Dec 9 21:59 thisfile
-rw-r--r-- 1 elvis users 45 Dec 12 19:52 west.txt
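The same dance can be reproduced on any scratch directory. This sketch assumes GNU coreutils, whose stat %A format prints the permission string that ls -l would show:

```shell
# Close a directory to everyone but the owner, then reopen it to the group.
d=$(mktemp -d)
chmod go-rwx "$d"         # owner only
stat -c '%A' "$d"         # prints: drwx------
chmod g+rx "$d"           # group members may list and enter it again
stat -c '%A' "$d"         # prints: drwxr-x---
rmdir "$d"
```

Note that g+rx grants both read (to list the contents) and execute (to enter the directory), matching the fix Elvis needed above.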
In addition to the read, write, and execute bits, there are special permissions used by the system to determine how and when to suspend the normal permission rules. Any thorough understanding of UNIX requires an understanding of the setuid, setgid, and sticky bits. For normal system users, only a general understanding of these is necessary, and this discussion is thus brief. Good documentation on this topic exists elsewhere for budding system administrators and programmers.
The setuid bit applies only to executable files and directories. In the case of executable programs, it means that the given program runs as though the file owner were running it. For example, xhextris, a variant on Tetris, has the following permissions on my system:
1 games games 32516 May 18 1999 /usr/X11R6/bin/xhextris
There's a pseudouser called games on the system, which can't be logged into and has no home directory. When the xhextris program executes, it can read and write the files that only the games pseudouser normally would be permitted. In this case, there's a high-score file stored on the system that is writeable only by that user. When Elvis runs the game, the system acts as though he were the user games, and thus he is able to store the high score. To set the setuid bit on a file, you can tell chmod to give it mode u+s. (You can think of this as "uid set," although this isn't technically accurate.)
The setgid bit, which stands for "set group ID," works almost identically to setuid, except that the system acts as though the user's group is that of the given file. If xhextris had used setgid games instead of setuid games, the high score would be writeable to any directory owned by the group games. It is used by the system administrator in ways fundamentally similar to the setuid permission.
When applied to directories on Linux, Irix, and Solaris (and probably most other POSIX-compliant UNIX flavors as well), the setgid bit means that new files are given the parent directory's group rather than the user's primary or current group. This can be useful for, say, a directory for fonts built by (and for) a given program. Any user might generate the fonts via a setgid command that writes to a setgid directory. setgid on directories varies by platform; check your documentation. To set the setgid bit, you can tell chmod to use g+s (gid set).
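You can watch the setgid bit appear in the permission string on a scratch directory (GNU stat assumed; the group column will be whatever your primary group happens to be):

```shell
# The "s" in the group triplet marks a setgid directory.
d=$(mktemp -d)
chmod 755 "$d"
chmod g+s "$d"            # tell chmod to set the gid bit
stat -c '%A' "$d"         # prints: drwxr-sr-x
rmdir "$d"
```

Any file subsequently created inside such a directory inherits the directory's group on the platforms named above.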
Although a file in a group- or world-writeable directory without the sticky bit can be deleted by anyone with write permission for that directory (user, group, or other), a file in a directory with the sticky bit set can be deleted only by the file's owner or root. This is particularly useful for creating temporary directories or scratch space that can be used by anyone without one's files being deleted by others. You can use +t in chmod to give something the sticky bit.
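The sticky bit shows up as a trailing t in the listing, as a quick experiment on a scratch directory confirms (GNU coreutils assumed):

```shell
# /tmp-style setup: world-writeable, but delete-protected by the sticky bit.
d=$(mktemp -d)
chmod a+rwx "$d"          # open the directory to everyone
chmod +t "$d"             # add the sticky bit
stat -c '%A' "$d"         # prints: drwxrwxrwt
rmdir "$d"
```

This is exactly the configuration you will find on /tmp on most systems.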
Like almost everything else on UNIX, permissions have a number associated with them. Permissions are generally written as a group of four digits, each between 0 and 7. Each of those digits represents a group of three permissions, each of which is a yes/no answer. From left to right, those digits represent special permissions, user permissions, group permissions, and other permissions.
So, About Those Permission Bits...
Most programs reading permission bits expect four digits, although often only three are given. Shorter numbers are filled in with leading zeros: 222 is treated as 0222, and 5 is treated as 0005. The three rightmost digits are, as previously mentioned, user (owner) permissions, group permissions, and other permissions, from left to right.
Each of these digits is calculated in the following manner: read permission has a value of 4, write permission has a value of 2, and execute permission has a value of 1. Simply add these values together, and you've got that permission digit. Read, write, and execute would be 7; read and write without execute would be 6; and no permission to do anything would be 0. Read, write, and execute for the file owner, with read and execute for the group and nothing at all for anyone else, would be 750. Read and write for the user and group, but only read for others, would be 664.
The special permissions are 4 for setuid, 2 for setgid, and 1 for sticky. This digit is prepended to the three-digit numeric permission: a temporary directory with the sticky bit plus read, write, and execute permission for everyone would be mode 1777. A setuid, root-owned program writeable by nobody else would be 4700. You can use chmod to set numeric permissions directly, as in chmod 1777 /tmp.
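Numeric modes set all twelve bits in one shot, and GNU stat's %a format prints them back in octal for checking. A sketch on a scratch directory:

```shell
# Set and read back numeric permissions.
d=$(mktemp -d)
chmod 1777 "$d"
stat -c '%a' "$d"         # prints: 1777  (sticky + rwxrwxrwx)
chmod 0750 "$d"
stat -c '%a' "$d"         # prints: 750   (rwxr-x---)
rmdir "$d"
```

Note that stat drops leading zeros, so mode 0750 comes back as 750, exactly the three-versus-four-digit equivalence described above.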
In addition to a more precise use of chmod, numeric permissions are used with the umask command, which sets the default permissions. More precisely, it "masks" the default permissions: the umask value is subtracted from the maximum possible settings.* umask deals only with the three-digit permission, not the full four-digit value. A umask of 002 or 022 is most commonly the default. 022, subtracted from 777, is 755: read, write, and execute for the user, and read and execute for the group and others. 002 from 777 is 775: read, write, and execute for the user and group, and read and execute for others. I tend to set my umask to 077: read, write, and execute for myself, and nothing for my group or others. (Of course, when working on a group project, I set my umask to 007: my group and I can read, write, or execute anything, but others can't do anything with our files.)
You should note that the umask assumes that the execute bit on the file will be set. All umasks are subtracted from 777 rather than 666, and those extra ones are subtracted later, if necessary. (See Appendix B for more details on permission bits and umask workings.)
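You can watch the mask at work by creating files and directories under different umasks; new files start from 666 and new directories from 777 (GNU stat assumed):

```shell
# Observe umask shaping the modes of newly created files and directories.
d=$(mktemp -d); cd "$d"
umask 022
touch f1; mkdir d1
stat -c '%a %n' f1 d1     # prints: 644 f1, then 755 d1
umask 077
touch f2; mkdir d2
stat -c '%a %n' f2 d2     # prints: 600 f2, then 700 d2
cd /; rm -rf "$d"
```

The files come out execute-less (644, 600) because touch asks for 666, while the directories keep their execute bits (755, 700) because mkdir asks for 777.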
*Strictly speaking, the permission bits are ANDed with the complement of the umask, if you're a computer science type.
Also notice that the first character prepended to the permissions string indicates the file type. This is one handy way of identifying a file's type. Another is the file command, as shown in Table 3.3.
Table 3.3 ls File Types and file Output Samples

d  Directory
   [either:1 ~]file /usr
   /usr: directory

b  Block special device
   [linux:10 ~]file /dev/hda1
   /dev/hda1: block special (3/1)
   [sun:10 root ~]file /dev/dsk/c0t0d0s0
   /dev/dsk/c0t0d0s0: block special (136/0)

c  Character special device
   [linux:11 ~]file /dev/tty0
   /dev/tty0: character special (4/0)
   [ensis:11 ~]file /dev/rdsk/c0t0d0s0
   /dev/rdsk/c0t0d0s0: character special (136/0)

s  UNIX domain socket
   [linux:12 ~]file /dev/log
   /dev/log: socket
   [sun:12 ~]file /dev/ccv
   /dev/ccv: socket

p  Named pipe special (FIFO device)
   [linux:13 ~]file /dev/initctl
   /dev/initctl: fifo (named pipe)
   [sun:13 ~]file /etc/utmppipe
   /etc/utmppipe: fifo

l  Symbolic link
   [linux:14 ~]file /usr/tmp
   /usr/tmp: symbolic link to ../var/tmp
   [sun:14 ~]file -h /usr/tmp
   /usr/tmp: symbolic link to ../var/tmp

-  Regular file
   [linux:15 ~]file /etc/passwd
   /etc/passwd: ASCII text
   [linux:15 ~]file /boot/vmlinux-2.4.2-2
   /boot/vmlinux-2.4.2-2: ELF 32-bit LSB executable, Intel 80386, version 1, statically linked, not stripped
   [linux:15 ~]file /etc/rc.d/init.d/sshd
   /etc/rc.d/init.d/sshd: Bourne-Again shell script text executable
   [sun:15 ~]file /etc/passwd
   /etc/passwd: ascii text
   [sun:15 ~]file /kernel/genunix
   /kernel/genunix: ELF 32-bit MSB relocatable SPARC Version 1
   [sun:15 ~]file /etc/init.d/sshd
Notice the in-depth information that file gives; in many cases, it shows details about the file that no other command will readily display (such as what kind of executable the file is). These low-level details are beyond the scope of our discussion, but the man page has more information.
Important Points about the file Command
file tries to figure out what type a file is based on three types of tests:

The file type that the ls -l command returns.

The presence of a magic number at the beginning of the file identifying the file type. These numbers are defined in the file /usr/share/magic in Red Hat Linux 7.1 and /usr/lib/locale/locale/LC_MESSAGES/magic (or /etc/magic) in Solaris 8. Typically, only binary files will have magic numbers.

In the case of a regular/text file, the first few bytes are tested to determine the type of text representation and then to determine whether the file has a recognized purpose, such as C code or a Perl script.
file actually opens the file and changes the atime in the inode.
Inode lists are maintained by the filesystem itself, including which ones are free for use. Inode allocation and manipulation is entirely transparent to both sysadmins and users.
Inodes become significant for the sysadmin at two times: at filesystem creation time and when the filesystem runs out of free inodes. At filesystem creation time, the total number of inodes for the filesystem is allocated. Although they are not in use, space is set aside for them. You cannot add any more inodes to a filesystem after it has been created. When you run out of inodes, you must either free some up (by deleting or moving files) or migrate to another, larger filesystem.
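Most flavors let you check the inode budget with df. On Red Hat, GNU df's -i flag reports inode totals per filesystem; Solaris offers similar numbers for ufs via df -o i:

```shell
# Report inode usage alongside the familiar block usage.
df -i /                   # GNU df: Inodes, IUsed, IFree, IUse% for /
```

Watching the IUse% column is how you catch a filesystem that is about to run out of inodes while block space still looks plentiful (a classic symptom of directories full of many tiny files).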
Without inodes, files are just a random assortment of ones and zeros on the disk. There is no guarantee that a file will be stored sequentially within a sector or track, so without an inode to point the way to the data blocks, the file is lost. In fact, every file is uniquely identified by the combination of its filesystem and inode number.
See Appendix B for more detailed information on the exact content of inodes and their structure.
Linux has a very useful command called stat that dumps the contents of an inode in a tidy format:
[linux:9 ~]stat .
Size: 16384 Filetype: Directory
Mode: (0755/drwxr-xr-x) Uid: (19529/ robin) Gid:(20/users)
Device: 0,4 Inode: 153288707 Links: 78
Access: Sun Jul 22 13:58:29 2001(00009.04:37:59)
Modify: Sun Jul 22 13:58:29 2001(00009.04:37:59)
Change: Sun Jul 22 13:58:29 2001(00009.04:37:59)
Boot Block and Superblock
When a filesystem is created, two structures are automatically created, whether they are immediately used or not. The first is called the boot block, where boot-time information is stored. Because a partition may be made bootable at will, this structure needs to be available at all times.
The other structure, of more interest here, is the superblock. Just as an inode contains meta-information about a file, a superblock contains meta-information about a filesystem. Some of the more critical contents are listed here:18
Timestamp: last update
Superblock state flag
Filesystem state flag: clean, stable, active
Number of free blocks
List of free blocks
Pointer to next free block
Size of inode list
Number of free inodes
List of free inodes
Pointer to next free inode
Lock fields for free blocks and inodes
Summary data block
And you thought inodes were complex.
The superblock keeps track of free file blocks and free inodes so that the filesystem can store new files. Without these lists and pointers, a long, sequential search would have to be performed to find free space every time a file was created.
In much the same way that files without inodes are lost, filesystems without intact superblocks are inaccessible. That's why there is a superblock state flag: to indicate whether the superblock was properly and completely updated before the disk (or system) was last taken offline. If it was not, then a consistency check must be performed for the whole filesystem and the results stored back in the superblock.
Again, more circumstantial information about the superblock and its role in UNIX filesystems may live establish in Appendix B.
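On an ext2 filesystem, most of these superblock fields can be inspected directly with dumpe2fs (from the e2fsprogs package; availability and exact output vary by version). The sketch below builds a throwaway ext2 filesystem inside an ordinary file, so no real disk or root access is needed; the image path is arbitrary:

```shell
# Create a 1MB file and put an ext2 filesystem on it (-F: not a block device)
dd if=/dev/zero of=/tmp/demo.img bs=1k count=1024 2>/dev/null
mke2fs -F -q /tmp/demo.img

# -h dumps only the superblock: the state flag, free block and free inode
# counts, last mount and write timestamps, and so on
dumpe2fs -h /tmp/demo.img | grep -E 'state|Free blocks|Free inodes'
```

The "Filesystem state: clean" line is the state flag discussed above, and the free block/inode counts are the very lists the superblock maintains.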
Both Red Hat and Solaris recognize a multitude of different filesystem types, although you will generally end up using and supporting just a few. There are three standard types of filesystem (local, network, and pseudo), plus a fourth "super-filesystem" type that is actually losing ground, given the size of modern disks.
Local filesystems are common to every system that has its own local disk.19 Although there are many instances of this type of filesystem, they are all designed to work within a single system, managing the components discussed in the last section and interfacing with the physical drive(s).
Only a few local filesystems are specifically designed to be cross-platform (and sometimes even cross–OS-type). They come in handy, though, when you have a nondisk hardware failure; you can just take the disk and put it into another machine to retrieve the data.20 The UNIX File System, or ufs, was designed for this; both Solaris and Red Hat Linux machines can use disks with this filesystem. Note that Solaris uses ufs filesystems by default. Red Hat's default local filesystem is ext2.
Another local, cross-platform filesystem is ISO9660, the CD-ROM standard. This is why you can read your Solaris CD in a Red Hat box's reader.
Local filesystems come in two related but distinct flavors. The original, standard model of filesystem is still in wide use today. The newer journaling filesystem type is just beginning to really come into its own. The major difference between the two types is the way they track changes and perform integrity checks.
Standard, nonjournaling filesystems rely on flags in the superblock to regulate consistency. If the superblock flag is not set to "clean," then the filesystem knows that it was not shut down properly: not all write buffers were flushed to disk, and so on. Inconsistency in a filesystem means that allocated inodes could be overwritten and free inodes could be counted as in use; in short, rampant file corruption, mass hysteria.
But there is a filesystem integrity checker to save the day: fsck. This command is usually invoked automatically at boot-time to verify that all filesystems are clean and stable. If the / or /usr filesystems are inconsistent, the system might prompt you to start up a miniroot shell and manually run fsck. A few of the more critical items checked and corrected are listed here:
Unclaimed blocks and inodes (not in free list or in use)
Unreferenced but allocated blocks and inodes
Multiply claimed blocks and inodes
Bad inode formats
Bad directory formats
Bad free block or inode list formats
Incorrect free block or inode counts
Superblock counts and flags
Note that a filesystem should be unmounted before running fsck (see the later section "Administering Local Filesystems"). Running fsck on a mounted filesystem might cause a system panic and crash, or it might simply refuse to run at all. It's also best, though not required, that you run fsck on the raw device when possible. See the man page for more details and options.
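One safe way to experiment with fsck is against a filesystem image built in a plain file, which is never mounted and needs no root access (ext2 shown here; mke2fs and the ext2 checker come from e2fsprogs, so adjust the commands for other platforms):

```shell
# Build a scratch ext2 filesystem inside an ordinary file
dd if=/dev/zero of=/tmp/scratch.img bs=1k count=2048 2>/dev/null
mke2fs -F -q /tmp/scratch.img

# -f forces a full check even though the superblock is flagged clean;
# -y answers "yes" to any repair prompts (use that flag carefully on real disks)
fsck -fy /tmp/scratch.img
```

A freshly created filesystem comes back clean; on a real device that was not unmounted properly, this is where the unclaimed and multiply claimed blocks listed above would be reported and repaired.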
So where does fsck put orphans, the blocks and inodes that are clearly in use but aren't referenced anywhere? Enter the lost+found directories. There is always a /lost+found directory on every system; other directories acquire them as fsck finds orphans in their purview. fsck automatically creates the directories as needed and renames the lost blocks into them by inode number. See the man pages "mklost+found" on Red Hat and "fsck_ufs" on Solaris.
Journaling filesystems do away with fsck and its attendant superblock structures. All filesystem state information is internally tracked and monitored, in much the same way that database systems set up checkpoints and self-verifications.
With journaling filesystems, you have a better chance of full data recovery in the event of a system crash. Even unsaved data in buffers can be recovered thanks to the internal log.21 This type of fault tolerance makes journaling filesystems useful in high-availability environments.
The drawback, of course, is that when a filesystem like this gets corrupted somehow, it presents major difficulties for recovery. Most journaling filesystems provide their own salvaging programs for use in case of emergency. This underscores how critical backups are, no matter what type of filesystem software you've invested in. See Chapter 16, "Backups," for more information.
One of the earliest journaling filesystems is still a commercial venture: VxFS by Veritas. Another pioneer has decided to release its software into the public domain under GPL22 licensing: JFS23 by IBM. SGI's xfs journaling filesystem has been freely available under the GPL since about 1999, although it is only designed to work under IRIX and Linux.24
Maintaining filesystem state incurs an overhead when using journaling filesystems. As a result, these filesystems perform suboptimally at small filesystem sizes. Generally, journaling filesystems are appropriate for filesystems of 500MB or more.
Network-based filesystems are really add-ons to local filesystems, because the file server must have the actual data stored in one of its own local filesystems.25 Network filesystems have both a server and a client program.
The server usually runs as a daemon on the system that is sharing disk space. The server's local filesystems are unaffected by this extra process. In fact, the daemon generally only puts a few messages in the syslog and is otherwise only visible through ps.
The system that wants to access the server's disk space runs the client program to mount the shared filesystems across the network. The client program handles all the I/O so that the network filesystem behaves just like a local filesystem to the client machine.
The old standby for network-based filesystems is the Network File System (NFS). The NFS standard is currently up to revision 3, though there are quite a number of implementations with their own version numbers. Both Red Hat and Solaris come standard with NFS client and server packages. For more details on the inner workings and configuration of NFS, see Chapter 13, "File Sharing."
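As a sketch of how a client attaches an NFS share, the commands and table entry below use placeholder names ("fileserver" and the paths are invented examples). On Red Hat the permanent entry goes in /etc/fstab; Solaris uses /etc/vfstab with a different column layout:

```
# By hand, as root (Red Hat syntax first, then Solaris):
#   mount -t nfs fileserver:/export/home /home
#   mount -F nfs fileserver:/export/home /home

# Permanent /etc/fstab entry (Red Hat):
fileserver:/export/home  /home  nfs  rw,hard,intr  0 0
```

Once mounted, users see /home as if it were a local filesystem; the hard,intr options control how the client behaves if the server stops responding.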
Other network-based filesystems include AFS (IBM's Andrew File System) and DFS/DCE (Distributed File System, part of the Open Group's Distributed Computing Environment). The mechanisms of these advanced filesystems go beyond the scope of this book, although their goal is still the same: to efficiently share files across the network, transparently to the user.
Pseudofilesystems are an interesting development in that they are not actually related to disk-based partitions. They are instead purely logical constructs that represent information and meta-information in a hierarchical structure. Because of this structure, and because they can be manipulated with the mount command, they are still referred to as filesystems.
The best example of a pseudofilesystem exists on both Red Hat and Solaris systems: /proc. Under Solaris, /proc is restricted to just managing process information:
[sun:1 ~]ls /proc
0 145 162 195 206 230 262 265 272 286 299 303 342 370 403 408 672 752
1 155 185 198 214 243 263 266 278 292 3 318 360 371 404 52 674
142 157 192 2 224 252 264 268 280 298 302 319 364 400 406 58 678
Note that these directories are all named according to the process numbers corresponding to what you would find in the output of ps. The contents of each directory are the various pieces of meta-information that the system needs to manage the process.
Under Red Hat, /proc provides information about processes as well as about various system components and statistics:
[linux:1 ~] ls /proc
1 18767 23156 24484 25567 28163 4 493 674 8453 ksyms stat
13557 18933 23157 24486 25600 3 405 5 675 9833 loadavg swaps
13560 18934 23158 24487 25602 3050 418 5037 676 9834 locks sys
13561 18937 23180 24512 25603 3051 427 5038 7386 9835 mdstat tty
1647 19709 23902 24541 25771 3052 441 5054 7387 bus meminfo uptime
1648 19730 23903 24775 25772 30709 455 5082 7388 cmdline misc version
1649 19732 23936 25494 25773 30710 473 510 7414 cpuinfo modules
16553 19733 24118 25503 25824 30712 485 5101 7636 devices mounts
18658 2 24119 25504 25882 30729 486 524 7637 dma mtrr
18660 21450 24120 25527 25920 320 487 558 7638 filesystems net
18661 21462 24144 25533 26070 335 488 6 7662 fs partitions
18684 21866 24274 25534 26071 337 489 670 8426 interrupts pci
18685 21869 24276 25541 26072 338 490 671 8427 ioports scsi
18686 21870 24277 25542 28161 339 491 672 8428 kcore self
18691 21954 24458 25543 28162 365 492 673 8429 kmsg slabinfo
Again we see directories named for process numbers, but we also see directories with indicative names such as cpuinfo and loadavg. Because this is a hierarchical filesystem, you can cd into these directories and read the various files for their system information.
The most interesting thing about /proc is that it allows even processes to be treated like files.26 This means that pretty much everything in UNIX, whether it is something that just exists or something that actually happens, can now be considered a file.
For more information under Red Hat, type man proc. For more information under Solaris, type man -s 4 proc.
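A quick way to see this file-like behavior in action on a Linux box (the entries read below are standard /proc names):

```shell
# System-wide meta-information, read with ordinary file tools
cat /proc/loadavg        # load averages, running/total processes, last PID
head -3 /proc/meminfo    # memory statistics

# Per-process meta-information; "self" always refers to the reading process
ls /proc/self
head -2 /proc/self/status
```

No special tools are needed: cat, head, and ls work on these entries exactly as they would on disk-based files.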
Finally, there are the "super-filesystems," or logical volumes, that do what the other major types of filesystem cannot: surmount the barriers of partitions. You may well ask why anyone would want to do that. There are two reasons. First, because disks used to be a lot smaller and more costly, you used what you had at hand. If you needed a big pool of disk space, logical volumes allowed you to aggregate remnants into something usable. Second, even with larger disks, you still might not be able to achieve the amount of disk space required by a particular researcher or program. Once again, logical volumes allow you to aggregate partitions across disks to form one big filesystem.
Crossing disk boundaries with a logical volume is referred to as disk spanning. Once you have logical volumes, you can also have some fairly complex data management methods and performance-enhancing techniques. Disk striping, for example, is a performance booster. Instead of sequentially filling one disk and then the next in series, it spreads the data in discrete chunks across disks, allowing better I/O response through parallel operations.
RAID27 implements logical volumes at 10 distinct levels, with various features at each level. This implementation can be done either in hardware or in software, although the nomenclature for both is the same.28
Table 3.4 RAID Levels

Level      Description
RAID-1     Requires extra drives for data duplication (mirroring)
RAID-2     Requires three to five separate parity disks
RAID-3     Requires separate parity disk; reconstruction by parity data (not duplication)
RAID-4     Requires separate parity disk (very similar to RAID-3)
RAID-5     Rotating parity array; slowest for writes, but good for reads
RAID-6     RAID-5 + secondary parity (very similar to RAID-5); not in wide use
RAID-7     RAID-5 + real-time embedded controller; not in wide use
RAID-0+1   RAID-0 array duplicated (mirrored)
RAID-10    Each stripe is a RAID-1 (mirrored) array
RAID-53    Array of parity stripes; each stripe is a RAID-3 array
Clearly, the complexity inherent in all logical volume systems requires some kind of back-end management system. Red Hat offers the Logical Volume Manager (LVM) as a kernel module. While the details of LVM are beyond the scope of this book, it is interesting to note that you can put any filesystem that you want on top of the logical volume. Start at http://www.linuxdoc.org/HOWTO/LVM-HOWTO.html for more details.
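The LVM workflow follows the aggregation idea directly: physical volumes are pooled into a volume group, and logical volumes are carved out of the pool. The commands below are a sketch only; they require root, the LVM kernel module, and real devices, and all of the device and volume names are invented for illustration:

```
# Label two partitions as LVM physical volumes
pvcreate /dev/sdb1 /dev/sdc1

# Pool them into a single volume group ("datavg" is an example name)
vgcreate datavg /dev/sdb1 /dev/sdc1

# Carve a 1GB logical volume out of the pool
lvcreate -L 1G -n datalv datavg

# Put any filesystem you like on top of the logical volume
mke2fs /dev/datavg/datalv
mount /dev/datavg/datalv /mnt/data
```

Note that the final two steps treat the logical volume exactly like a partition, which is the whole point: the layers underneath are transparent to the filesystem.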
Although Sun offers logical volume management, it is through a for-pay program called "Solstice DiskSuite." The filesystem on DiskSuite logical volumes must be ufs. For more information, start at http://docs.sun.com/ab2/coll.260.2/DISKSUITEREF.
Another commercial logical volume manager for Solaris comes from Veritas; see: http://www.veritas.com/us/products/volumemanager/faq.html#a24
The beauty of all logical volumes is that they appear to be just another local filesystem and are completely transparent to the user. However, logical volumes do add some complexity for the systems administrator, and the schema should be carefully documented on paper in case it needs to be re-created.
Normally, a file server's disks are directly attached to the file server. With network-attached storage (NAS), the file server and the disks that it serves are separate entities, communicating over the local network. The storage disks require an aggregate controller that arbitrates file I/O requests from the external server(s). The server(s) and the aggregate controller each have distinct network IP addresses. To serve files to clients, a file (or application) server sends file I/O requests to the NAS aggregate controller and relays the results back to the client systems.
NAS is touched on here for completeness; entire books can be written about NAS design and implementation. NAS does not really represent a type of filesystem; rather, it is a mechanism to relieve the file server of the details of hardware disk access by isolating them in the network-attached storage unit.
Red Hat Filesystem Reference Table
Table 3.5 lists major filesystems that currently support (or are supported by) Red Hat.29 The filesystem types that are currently natively supported are listed in /usr/src/linux/fs/filesystems.c.
Table 3.5 Filesystem Types and Purposes, with Examples (Red Hat)
Purpose                                             Specific Instances (as Used in /etc/fstab)
Red Hat default filesystem                          ext2
Journaling filesystem from IBM                      jfs
Journaling filesystem from SGI                      xfs
Windows compatibility: DOS                          msdos
Windows compatibility: NT                           ntfs
Windows compatibility: FAT-32                       vfat
Other supported types                               adfs, affs, coda, devpts, hfs, hpfs, minix, ncpfs, romfs, smbfs, udf, umsdos
Deprecated, pre-kernel 2.1.21                       ext
Network-based remote communication                  nfs
Store process (and other system) meta-information   proc
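On a running Red Hat system, the types the kernel currently supports can also be read back at runtime through the /proc pseudofilesystem discussed earlier:

```shell
# Filesystem types the running kernel supports right now;
# "nodev" marks pseudofilesystems that need no backing block device
cat /proc/filesystems
```

Entries such as proc appear with the nodev flag, while disk-based types such as ext2 appear without it.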
Solaris Filesystem Reference Table
Table 3.6 lists major filesystems that currently support (or are supported by) Solaris. The filesystem types that are currently natively supported are listed as directories under /usr/lib/fs.
Table 3.6 Filesystem Types and Purposes, with Examples (Solaris)
Purpose                                          Specific Instances (as Used in /etc/vfstab)
Solaris default filesystem; Red Hat-compatible   ufs
Journaling filesystem from IBM
Network-based remote communication               nfs
Store process metainformation                    proc
Mount metainformation areas as filesystems       fdfs, swapfs, tmpfs, mntfs, cachefs, lofs, fifofs, specfs, udfs, namefs