Killexams.com VCS-256 Dumps and Real Questions
100% Real Questions - Exam Pass Guarantee with High Marks - Just Memorize the Answers
VCS-256 Exam Dumps Source : Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux
Test Code : VCS-256
Test Name : Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux
Vendor Name : Veritas
Questions : 84 Real Questions
Quick, complete and authentic question bank for the VCS-256 exam.
Passing VCS-256 was my goal for this year. A very long New Year's resolution, to put it in full: VCS-256. I honestly thought that studying for this exam, preparing to pass and sitting the VCS-256 exam would be just as crazy as it sounds. Thankfully, I found some reviews of killexams.com online and decided to use it. It ended up being totally worth it, as the bundle included every question I got on the VCS-256 exam. I passed the VCS-256 totally stress-free and came out of the testing center satisfied and relaxed. Definitely worth the money; I think this is the best exam experience possible.
Found all VCS-256 questions in the dumps that I saw in the actual test.
I cleared all the VCS-256 exams effortlessly. This website proved very useful in clearing the exams as well as in understanding the concepts. All questions are explained very well.
These VCS-256 dumps work great in the real test.
I am thankful to killexams.com for their mock test on VCS-256. I was able to pass the exam without problems. Thanks once more. I have also taken mock tests from you for my other exams. I am finding them very useful and am confident of clearing this exam by achieving more than 85%. Your question bank is very useful and the explanations are also excellent. I will give you a four-star rating.
Here are tips & tricks with dumps to pass the VCS-256 exam with high scores.
I am not a fan of online brain dumps, because they are frequently posted by irresponsible people who mislead you into learning things you don't need while leaving out things that you really need to know. Not killexams. This organization provides genuinely valid questions and answers that help you get through your exam preparation. That is how I passed the VCS-256 exam. The first time, I relied on free online material and I failed. Then I got the killexams.com VCS-256 exam simulator, and I passed. That is the only evidence I need. Thanks killexams.
Right place to find VCS-256 dumps.
Passed the VCS-256 exam the other day. I would never have managed it without your exam prep materials. Some months ago I failed that exam the first time I took it. Your questions are very much like the real ones. I passed the exam without problems this time. Thank you very much for your help.
What is the pass ratio of the VCS-256 exam?
I took the VCS-256 package from killexams.com, as it offered a solid level of readiness and ultimately gave me the preparation needed to score 90% on the VCS-256 test. I was genuinely delighted with how the material simplified things, and with its help I finally got the result I was after. It made my preparation much less difficult, and with the help of killexams.com I was able to do well.
I need the latest VCS-256 exam dumps.
Many thanks for your VCS-256 dumps. I recognized most of the questions, and you also had all the simulations that I was asked about. I got a 97 percent score. After trying numerous books, I was quite disappointed not to find the right materials. I was looking for a guide for the VCS-256 exam with simple language and well-prepared content. killexams.com fulfilled my need, as it explained the complex topics in the simplest manner. In the real exam I got 97%, which was beyond my expectation. Thanks killexams.com for your excellent guide!
Questions were exactly the same as the ones I got!
I wanted to get certified in VCS-256 and I got it with killexams. The nice layout of the new modules helped me attempt all 38 questions within the given time frame. I scored more than 87. I have to say that I could never have done it on my own; what I achieved was thanks to killexams.com. killexams.com offers the latest module of questions and covers the related topics. Thanks to killexams.com.
Fantastic opportunity to get certified with the VCS-256 exam.
I would regularly skip classes, and that would have been a big problem for me if my parents had found out. I needed to cover my mistakes and make sure that they could believe in me. I knew that one way to cover my mistakes was to do well on my VCS-256 test, which was very near. If I did well on my VCS-256 test, my parents would really love me again, and they did, because I was able to clear the test. It was killexams.com that gave me the proper instructions. Thank you.
I had no time to study VCS-256 books and training!
I am now VCS-256 certified, and it would not have been feasible without the killexams.com VCS-256 testing engine. The killexams.com testing engine has been tailored to the requirements of students, addressing what they face at the time of taking the VCS-256 exam. This testing engine is very exam-focused, and every topic has been addressed in detail to keep students apprised of all the information. The killexams.com team knows that this is the way to keep students confident and always ready for the exam.
It is a very hard job to choose reliable exam question/answer resources with respect to review, reputation and validity, because people get ripped off by choosing the wrong service. Killexams.com makes certain to provide its clients far better resources with respect to exam dump updates and validity. Most clients who were ripped off by other services come to us for the brain dumps and pass their exams enjoyably and easily. We never compromise on our review, reputation and quality, because the killexams review, killexams reputation and killexams client confidence are important to all of us. In particular we look after the killexams.com review, killexams.com reputation, killexams.com ripoff report complaints, killexams.com trust, killexams.com validity, killexams.com reports and killexams.com scam claims. If you ever see any bogus report posted by our competitors under names like "killexams ripoff report complaint internet", "killexams.com ripoff report", "killexams.com scam", "killexams.com complaint" or anything like this, just keep in mind that there are always bad people damaging the reputation of good services for their own benefit. There are a great number of satisfied customers who pass their exams using killexams.com brain dumps, killexams PDF questions, killexams practice questions and the killexams exam simulator. Visit killexams.com, check our test questions and sample brain dumps and our exam simulator, and you will know that killexams.com is the best brain dumps site.
Ensure your success with this VCS-256 question bank
Simply go through our question bank and feel confident about the VCS-256 test. You will pass your exam with high marks or get your money back. We have collected a database of VCS-256 dumps from real exams to let you prepare and pass the VCS-256 exam on the very first attempt. Simply set up our exam simulator and prepare. You will pass the exam.
The Veritas VCS-256 exam has given a new direction to the IT industry. It is now necessary to certify, because this qualification leads to a brighter future. Be that as it may, you still need to put great effort into the Veritas Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux test, because there is no escaping the study. killexams.com has made it easier: your test preparation for the VCS-256 Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux exam is no longer so demanding.
killexams.com Discount Coupons and Promo Codes are as under;
WC2017 : 60% Discount Coupon for all exams on the website
PROF17 : 10% Discount Coupon for Orders greater than $69
DEAL17 : 15% Discount Coupon for Orders greater than $99
SEPSPECIAL : 10% Special Discount Coupon for All Orders
killexams.com is a stable and dependable source that provides VCS-256 exam questions with a 100 percent pass guarantee. You need to practice the questions for at least a day to score well on the test. Your real journey to success in the VCS-256 exam surely starts with killexams.com test questions, the excellent and verified source for your focused preparation.
killexams.com helps a huge range of candidates pass their tests and get their certifications. We have a large number of successful reviews. Our dumps are reliable, affordable, updated and of truly good quality, enough to overcome the challenges of any IT certification. killexams.com exam dumps are kept up to date in a notably thorough manner on a regular basis, and material is released periodically. The most recent killexams.com dumps are available from the testing centers with whom we maintain our relationship, so we obtain the most recent material.
killexams.com Veritas certification study guides are set up by IT specialists. Many people complain that there are an excessive number of questions in such a large variety of practice exams and study resources, and that they simply cannot afford any more. Seeing killexams.com experts work out this comprehensive edition, while still guaranteeing that all the necessary knowledge is covered after deep research and analysis, everything is designed to make the process convenient for candidates on their road to certification.
We have tested and approved VCS-256 exams. killexams.com offers the most precise and most recent IT exam materials, which cover almost all exam topics. With the guidance of our VCS-256 study materials, you don't need to waste your time reading the bulk of reference books; you only need to spend 10-20 hours to master our VCS-256 real questions and answers. What's more, we provide you with a PDF version and a software version of the exam questions and answers. The software version lets candidates simulate the Veritas VCS-256 exam in a realistic environment.
We give free updates. Within the validity period, if the VCS-256 exam materials that you have purchased are updated, we will let you know by email so you can download the most recent version. If you don't pass your Veritas Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux exam, we will give you a full refund. You need to send the scanned copy of your VCS-256 exam report card to us. After confirming it, we will promptly give you a FULL REFUND.
killexams.com Huge Discount Coupons and Promo Codes are as below;
WC2017 : 60% Discount Coupon for all exams on the website
PROF17 : 10% Discount Coupon for Orders greater than $69
DEAL17 : 15% Discount Coupon for Orders greater than $99
OCTSPECIAL : 10% Special Discount Coupon for All Orders
If you get ready for the Veritas VCS-256 exam using our exam simulator engine, it is easy to succeed in all certifications on the very first attempt. You don't have to deal with full dumps or any free torrent / rapidshare stuff. We offer a free demo of every IT certification dump. You can check out the interface, question quality and ease of use of our practice exams before you decide to buy.
Administration of Veritas InfoScale Availability 7.1 for UNIX/Linux
January 24, 2000Web posted at: 12:11 p.m. EST (1711 GMT)
by John Bass and James Robinson, Network World Test Alliance
(IDG) -- It all boils down to what you're looking for in a network operating system (NOS).
Do you want it lean and flexible so you can install it any way you please? Perhaps administration bells and management whistles are what you need so you can deploy several hundred servers. Or maybe you want an operating system that's robust enough that you sleep like a baby at night?
The good news is that there is a NOS waiting just for you. After the rash of recent software revisions, we took an in-depth look at four of the major NOSes on the market: Microsoft's Windows 2000 Advanced Server, Novell's NetWare 5.1, Red Hat Software's Linux 6.1 and The Santa Cruz Operation's (SCO) UnixWare 7.1.1. Sun declined our invitation to submit Solaris because the company says it's working on a new version.
Microsoft's Windows 2000 edges out NetWare for the Network World Blue Ribbon Award. Windows 2000 tops the field with its management interface, server monitoring tools, storage management facilities and security measures.
However, if it's performance you're after, no product came close to Novell's NetWare 5.1's numbers in our exhaustive file service and network benchmarks. With its lightning-fast engine and Novell's directory-based administration, NetWare offers a great foundation for an enterprise network.
We found that the latest release of Red Hat's commercial Linux bundle led the list for flexibility, because its modular design lets you pare down the operating system to suit the job at hand. Additionally, you can create scripts out of multiple Linux commands to automate tasks across a distributed environment.
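For example, a short script can chain standard Linux commands and fan them out over ssh. The sketch below is purely illustrative (the host names and the `df`/`awk` pipeline are placeholder assumptions, not anything from the review); it builds the per-host commands without assuming ssh access:

```python
import subprocess

def build_remote_command(host, command):
    """Wrap a shell pipeline so it can be run on a remote host via ssh."""
    return ["ssh", host, command]

def check_disk_usage(hosts, threshold=90):
    """Build, for each host, a command that reports filesystems fuller
    than `threshold` percent.  Returns the argument lists; actually
    executing them requires ssh access to the hosts."""
    # 'df -P' gives POSIX-format output; awk strips the '%' and filters.
    pipeline = "df -P | awk 'int($5) > %d {print $6, $5}'" % threshold
    return [build_remote_command(h, pipeline) for h in hosts]

if __name__ == "__main__":
    for cmd in check_disk_usage(["web1", "web2", "db1"]):
        print(" ".join(cmd))
        # subprocess.run(cmd)  # uncomment to actually execute over ssh
```

The same pattern works for any command-line tool, which is the flexibility the modular design buys you.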
While SCO's UnixWare seemed to lag behind the pack in terms of file service performance and NOS-based administration features, its scalability features make it a strong candidate for running enterprise applications.
The numbers are in
Regardless of the job you saddle your server with, it has to perform well at reading and writing files and sending them across the network. We designed two benchmark suites to measure each NOS in these two categories. To reflect the real world, our benchmark tests cover a wide range of server conditions.
NetWare was the hands-down leader in our performance benchmarking, taking first place in two-thirds of the file tests and earning top billing in the network tests.
Red Hat Linux followed NetWare in file performance overall and even outpaced the leader in file tests where the read/write loads were small. However, Linux did not perform well handling large loads - those tests in which there were more than 100 users. Under heavier user loads, Linux had a tendency to stop servicing file requests for a short term and then start up again.
Windows 2000 demonstrated poor write performance across all our file tests. In fact, we found that its write performance was about 10% of its read performance. After consulting with both Microsoft and Client/Server Solutions, the author of the Benchmark Factory testing tool we used, we determined that the poor write performance could be due to two factors. One, which we were unable to verify, might be a possible performance problem with the SCSI driver for the hardware we used.
More significant, though, was an issue with our test software. Benchmark Factory sends a write-through flag in each of its write requests that is supposed to cause the server to update cache, if appropriate, and then force a write to disk. When the write to disk occurs, the write call is released and the next request can be sent.
At first glance, it appeared as if Windows 2000 was the only operating system to honor this write-through flag, because its write performance was so poor. Therefore, we ran a second round of write tests with the flag turned off.
With the flag turned off, NetWare's write performance increased by 30%. This test proved that Novell does indeed honor the write-through flag and will write to disk for each write request when that flag is set. But when the write-through flag is disabled, NetWare writes to disk in a more efficient manner by batching together contiguous blocks of data in the cache and writing all those blocks to disk at once.
Likewise, Red Hat Linux's performance increased by 10% to 15% when the write-through flag was turned off. When we examined the Samba file system code, we found that it too honors the write-through flag. The Samba code then finds an optimum time during the read/write sequence to write to disk.
This second round of file testing proves that Windows 2000 is dependent on its file system cache to optimize write performance. The results of the testing with the write-through flag off were much higher - as much as 20 times faster. However, Windows 2000 still fell behind both NetWare and Red Hat Linux in the file write tests when the write-through flag was off.
SCO honors the write-through flag by default, since its journaling file system is constructed to maximize data integrity by writing to disk for all write requests. The results in the write tests with the write-through flag on were very similar to the test results with the write-through flag turned off.
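The write-through flag the benchmark sends corresponds to synchronous writes at the file-system level. A minimal sketch of the distinction, assuming a POSIX system (the file names are placeholders; this illustrates the concept, not the benchmark's actual code):

```python
import os
import tempfile

def buffered_write(path, data):
    """Ordinary write: the OS may cache the data and flush to disk later."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

def write_through(path, data):
    """O_SYNC asks the kernel not to return until the data reaches disk,
    analogous to honoring the write-through flag on each request."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
    try:
        os.write(fd, data)
    finally:
        os.close(fd)

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        p = os.path.join(d, "demo.dat")
        write_through(p, b"synchronous payload")
        with open(p, "rb") as f:
            print(f.read())
```

A server that batches cached blocks before flushing, as NetWare does with the flag off, trades this per-request durability for throughput.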
For the network benchmark, we developed two tests. Our long TCP transaction test measured the bandwidth each server can sustain, while our short TCP transaction test measured each server's ability to handle large numbers of network sessions with small file transactions.
Despite a poor showing in the file benchmark, Windows 2000 came out on top in the long TCP transaction test. Windows 2000 is the only NOS with a multithreaded IP stack, which allows it to handle network requests with multiple processors. Novell and Red Hat say they are working on integrating this capability into their products.
NetWare and Linux also registered strong long TCP test results, coming in second and third, respectively.
In the short TCP transaction test, NetWare came out the clear winner. Linux earned second place in spite of its lack of support for abortive TCP closes, a method by which an operating system can quickly tear down TCP connections. Our testing software, Ganymede Software's Chariot, uses abortive closes in its TCP tests.
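An abortive close is conventionally requested through the sockets API by setting SO_LINGER with a zero timeout, which makes `close()` send an RST instead of going through the normal FIN handshake. A minimal sketch (a loopback demonstration, not Chariot's code):

```python
import socket
import struct

def abortive_close(sock):
    """Close a TCP socket abortively: SO_LINGER with l_onoff=1 and
    l_linger=0 makes close() send an RST rather than a FIN."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                    struct.pack("ii", 1, 0))
    sock.close()

if __name__ == "__main__":
    # Loopback demonstration: the peer sees a connection reset, not EOF.
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    cli = socket.socket()
    cli.connect(srv.getsockname())
    conn, _ = srv.accept()
    abortive_close(cli)
    try:
        conn.recv(1)
        print("peer saw orderly shutdown")
    except ConnectionResetError:
        print("peer saw connection reset")
    conn.close()
    srv.close()
```

Because an RST skips the TIME_WAIT state, a test tool can recycle connections much faster this way, which is why a stack's handling of abortive closes matters in this benchmark.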
Moving into management
As enterprise networks grow to require more servers and support more end users, NOS management tools become crucial elements in keeping networks under control. We looked at the management interfaces of each product and drilled down into how each handled server monitoring, client administration, file and print management, and storage management.
We found that Windows 2000 and NetWare provide equally useful management interfaces.
Microsoft Management Console (MMC) is the glue that holds most of the Windows 2000 management functionality together. This configurable graphical user interface (GUI) lets you snap in Microsoft and third-party applets that customize its functionality. It's a two-paned interface, much like Windows Explorer, with a nested list on the left and selection details on the right. The console is easy to use and lets you configure many local server elements, including users, disks, and system settings such as time and date.
MMC also lets you implement management policies for groups of users and computers using Active Directory, Microsoft's new directory service. From the Active Directory management tool inside MMC, you can configure users and change policies.
The network configuration tools are found in a separate application that opens when you click on the Network Places icon on the desktop. Each network interface is listed inside this window. You can add and change protocols and configure, enable and disable interfaces from here without rebooting.
NetWare offers several interfaces for server configuration and management. These tools offer duplicate functionality, but each is useful depending on where you are trying to manage the system from. The System Console offers a number of tools for server configuration. One of the most useful is NWConfig, which lets you change start-up files, install system modules and configure the storage subsystem. NWConfig is simple, intuitive and predictable.
ConsoleOne is a Java-based interface with a few graphical tools for managing and configuring NetWare. Third-party administration tools can plug into ConsoleOne and let you manage multiple services. We think ConsoleOne's interface is a bit unsophisticated, but it works well enough for those who must have a Windows-based manager.
Novell also offers a Web-accessible management application called NetWare Management Portal, which lets you manage NetWare servers remotely from a browser, and NWAdmin32, a relatively simple client-side tool for administering Novell Directory Services (NDS) from a Windows 95, 98 or NT client.
Red Hat's overall systems management interface is called LinuxConf and can run as a graphical or text-based application. The graphical interface, which resembles that of MMC, works well but has some layout issues that make it difficult to use at times. For example, when you run a setup application that takes up a lot of the screen, the system resizes the application larger than the desktop size.
Still, you can manage pretty much anything on the server from LinuxConf, and you can use it locally or remotely over the Web or via telnet. You can configure system parameters such as network addresses, file system settings and user accounts, and set up add-on services such as Samba - a service that lets Windows clients get at files residing on a Linux server - and FTP and Web servers. You can apply changes without rebooting the system.
Overall, Red Hat's interface is useful and the underlying tools are powerful and flexible, but LinuxConf lacks the polish of the other vendors' tools.
SCO Admin is a GUI-based front end for about 50 SCO UnixWare configuration and management tools in one window. When you click on a tool, it brings up the application to manage that item in a separate window.
Some of SCO's tools are GUI-based while others are text-based. The server required a reboot to apply many of the changes. On the plus side, you can manage multiple UnixWare servers from SCOAdmin.
SCO also offers a useful Java-based remote administration tool called WebTop that works from your browser.
An eye on the servers and clients
One important administration job is monitoring the server itself. Microsoft leads the pack in how well you can keep an eye on your server's internals.
The Windows 2000 System Monitor lets you view a real-time, running graph of system operations, such as CPU and network utilization, and memory and disk usage. We used these tools extensively to determine the effect of our benchmark tests on the operating system. Another tool called Network Monitor has a basic network packet analyzer that lets you see the types of packets coming into the server. Together, these Microsoft utilities can be used to compare performance and capacity across multiple Windows 2000 servers.
NetWare's Monitor utility displays processor utilization, memory usage and buffer utilization on a local server. If you know what to look for, it can be a powerful tool for diagnosing bottlenecks in the system. Learning the meaning of each of the monitored parameters is a bit of a challenge, though.
If you want to look at performance statistics across multiple servers, you can tap into Novell's Web Management Portal.
Red Hat offers the standard Linux command-line tools for monitoring the server, such as iostat and vmstat. It has no graphical monitoring tools.
As with any Unix operating system, you can write scripts to automate these tools across Linux servers. However, these tools are typically cryptic and require a high level of proficiency to use effectively. A suite of graphical monitoring tools would be a great addition to Red Hat's Linux distribution.
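As an illustration of wrapping those cryptic tools, here is a hypothetical sketch that parses `vmstat`-style output and flags samples with little idle CPU. The column layout is an assumption based on a typical Linux `vmstat` report; the sample data is fabricated for the demonstration:

```python
def busy_samples(vmstat_output, idle_threshold=10):
    """Return the data rows whose 'id' (idle CPU %) column is below
    idle_threshold - i.e. moments when the server was nearly saturated."""
    lines = [l for l in vmstat_output.strip().splitlines() if l.strip()]
    # The second header line names the columns; find where 'id' lives.
    header = lines[1].split()
    idle_col = header.index("id")
    flagged = []
    for row in lines[2:]:
        fields = row.split()
        if int(fields[idle_col]) < idle_threshold:
            flagged.append(row)
    return flagged

SAMPLE = """\
procs                      memory      swap          io     system      cpu
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id
 1  0      0  81120  12040  53232    0    0     3     1  105   32  2  1 97
 4  0      0  64500  12040  53232    0    0    88    40  412  980 70 25  5
"""

if __name__ == "__main__":
    for row in busy_samples(SAMPLE):
        print(row)
```

Feeding it live output (`vmstat 5 12`, say) and mailing the flagged rows is exactly the kind of glue scripting that substitutes for a graphical monitor.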
UnixWare also offers a number of monitoring tools. System Monitor is UnixWare's simple but limited GUI for monitoring processor and memory utilization. The sar and rtpm command-line tools together list real-time system utilization of buffers, CPUs and disks. Together, these tools give you a good overall idea of the load on the server.
Along with managing the server, you must manage its users. It's no surprise that the two NOSes that ship with an integrated directory service topped the field in client administration tools.
We were able to configure user permissions via Microsoft's Active Directory and the directory administration tool in MMC. You can group users and computers into organizational units and apply policies to them.
You can manage Novell's NDS and NetWare clients with ConsoleOne, NWAdmin or NetWare Management Portal. Each can create users, manage file space, and set permissions and rights. Additionally, NetWare ships with a five-user version of Novell's ZENworks tool, which offers desktop administration services such as hardware and software inventory, software distribution and remote control services.
Red Hat Linux doesn't offer much in the way of client administration features. You must control local users through Unix permission configuration mechanisms.
UnixWare is similar to Red Hat Linux in terms of client administration, but SCO provides some Windows binaries on the server to remotely set file and directory permissions from a Windows client, as well as create and change users and their settings. SCO and Red Hat offer support for the Unix-based Network Information Service (NIS). NIS is a store for network information such as logon names, passwords and home directories. This integration helps with client administration.
Handling the staples: File and print
A NOS is nothing without the ability to share file storage and printers. Novell and Microsoft collected top honors in these areas.
You can easily add and maintain printers in Windows 2000 using the print administration wizard, and you can add file shares using Active Directory management tools. Windows 2000 also offers Distributed File Services, which lets you combine files on more than one server into a single share.
Novell Distributed Print Services (NDPS) lets you quickly incorporate printers into the network. When NDPS senses a new printer on the network, it defines a Printer Agent that runs on the printer and communicates with NDS. You then use NDS to define the policies for the new printer.
You define NetWare file services by creating and then mounting a disk volume, which also manages volume policies.
Red Hat includes Linux's printtool utility for setting up server-connected and network printers. You can also use this GUI to create printcap entries to define printer access.
Linux has a set of command-line file system configuration tools for mounting and unmounting partitions. Samba ships with the product and provides some integration for Windows clients. You can configure Samba only through a cryptic ASCII configuration file - a serious drawback.
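That configuration file is Samba's `smb.conf`. A minimal hypothetical share definition, to give a flavor of the format (the workgroup name and path are placeholders):

```ini
[global]
   workgroup = ENGINEERING
   ; authenticate each connecting Windows user against a Unix account
   security = user
   encrypt passwords = yes

[shared]
   comment = Group file share for Windows clients
   path = /export/shared
   read only = no
   browseable = yes
```

Every option lives in this one flat file, which is what makes Samba administration feel cryptic next to the GUI tools the other NOSes provide.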
UnixWare provides a flexible GUI-based printer setup tool called Printer SetUp Manager. For file and volume management, SCO offers a tool called VisionFS for interoperability with Windows clients. We used VisionFS to allow our NT clients to access the UnixWare server. This service was easy to configure and use.
Windows 2000 provides the best tools for storage management. Its graphical Manage Disks tool for local disk configuration includes software RAID management; you can dynamically add disks to a volume set without having to reboot the system. Additionally, a signature is written to each of the disks in an array so that they can be moved to another Windows 2000 server without having to configure the volume on the new server. The new server recognizes the drives as members of a RAID set and adds the volume to the file system dynamically.
NetWare's volume management tool, NWConfig, is easy to use, but it can be a little confusing to set up a RAID volume. Once we knew what we were doing, we had no problems formatting drives and creating a RAID volume. The tool looks a little primitive, but we give it high marks for functionality and ease of use.
Red Hat Linux offers no graphical RAID configuration tools, but its command line tools made RAID configuration easy.
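In that era, the command-line path ran through the raidtools package: you describe the array in `/etc/raidtab` and initialize it with `mkraid`. A hypothetical RAID-5 entry (the device names are placeholders):

```
# /etc/raidtab -- software RAID-5 across three SCSI disks
raiddev /dev/md0
    raid-level            5
    nr-raid-disks         3
    persistent-superblock 1
    chunk-size            32
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1
    device                /dev/sdc1
    raid-disk             2
```

After that, `mkraid /dev/md0` would build the array and `mke2fs /dev/md0` would put a file system on it - terse, but scriptable.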
To configure disks on the UnixWare server, we used the Veritas Volume Manager graphical disk and volume administration tool that ships with UnixWare. We had some problems initially getting the tool to recognize the drives so they could be formatted. We managed to work around the disk configuration problem using an assortment of command-line tools, after which Volume Manager worked well.
While we did not probe these NOSes extensively to expose any security weaknesses, we did look at what they offered in security features.
Microsoft has made significant strides with Windows 2000 security. Windows 2000 supports Kerberos, with public key certificates, as its primary authentication mechanism within a domain, and allows additional authentication with smart cards. Microsoft provides a Security Configuration tool that integrates with MMC for easy management of security objects in the Active Directory system, and a new Encrypting File System that lets you designate volumes on which files are automatically stored using encryption.
Novell added support for a public-key infrastructure into NetWare 5 using a public certificate schema developed by RSA Security that lets you tap into NDS to generate certificates.
Red Hat offers a basic Kerberos authentication mechanism. With Red Hat Linux, as with most Unix operating systems, network services can be individually controlled to increase security. Red Hat offers Pluggable Authentication Modules (PAM) as a way of allowing you to set authentication policies across programs running on the server. Passwords are protected with a shadow file. Red Hat also bundles firewall and VPN services.
UnixWare has a set of security tools called Security Manager that lets you set up varying degrees of intrusion protection across your network services, from no restriction to turning complete network services off. It's a safe management time saver, though you could manually modify the services to achieve the very result.
Stability and fault tolerance
The most feature-rich NOS is of little value if it can't keep a server up and running. Windows 2000 offers software RAID 0, 1 and 5 configurations to provide fault tolerance for onboard disk drives, and has a built-in network load-balancing feature that allows a group of servers to look like one server and share the same network name and IP address. The group decides which server will service each request. This not only distributes the network load across several servers, it also provides fault tolerance in case a server goes down. On a lesser scale, you can use Microsoft's Failover Clustering to provide basic failover services between two servers.
As with NT 4.0, Windows 2000 provides memory protection, which means that each process runs in its own address space.
There are also backup and restore capabilities bundled with Windows 2000.
Novell has an add-on product for NetWare called Novell Cluster Services that allows you to cluster as many as eight servers, all managed from one location using ConsoleOne, NetWare Management Portal or NWAdmin32. But Novell presently offers no clustering products to provide load balancing for applications or file services. NetWare has an elaborate memory protection scheme to segregate the memory used for the kernel and applications, and a Storage Management Services module to provide a highly flexible backup and restore facility. Backups can be all-inclusive, cover parts of a volume or store a differential snapshot.
Red Hat provides a load-balancing product called Piranha with its Linux distribution. This package provides TCP load balancing between servers in a cluster. There is no hard limit to the number of servers you can configure in a cluster. Red Hat Linux also provides software RAID support through command line tools, has memory protection capabilities and provides a rudimentary backup facility.
SCO provides an optional feature to cluster several servers in a load-balancing environment with Non-Stop Clustering for a high level of fault tolerance. Currently, Non-Stop Clustering supports six servers in a cluster. UnixWare provides software RAID support that is managed using SCO's On-Line Data Manager feature. All the standard RAID levels are supported. Computer Associates' bundled ArcServeIT 6.6 provides backup and restore capabilities. UnixWare has memory protection capabilities.
Because our testing was conducted before Windows 2000's general availability ship date, we were not able to evaluate its hard-copy documentation. The online documentation provided on a CD is extensive, useful and well-organized, although the Web interface would be much easier to use if it gave more than a couple of sentences at a time for a particular help topic.
NetWare 5 comes with two manuals: a detailed manual for installing and configuring the NOS, with good explanations of concepts and features along with an overview of how to configure them, and a small spiral-bound booklet of quick start cards. Novell's online documentation is very helpful.
Red Hat Linux comes with three manuals - an installation guide, a getting started guide and a reference manual - all of which are easy to follow.
Despite being the most difficult product to install, UnixWare offers the best documentation. It comes with two manuals: a system handbook and a getting started guide. The system handbook is a reference for conducting the installation of the operating system, and it does a good job of guiding you through that painful experience. The getting started guide is well-written and well-organized. It covers many of the tools needed to configure and maintain the operating system. SCO's online documentation looks nice and is easy to follow.
The bottom line is that these NOSes offer a wide range of capabilities and provide enterprise customers with a great deal of choice regarding how each can be used in any given corporate network.
If you want a good, general-purpose NOS that can deliver enterprise-class services with all the bells and whistles imaginable, then Windows 2000 is the strongest contender. However, for high-performance, enterprise file and print services, our tests show that Novell leads the pack. If you're willing to pay a higher price for scalability and reliability, SCO UnixWare would be a good bet. But if you need an inexpensive alternative that will give you bare-bones network services with decent performance, Red Hat Linux can certainly fit the bill.
The choice is yours.
Bass is the technical director and Robinson is a senior technical staff member at Centennial Networking Labs (CNL) at North Carolina State University in Raleigh. CNL focuses on performance, capacity and features of networking and server technologies and equipment.
ActiveBatch Gets Blackberry Functionality
Administrators have long been able to receive pages when servers go down, but now they can restart servers with their pagers. Advanced Systems Concepts Inc. has added the Blackberry line of pagers to its list of clients for the ActiveBatch Job Scheduling and Management System.
The ActiveBatch Wireless Client is a module for the management software that enables administrators to monitor systems and initiate processes from the Blackberry. ActiveBatch Job Scheduling and Management System allows users to set up calendars to initiate processes such as backups or printing, or initiate processes from remote clients.
Ben Rosenberg, CEO of Advanced Systems, says the company chose to support the Blackberry first since it was the handheld best suited for round-the-clock monitoring. "The battery life is three weeks, and it's always on," he says. Advanced Systems supports both the pager-sized and PDA-sized Blackberries.
If a system sends out an SNMP alert, administrators can configure the system to send an e-mail to a Blackberry, alerting the administrator. The e-mail gives the administrator the option to initiate processes, such as rebooting a server, through the Blackberry. "With the Blackberry, e-mails are always actionable by you," Rosenberg says.
Rosenberg sees two advantages to system management through wireless devices. First, it obviates the need to give instructions over the phone to a less experienced operator. Second, high-level administrators who travel can keep an eye on the system. "If you're on the road, you're able to know if something is wrong," he says. With both advantages, administrators will be better able to guarantee uptime, with less impact on their lives.
In addition to the three levels of encryption standards on Blackberry devices, ActiveBatch provides additional security features, such as a password login to the system. This keeps random users, including thieves, from wreaking havoc on corporate systems. "Use of ActiveBatch is always secure," Rosenberg says.
ActiveBatch can manage Windows, OpenVMS and Unix-based systems with an agent on each server. The agent sends information to a central Windows console. The software integrates with Windows Management Instrumentation, which also serves as an SNMP provider. ActiveBatch provides three plug-ins for remote clients: e-mail, browser and now the Blackberry.
Rosenberg says Advanced Systems is working to bring ActiveBatch to PocketPC handhelds. He says that although users can already use them with the browser-based system, the company will adjust the system to better meet the needs and limitations of the PocketPC platform.
Contact: Advanced Systems Concepts Inc., www.advsyscon.com, (201) 798-6400
SafeStone Provides iSeries Support to RSA Security
Security management provider SafeStone Technologies plc. has added iSeries 400 features to an existing partnership with RSA Security Inc. Under the enhanced agreement, SafeStone is making RSA’s SecurID authentication tool usable on an iSeries 400 platform.
Using its DetectIT Agent 400 interface, SafeStone is enabling two-factor authentication. Two-factor authentication requires an individual to live verified twice before access is allowed to systems.
DetectIT is an offering designed by SafeStone to protect iSeries 400 exit points from unauthorized user access to confidential data, applications and resources within an open-connectivity environment.
Through DetectIT, RSA's iSeries-based users will be able to leverage software solutions for auditing, data and system management, e-business security, and application and access control for single or multiple networked iSeries 400s.
As part of its agreement with RSA, SafeStone will act as RSA's IBM iSeries business partner, handling all sales and support responsibility for DetectIT. In this role, SafeStone, which is also an IBM partner for systems management and development, will offer DetectIT to RSA's customers as either a standalone or fully integrated offering.
Contact: RSA Security, Inc., www.rsa.com, (781) 301-5000
SafeStone Technologies plc, www.safestone.com
Vendors Make Linux Itanium-Ready
With Intel Corp.'s May release of its 64-bit Itanium processor, Linux vendors are lining up to support the new architecture. Red Hat Inc., TurboLinux Inc., SuSE AG and Caldera International Inc. all formally released distributions for Itanium.
To coincide with the announcement, TurboLinux released its Operating System 7 for the Itanium processor. "It's production-ready," says Thrane Jensen, product manager for Itanium. However, Jensen admits that many users will use early Itanium machines for testing and development rather than in production environments.
Bill Claybrook, research director for Linux and open source at the Aberdeen Group, confirms that "most people are waiting for McKinley." He believes that users will wait for Intel to release McKinley, its second-generation IA-64 processor, before they integrate IA-64 into their environments. "They're being a little bit leery of it [in] a production environment," he says. Jensen says TurboLinux is already working on its McKinley version of Linux.
Jensen says that porting Linux to the IA-64 processor had its challenges. The 64-bit nature of the processor created issues in moving applications over to the new chip. "Dependencies on 32-bit create problems," he says. Some applications addressed specific 32-bit features that do not exist in Itanium. For the most part, applications could be recompiled for the chip. "In general, it's along the same code line," he says, "but the kernel has [a lot of] different stuff."
In addition to the core operating system, Jensen says many popular Linux applications are also ready for prime time. Apache and other commonly used applications are production-ready, but "ISVs are going to be doing more application development," he says.
Red Hat released its Red Hat Linux 7.1 for the Itanium processor in mid-June. Using the 2.4 kernel, Red Hat positions the new release as a platform for testing 64-bit applications ported from 32-bit and RISC machines. The distribution is also suited to enterprise server needs; it runs on up to eight processors and offers new configuration tools for BIND, Apache and printing.
At the same time, Linux vendor SuSE released an Itanium-specific distribution. SuSE Linux 7.2 for IA-64 uses six CD-ROMs to carry over 1,500 applications for the emerging platform. Like Red Hat, the company bills the package as a solution for evaluating and deploying Itanium-based servers.
Although a preview version was already available from the Caldera FTP site at ftp.caldera.com/ia64, Caldera released two new versions in May, accompanied by a public announcement. The final production version of OpenLinux Server 64 should be available late in the third quarter.
Biff Traber, senior vice president and general manager of the server business line at Caldera, says Caldera has little to lose by waiting to release a production version. Customers will look to the distribution for evaluation purposes, so a beta release meets their needs. "It's a combination of testing, development and prototyping," he says.
The Trillian project, which initiated development of a Linux kernel for the Itanium processor, first released a kernel in February 2000, predating Itanium's general availability by over a year. Intel was aggressive in getting prototype chips to developers to ensure a market, providing hardware, remote servers and emulators to enable open source developers to have Linux ready for the release date.
The project later changed its name to the more formal-sounding IA-64 Linux Project and worked to further the development of Linux on Itanium. Itanium is not the first 64-bit platform to run Linux; there were already flavors of Linux for Sun Microsystems Inc.'s Sparc processor and Compaq Computer Corp.'s Alpha. In addition to the distributors, the IA-64 Linux Project also boasted hardware vendors Hewlett-Packard Co., IBM Corp., Silicon Graphics Inc., VA Linux Systems Inc. and NEC Corp., as well as Intel and Swiss research laboratory CERN.
If we think of filesystems as a mechanism for both storing and locating data, then the two key elements for any filesystem are the items being stored and the list of where those items are. The deeper details of how a given filesystem manipulates its data and meta-information go beyond the scope of this chapter but are addressed further in Appendix B, "Anatomy of a Filesystem."
Filesystem Components That the Admin Needs to Know About
As always, we need to get a handle on the vocabulary before we can understand how the elements of a filesystem work together. The next three sections describe the basic components with which you, as a sysadmin, need to be familiar.
The most intuitively obvious components of a filesystem are, of course, its files. Because everything in UNIX is a file, special functions are differentiated by file type. There are fewer file types than you might imagine, as Table 3.2 shows.
Table 3.2 File Types and Purposes, with Examples
Directory: maintains information for directory structure.
Block special device: buffered device file.
Character special device: raw device file.
UNIX domain socket: interprocess communication (IPC). Examples: see the output of Linux: netstat -x; Solaris: netstat -f unix.
Named pipe special (FIFO device): first-in, first-out IPC mechanism, invoked by name. Examples: Linux: /dev/initctl; Solaris: /etc/utmppipe, /etc/cron.d/FIFO.
Symbolic link: pointer to another file (any type). Example: /usr/tmp -> ../var/tmp.
Regular file: all other files; holds data of all other types. Examples: text files, object files, database files, executables/binaries.
Notice that directories are a kind of file. The key is that they have a specific format and contents (see Appendix B for more details). A directory holds the filenames and index numbers (see the following section, "Inodes") of all its constituent files, including subdirectories.
Directory files are not flat (or regular) files, but are indexed (like a database), so that you can still locate a file quickly when you have a large number of files in the same directory.13
Even though file handling is generally transparent, it is important to remember that a file's data blocks14 may not be stored sequentially (or even in the same general disk region). When data blocks are widely scattered in an uncoordinated manner, it can affect access times and increase I/O overhead.
Meta-information about files is stored in structures called index nodes, or inodes. Their contents vary based on the particular filesystem in use, but all inodes contain the following information about the file they index:15
Inode identification number
Owners: user and group
ctime: last file status change time
mtime: last data modification time16
atime: last access time
Physical location information for data blocks
Notice that the filename is not stored in the inode, but as an entry in the file's closest parent directory.
All other information about a file that ls displays is stored in an inode somewhere. With a few handy options, you can pull out lots of useful information. Let's say that you want to know the inode number of the Solaris kernel.17 You just give the -i option, and voilà:
[sun:10 ~]ls -i /kernel/genunix
Of course, ls -l is an old friend, telling you most everything that you want to know. Looking at the Solaris kernel again, you get the output in Figure 3.4.
Figure 3.4 Diagrammed Output of ls on a File
Notice that the timestamp shown by default in a long listing is mtime. You can pass various options to ls to view ctime and atime instead. For other nifty permutations, see the ls man page.
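As a quick sketch of those options (GNU and Solaris ls both behave this way), -u swaps atime into the timestamp column of a long listing and -c swaps in ctime:

```shell
ls -l  /etc/passwd   # default timestamp column: mtime
ls -lu /etc/passwd   # -u with -l: show atime instead
ls -lc /etc/passwd   # -c with -l: show ctime instead
```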
File Permissions and Ownership Refresher
Because UNIX was designed to support many users, the question naturally arises of who can see which files. The first and simplest answer is to permit users to examine only their own files. This, of course, would make it difficult, if not impossible, to share, creating real difficulties in collaborative environments and causing a string of other problems: "Why can't I run ls? Because the system created it, not you" is only the most obvious example of such problems.
Users and Groups
UNIX uses a three-part system to determine file access: there's what you, as the file owner, are allowed to do; there's what the group is allowed to do; and there's what other people are allowed to do. Let's see what Elvis's permissions look like:
[ elvis@frogbog elvis ]$ ls -l
drwxr-xr-x 5 elvis users 4096 Dec 9 21:55 Desktop
drwxr-xr-x 2 elvis users 4096 Dec 9 22:00 Mail
-rw-r--r-- 1 elvis users 36 Dec 9 22:00 README
-rw-r--r-- 1 elvis users 22 Dec 9 21:59 ThisFile
drwxr-xr-x 2 elvis users 4096 Dec 12 19:57 arc
drwxr-xr-x 2 elvis users 4096 Dec 10 00:40 songs
-rw-r--r-- 1 elvis users 46 Dec 12 19:52 tao.txt
-rw-r--r-- 1 elvis users 21 Dec 9 21:59 thisfile
-rw-r--r-- 1 elvis users 45 Dec 12 19:52 west.txt
As long as we're here, let's break down exactly what's being displayed. First, we have a 10-character string of letters and hyphens. This is the representation of permissions, which I'll break down in a minute. The second item is a number, usually a single digit: this is the number of hard links to that file, which I'll discuss later in this chapter. The third thing is the username of the file owner, and the fourth is the name of the file's group. The fifth column is a number representing the size of the file, in bytes. The sixth contains the date and time of last modification for the file, and the final column shows the filename.
Every user on the system has a username and a number that is associated with that user. This number generally is referred to as the UID, short for user ID. If a user has been deleted but, for some reason, his files remain, the username is replaced with that user's UID. Similarly, if a group is deleted but still owns files, the GID (group number) shows up instead of a name in the group field. There are also other circumstances in which the system can't correlate the name and the number, but these should be relatively rare occurrences.
As a user, you can't change the owner of your files: this would open up some serious security holes on the system. Only root can chown files, but if he makes a mistake, you can ask root to chown the files back to you. As a user, you can chgrp a file to a different group of which you are a member. That is, if Elvis is a member of a group named users and a group named elvis, he can chgrp elvis west.txt or chgrp users west.txt, but because he's not a member of the group beatles, he can't chgrp beatles west.txt. A user can belong to any number of groups. Generally (although this varies slightly by flavor), files created belong to the group to which the directory belongs. On most modern UNIX variants, the group that owns new files is whatever group is listed as your primary group by the system in the /etc/passwd file, and this can be changed via the newgrp command. On these systems, Elvis can newgrp users if he wants his files to belong to the users group, or he can newgrp elvis if he wants his files to belong to the elvis group.
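The member-of-the-group rule above is easy to try out. A minimal sketch, assuming a Linux shell with GNU coreutils; demo.txt is just a scratch filename:

```shell
# You can only chgrp a file to a group you belong to; id -Gn lists them.
touch demo.txt
id -Gn                       # every group this user may assign
chgrp "$(id -gn)" demo.txt   # the primary group, so always permitted
stat -c '%G' demo.txt        # confirm the file's group owner
rm -f demo.txt
```

Trying chgrp with a group that is not in the id -Gn list fails with "Operation not permitted", exactly as in the beatles example.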
So, what were those funny strings of letters and hyphens at the beginning of each long directory listing? I already said that they represented the permissions of the file, but that's not especially helpful. The 10 characters of that string represent the permission bits for each file. The first character is separate, and the last nine are three very similar groups of three characters. I'll explain each of these in turn.
If you look back to Elvis's long listing of his directory, you'll see that most of the files simply have a hyphen as the first character, whereas several have a d in this field. The more astute reader might note that the files with a d in that first field all happen to be directories. There's a good reason for this: the first permissions character denotes whether that file is a special file of one sort or another.
What's a special file? It's either something that isn't really a file (in the sense of a sequential stream of bytes on a disk) but that UNIX treats as a file, such as a disk or a video display, or something that is really a file but that is treated differently. A directory, by necessity, is a stream of bytes on disk, but that d means that it's treated differently.
The next three characters represent what the user who owns the file can do with it. From left to right, these permissions are read, write, and execute. Read permission is just that: the ability to see the contents of a file. Write permission implies not only the right to change the contents of a file, but also the right to delete it. If I do not have write permission to a file, rm not_permitted.txt fails.
Execute permission determines whether the file is also a command that can be run on the system. Because UNIX sees everything as a file, all commands are stored in files that can be created, modified, and deleted like any other file. The computer then needs a way to tell what can and can't be run. The execute bit does this.
Another important reason that you need to worry about whether a file is executable is that some programs are designed to be run only by the system administrator: these programs can modify the computer's configuration or can be unsafe in some other way. Because UNIX enables you to specify permissions for the owner, the group, and other users, the execute bit enables the administrator to restrict the use of unsafe programs.
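A minimal sketch of the execute bit in action, assuming a POSIX shell; hello.sh is a throwaway filename:

```shell
# A command is just a file; it only runs once its execute bit is set.
cat > hello.sh <<'EOF'
#!/bin/sh
echo "hello"
EOF
./hello.sh 2>/dev/null || echo "not executable yet"   # fails: no x bit
chmod u+x hello.sh                                    # grant owner execute
./hello.sh                                            # now prints: hello
rm -f hello.sh
```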
Directories treat the execute permission differently. If a directory does not have execute permission, that user (or group, or other users on the system) can't cd into that directory and can't look at information about the files in that directory. (You usually can find the names of the files, however.) Even if you have permissions for the files in that directory, you generally can't look at them. (This varies slightly by platform.)
The second set of three characters is the group permissions (read, write, and execute, in that order), and the final set of three characters is what other users on the system are permitted to do with that file. Because of security concerns (either due to other users on your system or due to pervasive networks such as the Internet), giving write access to other users is highly discouraged.
Great, you can now read the permissions in the directory listing, but what can you do with them? Let's say that Elvis wants to make his directory readable only by himself. He can chmod go-rwx ~/songs: that means remove the read, write, and execute permissions for the group and others on the system. If Elvis decides to let Nashville artists take a look at his material but not change it (and if there's a group nashville on the system), he can first chgrp nashville songs and then chmod g+r songs.
If Elvis does this, however, he'll find that (at least, on some platforms) members of group nashville still can't look at his songs. Oops! With a simple chmod g+x songs, the problem is solved:
[ elvis@frogbog elvis ]$ ls -l
drwxr-xr-x 5 elvis users 4096 Dec 9 21:55 Desktop
drwxr-xr-x 2 elvis users 4096 Dec 9 22:00 Mail
-rw-r--r-- 1 elvis users 36 Dec 9 22:00 README
-rw-r--r-- 1 elvis users 22 Dec 9 21:59 ThisFile
drwxr-xr-x 2 elvis users 4096 Dec 12 19:57 arc
drwxr-x--- 2 elvis nashvill 4096 Dec 15 14:21 songs
-rw-r--r-- 1 elvis users 46 Dec 12 19:52 tao.txt
-rw-r--r-- 1 elvis users 21 Dec 9 21:59 thisfile
-rw-r--r-- 1 elvis users 45 Dec 12 19:52 west.txt
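Elvis's sequence can be replayed with any scratch directory. This sketch uses symbolic modes only (a nashville group won't exist on most systems) and GNU coreutils stat to show the resulting numeric mode:

```shell
mkdir -p songs
chmod u=rwx,go= songs   # same effect as chmod go-rwx on a default 755 directory
stat -c '%a' songs      # 700: the owner only
chmod g+rx songs        # group can now list (r) and enter (x) the directory
stat -c '%a' songs      # 750: the drwxr-x--- shown in the listing above
rmdir songs
```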
In addition to the read, write, and execute bits, there exist special permissions used by the system to determine how and when to suspend the normal permission rules. Any thorough understanding of UNIX requires an understanding of the setuid, setgid, and sticky bits. For regular system users, only a general understanding of these is necessary, and this discussion is thus brief. Good documentation on this topic exists elsewhere for budding system administrators and programmers.
The setuid bit applies only to executable files and directories. In the case of executable programs, it means that the given program runs as though the file owner were running it. For example, xhextris, a variant on Tetris, has the following permissions on my system:
1 games games 32516 May 18 1999 /usr/X11R6/bin/xhextris
There's a pseudouser called games on the system, which can't be logged into and has no home directory. When the xhextris program executes, it can read and write files that only the games pseudouser normally would be permitted to. In this case, there's a high-score file stored on the system that is writeable only by that user. When Elvis runs the game, the system acts as though he were the user games, and thus he is able to update the high-score file. To set the setuid bit on a file, you can tell chmod to give it mode u+s. (You can think of this as uid set, although this isn't technically accurate.)
The setgid bit, which stands for "set group ID," works almost identically to setuid, except that the system acts as though the user's group is that of the given file. If xhextris had used setgid games instead of setuid games, the high-score file would be written with the privileges of the group games rather than the user games. It is used by the system administrator in ways fundamentally similar to the setuid permission.
When applied to directories on Linux, Irix, and Solaris (and probably most other POSIX-compliant UNIX flavors as well), the setgid bit means that new files are given the parent directory's group rather than the user's primary or current group. This can be useful for, say, a directory for fonts built by (and for) a given program. Any user might generate the fonts via a setgid command that writes to a setgid directory. setgid on directories varies by platform; check your documentation. To set the setgid bit, you can tell chmod to use g+s (gid set).
Although a file in a group- or world-writeable directory without the sticky bit can be deleted by anyone with write permission for that directory (user, group, or other), a file in a directory with the sticky bit set can be deleted only by either the file's owner or root. This is particularly useful for creating temporary directories or scratch space that can be used by anyone without one's files being deleted by others. You can set permission +t in chmod to give something the sticky bit.
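A quick sketch of a /tmp-style directory: world-writeable with the sticky bit set (stat -c is GNU coreutils syntax):

```shell
# Build a shared scratch directory like /tmp.
mkdir -p scratch
chmod 1777 scratch     # or: chmod a+rwx,+t scratch
ls -ld scratch         # drwxrwxrwt: the trailing t is the sticky bit
stat -c '%a' scratch   # 1777
rmdir scratch
```

Compare ls -ld /tmp on any Linux or Solaris box: you should see the same drwxrwxrwt string.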
Like almost everything else on UNIX, permissions fill a number associated with them. It's generally considered that permissions are a group of four digits, each between 0 and 7. Each of those digits represents a group of three permissions, each of which is a yes/no answer. From left to right, those digits limn special permissions, user permissions, group permissions, and other permissions.
So, About Those Permission Bits...
Most programs reading permission bits expect four digits, although often only three are given. Shorter numbers are filled in with leading zeros: 222 is treated as 0222, and 5 is treated as 0005. The three rightmost digits are, as previously mentioned, user (owner) permissions, group permissions, and other permissions, from left to right.
Each of these digits is calculated in the following manner: read permission has a value of 4, write permission has a value of 2, and execute permission has a value of 1. Simply add these values together, and you've got that permission value. Read, write, and execute would be 7; read and write without execute would be 6; and no permission to do anything would be 0. Read, write, and execute for the file owner, with read and execute for the group and nothing at all for anyone else, would be 750. Read and write for the user and group, but only read for others, would be 664.
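The arithmetic above can be sketched directly in the shell:

```shell
# Each digit is the sum of r=4, w=2, x=1 for one of user/group/other.
user=$((4 + 2 + 1))   # rwx -> 7
group=$((4 + 1))      # r-x -> 5
other=0               # --- -> 0
echo "${user}${group}${other}"   # prints 750
```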
The special permissions are 4 for setuid, 2 for setgid, and 1 for sticky. This digit is prepended to the three-digit numeric permission: a temporary directory with the sticky bit and read, write, and execute permission for everyone would be mode 1777. A setuid root program writeable by nobody else would be 4700. You can use chmod to set numeric permissions directly, as in chmod 1777 /tmp.
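A small sketch of a full four-digit mode on a throwaway file (stat -c is GNU coreutils syntax; demo_prog is just a scratch name):

```shell
# 4755 = setuid (4) prepended to rwxr-xr-x (755).
touch demo_prog
chmod 4755 demo_prog
ls -l demo_prog          # the owner's x shows as s: -rwsr-xr-x
stat -c '%a' demo_prog   # 4755
rm -f demo_prog
```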
In addition to a more precise utilize of chmod, numeric permissions are used with the umask command, which sets the default permissions. More precisely, it "masks" the default permissions: The umask value is subtracted from the maximum workable settings.* umask deals only with the three-digit permission, not the full-fledged four-digit value. A umask of 002 or 022 is most commonly the default. 022, subtracted from 777, is 755: read, write, and execute for the user, and read and execute for the group and others. 002 from 777 is 775: read, write, and execute for the user and group, and read and execute for others. I mind to set my umask to 077: read, write, and execute for myself, and nothing for my group or others. (Of course, when working on a group project, I set my umask to 007: My group and I can read, write, or execute anything, but others can't conclude anything with their files.)
You should note that the umask assumes that the execute bit on the file will be set. All umasks are subtracted from 777 rather than 666, and those extra execute bits are subtracted later, if necessary. (See Appendix B for more details on permission bits and the workings of umask.)
*Actually, the permission bits are XORed with the maximum possible settings, if you're a computer science type.
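A small sketch of umask in action (POSIX shell assumed; note that new plain files start from 666, so execute bits never appear on them):

```shell
# umask masks the default permissions; run in a subshell so your own umask is untouched.
(
  umask 022
  tmp=$(mktemp -d)
  touch "$tmp/file"            # 666 masked by 022 -> 644 (rw-r--r--)
  mkdir "$tmp/dir"             # 777 masked by 022 -> 755 (rwxr-xr-x)
  ls -ld "$tmp/file" "$tmp/dir"
  rm -rf "$tmp"
)
```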
Also notice that the first character prepended to the permissions string in ls -l output indicates the file type. This is one handy way of identifying a file's type. Another is the file command, as shown in Table 3.3.
Table 3.3 ls File Types and file Output Sample

ls File Type Character    file Output Example

d  Directory
[linux:9 ~] file /usr
/usr: directory

b  Block special device
[linux:10 ~] file /dev/hda1
/dev/hda1: block special (3/1)
[sun:10 ~] file /dev/dsk/c0t0d0s0
/dev/dsk/c0t0d0s0: block special (136/0)

c  Character special device
[linux:11 ~] file /dev/tty0
/dev/tty0: character special (4/0)
[sun:11 ~] file /dev/rdsk/c0t0d0s0
/dev/rdsk/c0t0d0s0: character special (136/0)

s  UNIX domain socket
[linux:12 ~] file /dev/log
/dev/log: socket
[sun:12 ~] file /dev/ccv
/dev/ccv: socket

p  Named pipe special (FIFO device)
[linux:13 ~] file /dev/initctl
/dev/initctl: fifo (named pipe)
[sun:13 ~] file /etc/utmppipe
/etc/utmppipe: fifo

l  Symbolic link
[linux:14 ~] file /usr/tmp
/usr/tmp: symbolic link to ../var/tmp
[sun:14 ~] file -h /usr/tmp
/usr/tmp: symbolic link to ../var/tmp

-  Regular file
[linux:15 ~] file /etc/passwd
/etc/passwd: ASCII text
[linux:15 ~] file /boot/vmlinux-2.4.2-2
/boot/vmlinux-2.4.2-2: ELF 32-bit LSB executable, Intel 80386, version 1, statically linked, not stripped
[linux:15 ~] file /etc/rc.d/init.d/sshd
/etc/rc.d/init.d/sshd: Bourne-Again shell script text executable
[sun:15 ~] file /etc/passwd
/etc/passwd: ascii text
[sun:15 ~] file /kernel/genunix
/kernel/genunix: ELF 32-bit MSB relocatable SPARC Version 1
[sun:15 ~] file /etc/init.d/sshd
Notice the in-depth information that file gives; in many cases, it shows details about the file that no other command will readily display (such as what kind of executable the file is). These low-level details are beyond the scope of our discussion, but the man page has more information.
Important Points About the file Command
file tries to figure out what type a file is based on three kinds of tests:
The file type that the ls -l command returns.
The presence of a magic number at the beginning of the file identifying the file type. These numbers are defined in the file /usr/share/magic in Red Hat Linux 7.1 and /usr/lib/locale/locale/LC_MESSAGES/magic (or /etc/magic) in Solaris 8. Typically, only binary files have magic numbers.
In the case of a regular/text file, the first few bytes are tested to determine the type of text representation and then to determine whether the file has a recognized purpose, such as C code or a Perl script.
file actually opens the file and changes the atime in the inode.
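To see a magic number for yourself, you can dump the first few bytes of a known binary (a sketch assuming a Linux box where /bin/ls is an ELF executable):

```shell
# The first four bytes of any ELF binary are 0x7f 'E' 'L' 'F'.
od -An -c -N4 /bin/ls    # prints: 177   E   L   F
file /bin/ls             # file recognizes the same magic and reports an ELF executable
```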
Inode lists are maintained by the filesystem itself, including the list of which inodes are free for use. Inode allocation and manipulation is completely transparent to both sysadmins and users.
Inodes become significant to the sysadmin at two times: at filesystem creation time and when the filesystem runs out of free inodes. At filesystem creation time, the total number of inodes for the filesystem is allocated. Although they are not yet in use, space is set aside for them. You cannot add any more inodes to a filesystem after it has been created. When you run out of inodes, you must either free some up (by deleting or moving files) or migrate to another, larger filesystem.
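You can keep an eye on inode consumption with df, which has an inode-reporting mode (GNU df shown; the Solaris variant is an assumption based on its option style):

```shell
# Report inode usage rather than block usage (-i = inodes on GNU df).
df -i /
# On Solaris, "df -o i /" reports iused/ifree figures for ufs filesystems.
```

The IUsed and IFree columns show how many inodes are allocated and how many remain.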
Without inodes, files are just a random assortment of ones and zeros on the disk. There is no guarantee that the file will be stored sequentially within a sector or track, so without an inode to point the way to the data blocks, the file is lost. In fact, every file is uniquely identified by the combination of its filesystem name and inode number.
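You can watch this identification at work with ls -i, which prints inode numbers (GNU tools assumed; the file names are made up):

```shell
# Two directory entries, one inode: a hard link is just another name for the same file.
tmp=$(mktemp -d)
echo "some data" > "$tmp/original"
ln "$tmp/original" "$tmp/alias"                  # hard link, not a symlink
ls -i "$tmp"                                     # both entries show the same inode number
stat -c '%i %h %n' "$tmp/original" "$tmp/alias"  # same inode number, link count 2
rm -rf "$tmp"
```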
See Appendix B for more detailed information on the exact content of inodes and their structure.
Linux has a very useful command called stat that dumps the contents of an inode in a tidy format:
[linux:9 ~] stat .
  Size: 16384          Filetype: Directory
  Mode: (0755/drwxr-xr-x)   Uid: (19529/robin)   Gid: (20/users)
Device:  0,4   Inode: 153288707   Links: 78
Access: Sun Jul 22 13:58:29 2001 (00009.04:37:59)
Modify: Sun Jul 22 13:58:29 2001 (00009.04:37:59)
Change: Sun Jul 22 13:58:29 2001 (00009.04:37:59)
Boot Block and Superblock
When a filesystem is created, two structures are automatically created, whether they are immediately used or not. The first is called the boot block, where boot-time information is stored. Because a partition may be made bootable at will, this structure needs to be available at all times.
The other structure, of more interest here, is the superblock. Just as an inode contains meta-information about a file, a superblock contains meta-information about a filesystem. Some of the more critical contents are listed here:18
Timestamp: last update
Superblock state flag
Filesystem state flag: clean, stable, active
Number of free blocks
List of free blocks
Pointer to next free block
Size of inode list
Number of free inodes
List of free inodes
Pointer to next free inode
Lock fields for free blocks and inodes
Summary data block
And you thought inodes were complex.
The superblock keeps track of free file blocks and free inodes so that the filesystem can store new files. Without these lists and pointers, a long, sequential search would have to be performed to find free space every time a file was created.
In much the very artery that files without inodes are lost, filesystems without intact superblocks are inaccessible. That's why there is a superblock condition flag—to betoken whether the superblock was properly and completely updated before the disk (or system) was last taken offline. If it was not, then a consistency check must live performed for the gross filesystem and the results stored back in the superblock.
Again, more detailed information about the superblock and its role in UNIX filesystems can be found in Appendix B.
Both Red Hat and Solaris recognize a variety of different filesystem types, although you will generally end up using and supporting just a few. There are three standard types of filesystem (local, network, and pseudo) and a fourth "super-filesystem" type that is actually losing ground, given the size of modern disks.
Local filesystems are common to every system that has its own local disk.19 Although there are many instances of this type of filesystem, they are all designed to work within a single system, managing the components discussed in the last section and interfacing with the physical drive(s).
Only a few local filesystems are specifically designed to be cross-platform (and sometimes even cross-OS-type). They come in handy, though, when you have a nondisk hardware failure; you can just take the disk and put it into another machine to retrieve the data.20 The UNIX File System, or ufs, was designed for this; both Solaris and Red Hat Linux machines can use disks with this filesystem. Note that Solaris uses ufs filesystems by default. Red Hat's default local filesystem is ext2.
Another local, cross-platform filesystem is ISO9660, the CD-ROM standard. This is why you can read your Solaris CD in a Red Hat box's reader.
Local filesystems come in two related but separate flavors. The original, standard model of filesystem is still in wide use today. The newer journaling filesystem type is just beginning to really come into its own. The major difference between the two types is the way they track changes and do integrity checks.
Standard, nonjournaling filesystems rely on flags in the superblock for consistency regulation. If the superblock flag is not set to "clean," then the filesystem knows that it was not shut down properly: not all write buffers were flushed to disk, and so on. Inconsistency in a filesystem means that allocated inodes could be overwritten and free inodes could be counted as in use; in short, rampant file corruption and mass hysteria.
But there is a filesystem integrity checker to save the day: fsck. This command is usually invoked automatically at boot time to verify that all filesystems are clean and stable. If the / or /usr filesystems are inconsistent, the system might prompt you to start up a miniroot shell and manually run fsck. A few of the more critical items checked and corrected are listed here:
Unclaimed blocks and inodes (not in free list or in use)
Unreferenced but allocated blocks and inodes
Multiply claimed blocks and inodes
Bad inode formats
Bad directory formats
Bad free block or inode list formats
Incorrect free block or inode counts
Superblock counts and flags
Note that a filesystem should be unmounted before running fsck (see the later section "Administering Local Filesystems"). Running fsck on a mounted filesystem might cause a system panic and crash, or it might simply refuse to run at all. It's also best, though not required, to run fsck on the raw device when possible. See the man page for more details and options.
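If you want to experiment with fsck without touching a real disk, you can run it against a filesystem image in an ordinary file (a sketch assuming a Linux box with e2fsprogs installed; the image path is temporary and made up):

```shell
# Build a small ext2 filesystem inside a file, then check it read-only.
img=$(mktemp)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
mke2fs -q -F "$img"      # -F: the target is a regular file, not a device
e2fsck -fn "$img"        # -f: force a full check; -n: read-only, never repair
rm -f "$img"
```

Because the image is never mounted, none of the warnings about checking live filesystems apply.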
So where does fsck put orphans, the blocks and inodes that are clearly in use but aren't referenced anywhere? Enter the lost+found directories. There is always a /lost+found directory on every system; other directories acquire them as fsck finds orphans in their purview. fsck automatically creates the directories as needed and renames the lost blocks into them by inode number. See the man pages "mklost+found" on Red Hat and "fsck_ufs" on Solaris.
Journaling filesystems do away with fsck and its concomitant superblock structures. All filesystem state information is internally tracked and monitored, in much the same way that database systems set up checkpoints and self-verifications.
With journaling filesystems, you have a better chance of full data recovery in the event of a system crash. Even unsaved data in buffers can be recovered thanks to the internal log.21 This kind of fault tolerance makes journaling filesystems useful in high-availability environments.
The drawback, of course, is that when a filesystem like this gets corrupted somehow, it presents major difficulties for recovery. Most journaling filesystems provide their own salvaging programs for use in case of emergency. This underscores how critical backups are, no matter what kind of filesystem software you've invested in. See Chapter 16, "Backups," for more information.
One of the earliest journaling filesystems is still a commercial venture: VxFS by Veritas. Another pioneer has decided to release its software under GPL22 licensing: JFS23 by IBM. SGI's xfs journaling filesystem has been freely available under the GPL since about 1999, although it is only designed to work under IRIX and Linux.24
Maintaining filesystem state incurs overhead when using journaling filesystems. As a result, these filesystems perform suboptimally at small filesystem sizes. Generally, journaling filesystems are appropriate for filesystems of 500MB or more.
Network-based filesystems are really add-ons to local filesystems, because the file server must have the actual data stored in one of its own local filesystems.25 Network filesystems have both a server and a client program.
The server usually runs as a daemon on the system that is sharing disk space. The server's local filesystems are unaffected by this extra process. In fact, the daemon generally only puts a few messages in the syslog and is otherwise only visible through ps.
The system that wants to access the server's disk space runs the client program to mount the shared filesystems across the network. The client program handles all the I/O so that the network filesystem behaves just like a local filesystem from the client machine's point of view.
The old standby for network-based filesystems is the Network File System (NFS). The NFS standard is currently up to revision 3, though there are quite a number of implementations with their own version numbers. Both Red Hat and Solaris come standard with NFS client and server packages. For more details on the inner workings and configuration of NFS, see Chapter 13, "File Sharing."
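As a sketch of the client side (the server name fileserver and the export /export/home are hypothetical; an actual mount requires a running NFS server and an empty mount point):

```shell
# Hypothetical NFS mounts; "fileserver" and /export/home do not exist here.
mount -t nfs fileserver:/export/home /mnt/home    # Red Hat syntax (-t selects the type)
mount -F nfs fileserver:/export/home /mnt/home    # Solaris syntax (-F selects the type)
umount /mnt/home                                  # detach when finished
```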
Other network-based filesystems include AFS (IBM's Andrew File System) and DFS/DCE (Distributed File System, part of the Open Group's Distributed Computing Environment). The mechanisms of these advanced filesystems go beyond the scope of this book, although their goal is still the same: to efficiently share files across the network, transparently to the user.
Pseudofilesystems are an interesting development in that they are not actually related to disk-based partitions. They are instead purely logical constructs that represent information and meta-information in a hierarchical structure. Because of this structure, and because they can be manipulated with the mount command, they are still referred to as filesystems.
The best example of a pseudofilesystem exists on both Red Hat and Solaris systems: /proc. Under Solaris, /proc is restricted to just managing process information:
[sun:1 ~] ls /proc
0    145  162  195  206  230  262  265  272  286  299  303  342  370  403  408  672  752
1    155  185  198  214  243  263  266  278  292  3    318  360  371  404  52   674
142  157  192  2    224  252  264  268  280  298  302  319  364  400  406  58   678
Note that these directories are all named according to the process numbers corresponding to what you would find in the output of ps. The contents of each directory are the various pieces of meta-information that the system needs to manage the process.
Under Red Hat, /proc provides information about processes as well as about various system components and statistics:
[linux:1 ~] ls /proc
1      18767  23156  24484  25567  28163  4     493   674   8453  ksyms    stat
13557  18933  23157  24486  25600  3      405   5     675   9833  loadavg  swaps
13560  18934  23158  24487  25602  3050   418   5037  676   9834  locks    sys
13561  18937  23180  24512  25603  3051   427   5038  7386  9835  mdstat   tty
1647   19709  23902  24541  25771  3052   441   5054  7387  bus   meminfo  uptime
1648   19730  23903  24775  25772  30709  455   5082  7388  cmdline  misc  version
1649   19732  23936  25494  25773  30710  473   510   7414  cpuinfo  modules
16553  19733  24118  25503  25824  30712  485   5101  7636  devices  mounts
18658  2      24119  25504  25882  30729  486   524   7637  dma      mtrr
18660  21450  24120  25527  25920  320    487   558   7638  filesystems  net
18661  21462  24144  25533  26070  335    488   6     7662  fs       partitions
18684  21866  24274  25534  26071  337    489   670   8426  interrupts  pci
18685  21869  24276  25541  26072  338    490   671   8427  ioports  scsi
18686  21870  24277  25542  28161  339    491   672   8428  kcore    self
18691  21954  24458  25543  28162  365    492   673   8429  kmsg     slabinfo
Again we see the directories named for process numbers, but we also see directories with indicative names such as cpuinfo and loadavg. Because this is a hierarchical filesystem, you can cd into these directories and read the various files for their system information.
The most interesting thing about /proc is that it allows even processes to be treated like files.26 This means that pretty much everything in UNIX, whether it is something that just exists or something that actually happens, can now be considered a file.
For more information under Red Hat, type man proc. For more information under Solaris, type man -s 4 proc.
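A short sketch of reading /proc directly (Linux paths assumed; every file here is fabricated by the kernel at read time, not stored on disk):

```shell
# /proc files are generated on the fly; reading them queries the kernel.
head -1 /proc/meminfo    # system statistics, e.g. the MemTotal line
cat /proc/self/comm      # the name of the very process doing the reading
ls /proc/self/           # per-process meta-information, one file apiece
```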
Finally, there are the "super-filesystems," or logical volumes, that do what the other major types of filesystem cannot: surmount the barriers of partitions. You may well wonder why anyone would want to do that. There are two reasons. First, because disks used to be a lot smaller and more costly, you used what you had at hand. If you needed a large pool of disk space, logical volumes allowed you to aggregate remnants into something usable. Second, even with larger disks, you still might not be able to achieve the kind of disk space required by a particular researcher or program. Once again, logical volumes allow you to aggregate partitions across disks to form one large filesystem.
Crossing disk boundaries with a logical volume is referred to as disk spanning. Once you have logical volumes, you can also have some fairly complex data management methods and performance-enhancing techniques. Disk striping, for example, is a performance booster. Instead of sequentially filling one disk and then the next in series, it spreads the data in discrete chunks across disks, allowing better I/O response through parallel operations.
RAID27 implements logical volumes at 10 separate levels, with various features at each level. This implementation can be done either in hardware or in software, although the nomenclature for both is the same.28
Table 3.4 RAID Levels

RAID-1    Requires extra drives for data duplication
RAID-2    Requires three to five separate parity disks
RAID-3    Requires separate parity disk
RAID-4    Requires separate parity disk (very similar to RAID-3)
RAID-5    Rotating parity array; reconstruction by parity data (not duplication); slowest for writes, but good for reads
RAID-6    RAID-5 + secondary parity (very similar to RAID-5); not in wide use
RAID-7    RAID-5 + real-time embedded controller; not in wide use
RAID-10   RAID-0 array duplicated (mirrored), or each stripe is a RAID-1 (mirrored) array
RAID-53   Array of parity stripes; each stripe is a RAID-3 array
Clearly, the kind of complexity inherent in all logical volume systems requires some kind of back-end management system. Red Hat offers the Logical Volume Manager (LVM) as a kernel module. While the details of LVM are beyond the scope of this book, it is interesting to note that you can put any filesystem that you want on top of a logical volume. Start at http://www.linuxdoc.org/HOWTO/LVM-HOWTO.html for more details.
Although Sun offers logical volume management, it is through a for-pay program called Solstice DiskSuite. The filesystem on DiskSuite logical volumes must be ufs. For more information, start at http://docs.sun.com/ab2/coll.260.2/DISKSUITEREF.
Another commercial logical volume manager for Solaris comes from Veritas; see: http://www.veritas.com/us/products/volumemanager/faq.html#a24
The beauty of all logical volumes is that they appear to be just another local filesystem and are completely transparent to the user. However, logical volumes do add some complexity for the systems administrator, and the schema should be carefully documented on paper, in case it ever needs to be re-created.
Normally, a file server's disks are directly attached to the file server. With network-attached storage (NAS), the file server and the disks that it serves are separate entities, communicating over the local network. The storage disks require an aggregate controller that arbitrates file I/O requests from the external server(s). The server(s) and the aggregate controller each have separate network IP addresses. To serve the files to clients, a file (or application) server sends file I/O requests to the NAS aggregate controller and relays the results back to client systems.
NAS is touched on here for completeness; entire books can be written about NAS design and implementation. NAS does not really represent a type of filesystem; rather, it is a mechanism that relieves the file server of the details of hardware disk access by isolating them in the network-attached storage unit.
Red Hat Filesystem Reference Table
Table 3.5 lists major filesystems that currently support (or are supported by) Red Hat.29 The filesystem types that are currently natively supported are listed in /usr/src/linux/fs/filesystems.c.
Table 3.5 Filesystem Types and Purposes, with Examples (Red Hat)

Purpose                                            Specific Instances (as Used in /etc/fstab)
Red Hat default filesystem                         ext2
Journaling filesystem from IBM                     jfs
Journaling filesystem from SGI                     xfs
Windows compatibility: DOS                         msdos
Windows compatibility: NT                          ntfs
Windows compatibility: FAT-32                      vfat
Other supported types                              adfs, affs, coda, devpts, hfs, hpfs, minix, ncpfs, romfs, smbfs, udf, umsdos
Deprecated, pre-kernel 2.1.21                      ext
Network-based remote communication                 nfs
Store process (and other system) meta-information  proc
Solaris Filesystem Reference Table
Table 3.6 lists major filesystems that currently support (or are supported by) Solaris. The filesystem types that are currently natively supported are listed as directories under /usr/lib/fs.
Table 3.6 Filesystem Types and Purposes, with Examples (Solaris)

Purpose                                          Specific Instances (as Used in /etc/vfstab)
Solaris default filesystem; Red Hat-compatible   ufs
Journaling filesystem from IBM
Network-based remote communication               nfs
Store process metainformation                    proc
Other native types                               fdfs, swapfs, tmpfs
Mount metainformation areas as filesystems       mntfs, cachefs, lofs, fifofs, specfs, udfs, namefs