What is zpool in Solaris?

The file system itself is 128-bit, allowing for 256 quadrillion zettabytes of storage. All metadata is allocated dynamically, so there is no need to preallocate inodes or otherwise limit the scalability of the file system when it is first created.

All the algorithms have been written with scalability in mind. Directories can have up to 2^48 (256 trillion) entries, and no limit exists on the number of file systems or the number of files that can be contained within a file system.

A snapshot is a read-only copy of a file system or volume. Snapshots can be created quickly and easily. Initially, snapshots consume no additional disk space within the pool. As data within the active dataset changes, the snapshot consumes disk space by continuing to reference the old data. As a result, the snapshot prevents the data from being freed back to the pool. Most importantly, ZFS provides a greatly simplified administration model. Through the use of a hierarchical file system layout, property inheritance, and automatic management of mount points and NFS share semantics, ZFS makes it easy to create and manage file systems without requiring multiple commands or editing configuration files.
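
For instance, a snapshot can be taken, listed, and rolled back with one command each; the pool and file system names (geekpool/fs1) simply match the example used later in this article:

    # zfs snapshot geekpool/fs1@monday    # create a read-only snapshot of fs1
    # zfs list -t snapshot                # list snapshots; they consume no extra space initially
    # zfs rollback geekpool/fs1@monday    # revert fs1 to the state captured in the snapshot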

You can easily set quotas or reservations, turn compression on or off, or manage mount points for numerous file systems with a single command. You can examine or replace devices without learning a separate set of volume manager commands. You can send and receive file system snapshot streams. ZFS manages file systems through a hierarchy that allows for this simplified management of properties such as quotas, reservations, compression, and mount points, and the ZFS web-based management tool simplifies data management further. For example, after replacing a failed disk in a mirror, running the status sub-command will show that the mirror has been resilvered, that is, the data has been copied onto the new drive.
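
As a hedged illustration (the pool and device names here are placeholders), replacing a failed disk in a mirror and then checking the resilver might look like this:

    # zpool replace geekpool c1t1d0 c1t2d0   # swap the failed disk for a new one
    # zpool status geekpool                  # the mirror shows as resilvered once the copy completes
    # zfs set compression=on geekpool        # example of a one-command, pool-wide property change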

As we mentioned earlier, as well as mirroring we can implement RAID-5-style arrays, where a disk is used to hold parity information.

In the event of a failed drive, any data can be rebuilt from the parity information. If we use just two drives then this is similar to a mirror: we have one data drive and one drive's worth of parity. So with two drives we have no advantage over mirroring, but for each additional drive added we can make full use of the space. This would use the keyword raidz or raidz1.
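
A minimal raidz1 pool could be created as follows; the disk names are placeholders for whatever devices are free on your system:

    # zpool create geekpool raidz c1t0d0 c1t1d0 c1t2d0   # raidz and raidz1 are equivalent keywords
    # zpool status geekpool                               # shows the raidz1 vdev and its member disks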

If we wanted to use two parity drives, we could allow for up to two failed drives. We need a minimum of 3 drives for this, but should have 4 or more to have any advantage over mirroring; we would use raidz2 to set this up. If we wanted to allow for 3 failed drives, the maximum, then we could use raidz3. For this we would need a minimum of 4 drives, but practically would use a minimum of 5 to have any advantage over mirroring. Using three disks of equal size in a raidz1, roughly two disks' worth of space is available for data, with the equivalent of one disk used for parity.
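
The double- and triple-parity variants use the same syntax with a different keyword. These are alternative examples, not meant to be run together (a pool name can only exist once, so destroy or export the previous pool before trying another layout), and the disk names remain placeholders:

    # zpool create geekpool raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0           # survives any two disk failures
    # zpool create geekpool raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0    # survives any three disk failures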

We have to look at the file system space, not the pool space, as the pool space will show all the space available including the parity information. We have seen how we can create simple pools without fault tolerance with ZFS. This still simplifies our file system management, as so much of the legwork is done for us.
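
To see the difference, compare the two listings: zpool list reports the raw capacity of all disks including parity, while zfs list reports the space actually usable by the file systems:

    # zpool list geekpool     # raw pool capacity, parity space included
    # zfs list -r geekpool    # usable space as the file systems see it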

A ZFS mirror maintains a copy of all data in a different location from the original. When the checksum mechanism detects data corruption, the corrupted data is repaired by replacing it with the correct copy from the mirror.
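
A two-way mirror is created with the mirror keyword, and a scrub makes ZFS read every block, verify its checksum, and repair it from the good copy if needed (device names are again placeholders):

    # zpool create geekpool mirror c1t0d0 c1t1d0   # two-way mirror
    # zpool scrub geekpool                         # read and verify every block, self-healing on errors
    # zpool status -v geekpool                     # shows scrub progress and any repaired errors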

Once a pool has been exported (with zpool export), we can import it again, on the same system or on a different one. To know which pools can be imported, run the import sub-command without any options. As you can see in the output, each pool has a unique ID, which comes in handy when you have multiple pools with the same name.

In that case a pool can be imported using the pool ID. Pools whose devices are files rather than disks can also be imported: to see pools that are importable with files as their devices, we point the import command at the directory containing those files. Once we know which pool we want, we can import it by name or by ID, and, similar to export, we can force a pool import if needed. The commands below sketch the whole sequence.
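
This is only a rough sketch of the export/import workflow; the pool name, the /zfsfiles directory, and the <pool_id> placeholder are illustrative rather than taken from the original article:

    # zpool export geekpool             # detach the pool so it can be moved or re-imported
    # zpool import                      # list pools importable from the default device paths
    # zpool import geekpool             # import by name
    # zpool import <pool_id> newpool    # import by unique ID, optionally giving the pool a new name
    # zpool import -d /zfsfiles         # look for pools whose devices are files in this directory
    # zpool import -f geekpool          # force the import, e.g. if the pool was not cleanly exported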

Creating a ZFS file system

The best part about ZFS is that Oracle (or should I say Sun) has kept the commands for it pretty easy to understand and remember. To create a file system fs1 in an existing ZFS pool geekpool, we use the zfs create sub-command, shown below. Now, by default, when you create a file system in a pool it can take up all the space in the pool.
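
A minimal sketch, assuming the pool geekpool already exists:

    # zfs create geekpool/fs1    # create file system fs1 inside pool geekpool
    # zfs list                   # by default fs1 is mounted automatically at /geekpool/fs1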

So, to limit the usage of a file system, we define a reservation and a quota. Let us consider an example to understand quota and reservation. We also create a new file system fs2 without any quota or reservation.

So now, for fs1, the reserved space is guaranteed out of the 1 GB pool; no other file system can take it. fs1 can also grow up to its quota out of the pool, but only if that space is free. On the other hand, when you do a zfs list you will see the available space for the file system equal to the quota defined for it (if that space is not occupied by other file systems), not the reservation, as expected. To set the reservation and quota on fs1 as stated above, and to set a mount point for the file system, we use zfs set, as shown below.
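
A hedged sketch of those commands; the 200 MB reservation, 500 MB quota, and /mnt/fs1 mount point are illustrative values, not figures preserved from the original example:

    # zfs set reservation=200m geekpool/fs1     # guarantee fs1 this much of the pool (example value)
    # zfs set quota=500m geekpool/fs1           # cap fs1 at this size even if the pool has more free space (example value)
    # zfs get reservation,quota geekpool/fs1    # confirm the properties
    # zfs set mountpoint=/mnt/fs1 geekpool/fs1  # mount fs1 at /mnt/fs1 instead of the default path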


