Sun Solaris 10 Download x86 DVD ISO
Limits: up to 2^48 entries per directory; unlimited files per file system; maximum filename length 255 characters (fewer for multibyte encodings such as UTF-8). Features: forks (called 'extended attributes', but they are full-fledged streams); POSIX permissions and NFSv4 ACLs; transparent compression. ZFS is supported on Solaris and illumos distributions, FreeBSD, macOS (via third-party ports, originally read-only) and Linux (via a third-party kernel module or ZFS-FUSE). ZFS is a combined file system and logical volume manager designed by Sun Microsystems.
Jul 27, 2013. After installing VirtualBox on your PC, download the full DVD (ISO image) of Oracle Solaris 10 (x86) from Oracle (approximately a 2.1 GB download). Download the full DVD ISO image for x86 (not SPARC). Download Solaris 11. Create CDs, DVDs, or populate a USB drive with these images. Download templates for Oracle VM VirtualBox, for Oracle VM Server for SPARC or for x86, and for an Oracle Solaris 10 zone to run on Oracle Solaris 11. Download the PreFlight Checker for Applications and ORAchk Health Checks for the Oracle Stack.
The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of file system and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z, and native NFSv4 ACLs. The ZFS name is registered as a trademark of Oracle; although it was briefly given the expanded name 'Zettabyte File System', it is no longer considered an initialism. Originally, ZFS was proprietary, developed internally by Sun as part of Solaris, with a team led by the CTO of Sun's storage business unit and Sun Fellow, Jeff Bonwick. In 2005, the bulk of Solaris, including ZFS, was licensed as open-source software under the Common Development and Distribution License (CDDL), as the OpenSolaris project. ZFS became a standard feature of Solaris 10 in June 2006. In 2010, Oracle stopped releasing source code for new OpenSolaris and ZFS development, effectively forking its closed-source development from the open-source branch.
In response, the OpenZFS project was created as a new open-source development umbrella project, aiming to bring together individuals and companies that use the ZFS filesystem in an open-source manner.
ZFS compared to most other file systems

Historically, the management of stored data has involved two aspects: the physical management of block devices such as hard drives and SD cards, and of devices such as RAID controllers that present a single logical device based upon multiple physical devices (often undertaken by a volume manager, array controller, or suitable device driver), and the management of files stored as logical units on these logical block devices (a file system).
Example: a RAID array of two hard drives and an SSD caching disk is controlled by Intel's RST system, part of the chipset and firmware built into a desktop computer. The user sees this as a single volume, containing an NTFS-formatted drive of their data, and NTFS is not necessarily aware of the manipulations that may be required (such as rebuilding the RAID array if a disk fails). The management of the individual devices and their presentation as a single device is distinct from the management of the files held on that apparent device. ZFS is unusual because, unlike most other storage systems, it unifies both of these roles and acts as both the volume manager and the file system. Therefore, it has complete knowledge of both the physical disks and volumes (including their condition and status, their logical arrangement into volumes, and also of all the files stored on them). ZFS is designed to ensure (subject to suitable hardware) that data stored on disks cannot be lost due to physical errors or misprocessing by the hardware or operating system, or bit rot and data corruption events which may happen over time, and its complete control of the storage system is used to ensure that every step, whether related to file management or disk management, is verified, confirmed, corrected if needed, and optimized, in a way that storage controller cards and separate volume and file managers cannot achieve. ZFS also includes a mechanism for snapshots and replication, including snapshot cloning; the former is described by the documentation as one of its 'most powerful features', having features that 'even other file systems with snapshot functionality lack'.
Very large numbers of snapshots can be taken without degrading performance, allowing snapshots to be used prior to risky system operations and software changes, or an entire production ('live') file system to be fully snapshotted several times an hour, in order to mitigate data loss due to user error or malicious activity. Snapshots can be rolled back 'live', or previous states of the file system can be viewed, even on very large file systems, leading to 'tremendous' savings in comparison to formal backup and restore processes; snapshots can also be cloned 'on the spot' to form new independent file systems.

Summary of key differentiating features

Examples of features specific to ZFS which facilitate its objective include:
• Designed for long-term storage of data, and indefinitely scaled datastore sizes with zero data loss, and high configurability.
• Hierarchical checksumming of all data and metadata, ensuring that the entire storage system can be verified on use, and confirmed to be correctly stored, or remedied if corrupt. Checksums are stored with a block's parent, rather than with the block itself. This contrasts with many file systems where checksums (if held) are stored with the data, so that if the data is lost or corrupt, the checksum is also likely to be lost or incorrect.
• Can store a user-specified number of copies of data or metadata, or selected types of data, to improve the ability to recover from data corruption of important files and structures.
• Automatic rollback of recent changes to the file system and data, in some circumstances, in the event of an error or inconsistency.
• Automated and (usually) silent self-healing of data inconsistencies and write failures when detected, for all errors where the data is capable of reconstruction. Data can be reconstructed using all of the following: error detection and correction checksums stored in each block's parent block; multiple copies of data (including checksums) held on the disk; write intentions logged on the SLOG (ZIL) for writes that should have occurred but did not occur (after a power failure); parity data from RAID/RAID-Z disks and volumes; copies of data from mirrored disks and volumes.
• Native handling of standard RAID levels and additional ZFS RAID layouts ('RAID-Z'). The RAID-Z levels stripe data across only the disks required, for efficiency (many RAID systems stripe indiscriminately across all devices), and checksumming allows rebuilding of inconsistent or corrupted data to be minimised to those blocks with defects.
• Native handling of tiered storage and caching devices, which is usually a volume-related task. Because ZFS also understands the file system, it can use file-related knowledge to inform, integrate, and optimize its tiered storage handling, which a separate device cannot do.
• Native handling of snapshots and backup/replication, which can be made efficient by integrating the volume and file handling.
ZFS can routinely take snapshots of the data several times an hour, efficiently and quickly. (The relevant tools are provided at a low level and require external scripts and software for utilization.)
• Native data compression and deduplication, although the latter is largely handled in RAM and is memory hungry.
• Efficient rebuilding of RAID arrays – a RAID controller often has to rebuild an entire disk, but ZFS can combine disk and file knowledge to limit any rebuilding to data which is actually missing or corrupt, greatly speeding up rebuilding.
• Ability to identify data that would have been found in a cache but has been discarded recently instead; this allows ZFS to reassess its caching decisions in light of later use and facilitates very high cache hit levels.
• Alternative caching strategies can be used for data that would otherwise cause delays in data handling. For example, synchronous writes which are capable of slowing down the storage system can be converted to asynchronous writes by being written to a fast separate caching device, known as the SLOG (sometimes called the ZIL – ZFS Intent Log).
• Highly tunable – many internal parameters can be configured for optimal functionality.
• Can be used for high-availability clusters and computing, although not fully designed for this use. (Several of these features are illustrated in the command sketch below.)
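As a concrete illustration of several of these points (pooled storage, native RAID-Z, per-dataset properties and low-cost snapshots), the commands below sketch how a small pool might be created and exercised. This is a minimal sketch assuming a current OpenZFS installation; the pool name 'tank', the dataset name and the device names are hypothetical.

    # Create a pool named "tank" from a RAID-Z1 group of three disks
    # (ZFS acts as both the volume manager and the file system).
    zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc

    # Create a file system inside the pool and enable LZ4 compression for it
    zfs create tank/data
    zfs set compression=lz4 tank/data

    # Take an instant, space-efficient snapshot before a risky change,
    # and roll back to it if the change goes wrong
    zfs snapshot tank/data@before-upgrade
    zfs rollback tank/data@before-upgrade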
Inappropriately specified systems

Unlike many file systems, ZFS is intended to work in a specific way and towards specific ends; it expects, or is designed with the assumption of, a specific kind of hardware environment.
If the system is not suitable for ZFS, then ZFS may underperform significantly. The Calomel site stated in its 2017 ZFS benchmarks that: 'On mailing lists and forums there are posts which state ZFS is slow and unresponsive. We have shown in the previous section you can get incredible speeds out of the file system if you understand the limitations of your hardware and how to properly setup your raid. We suspect that many of the objectors of ZFS have setup their ZFS system using slow or otherwise substandard I/O subsystems.'

Common system design failures:
• Inadequate RAM – ZFS may use a large amount of memory in many scenarios.
• Inadequate disk free space – ZFS uses copy-on-write for data storage; its performance may suffer if the disk pool gets too close to full. Around 70% usage is a recommended limit for good performance.
Above a certain percentage, typically set to around 80%, ZFS switches to a space-conserving rather than speed-oriented approach, and performance plummets as it focuses on preserving working space on the volume.
• No efficient dedicated SLOG device when synchronous writing is prominent – this is notably the case for NFS and ESXi; even SSD-based systems may need a separate SLOG device for the expected performance. The SLOG device is only used for writing, apart from when recovering from a system error. It can often be small (for example, in FreeNAS, the SLOG device only needs to store the largest amount of data likely to be written in about 10 seconds, or the size of two 'transaction groups'), although it can be made larger to allow a longer lifetime of the device. SLOG is therefore unusual in that its main criteria are pure write functionality, low latency, and loss protection – usually little else matters.
• Lack of suitable caches, or misdesigned caches – for example, ZFS can cache read data in RAM ('ARC') or on a separate device ('L2ARC'); in some cases adding extra ARC is needed, in other cases adding extra L2ARC is needed, and in some situations adding extra L2ARC can even degrade performance, by forcing RAM to be used to index the slower L2ARC, at the cost of less room for data in the ARC.
• Use of hardware RAID cards, perhaps in the mistaken belief that these will 'help' ZFS. While routine for other file systems, ZFS handles RAID natively and is designed to work with a raw and unmodified view of storage devices, so it can fully use its functionality. A separate RAID card may leave ZFS less efficient and reliable. For example, ZFS checksums all data, but most RAID cards will not do this as effectively, or for cached data.
Separate cards can also mislead ZFS about the state of data, for example after a crash, or by mis-signalling exactly when data has safely been written, and in some cases this can lead to issues and data loss. Separate cards can also slow down the system, sometimes greatly, by adding latency to every data read/write operation, or by undertaking full rebuilds of damaged arrays where ZFS would have only needed to do minor repairs taking a few seconds.
• Use of poor-quality components – Calomel identify poor-quality RAID and network cards as common culprits for low performance.
• Poor configuration/tuning – ZFS options allow for a wide range of tuning, and mis-tuning can affect performance. For example, suitable memory caching parameters for file shares over NFS are likely to be different from those required for block access shares using iSCSI and Fibre Channel. A memory cache that would be appropriate for the former can cause errors and start-stop issues as data caches are flushed; because the time permitted for a response is likely to be much shorter on these kinds of connections, the client may believe the connection has failed if there is a delay due to 'writing out' a large cache. Similarly, an inappropriately large in-memory write cache can cause 'freezing' on file share protocols, even when the connection does not time out.
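As a rough illustration of how some of these conditions can be checked on a running system (the pool name 'tank' is hypothetical, and the arc_summary helper ships with some, but not all, OpenZFS packagings):

    # Check pool fill level, fragmentation and health;
    # performance degrades noticeably as capacity approaches ~80%
    zpool list tank
    zpool status tank

    # Inspect the ARC (RAM read cache) size and hit rate, where the helper is available
    arc_summary | head -n 40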
Data integrity

One major feature that distinguishes ZFS from other file systems is that it is designed with a focus on data integrity, by protecting the user's data on disk against silent data corruption caused by data degradation, power spikes, bugs in disk firmware, phantom writes (the previous write did not make it to disk), misdirected reads/writes (the disk accesses the wrong block), DMA parity errors between the array and server memory or from the driver (since the checksum validates data inside the array), driver errors (data winds up in the wrong buffer inside the kernel), accidental overwrites (such as swapping to a live file system), etc. A 2012 study showed that neither any of the then-major and widespread file systems (such as UFS, Ext, XFS, JFS, or NTFS) nor hardware RAID (which has its own issues with data integrity) provided sufficient protection against data corruption problems. Initial research indicates that ZFS protects data better than earlier efforts. It is also faster than UFS and can be seen as its replacement.

ZFS data integrity

For ZFS, data integrity is achieved by using a Fletcher-based checksum or a SHA-256 hash throughout the file system tree.
Each block of data is checksummed and the checksum value is then saved in the pointer to that block, rather than at the actual block itself. Next, the block pointer is checksummed, with the value being saved at its pointer. This checksumming continues all the way up the file system's data hierarchy to the root node, which is also checksummed, thus creating a Merkle tree. In-flight data corruption or phantom reads/writes (where the data written/read checksums correctly but is actually wrong) are undetectable by most file systems, as they store the checksum with the data. ZFS stores the checksum of each block in its parent block pointer, so the entire pool self-validates. When a block is accessed, regardless of whether it is data or metadata, its checksum is calculated and compared with the stored checksum value of what it 'should' be.
If the checksums match, the data are passed up the programming stack to the process that asked for them; if the values do not match, then ZFS can heal the data if the storage pool provides data redundancy (such as with internal mirroring), assuming that the copy of the data is undamaged and with matching checksums. It is optionally possible to provide additional in-pool redundancy by specifying copies=2 (or copies=3 or more), which means that data will be stored twice (or three times) on the disk, effectively halving (or, for copies=3, reducing to one third) the storage capacity of the disk. Additionally, some kinds of data used by ZFS to manage the pool are stored multiple times by default for safety, even with the default copies=1 setting. If other copies of the damaged data exist or can be reconstructed from checksums and data, ZFS will use a copy of the data (or recreate it via a RAID recovery mechanism), and recalculate the checksum – ideally resulting in the reproduction of the originally expected value. If the data passes this integrity check, the system can then update all faulty copies with known-good data and redundancy will be restored.

RAID

ZFS and hardware RAID

If the disks are connected to a RAID controller, it is most efficient to configure it as an HBA in JBOD mode
(i.e. turn off its RAID function). If a hardware RAID card is used, ZFS always detects all data corruption but cannot always repair data corruption, because the hardware RAID card will interfere. Therefore, the recommendation is to not use a hardware RAID card, or to flash a hardware RAID card into JBOD/IT mode. For ZFS to be able to guarantee data integrity, it needs to either have access to a RAID set (so all data is copied to at least two disks), or, if one single disk is used, ZFS needs to enable redundancy (copies), which duplicates the data on the same logical drive. Using ZFS copies is a good feature to use on notebooks and desktop computers, since the disks are large and it at least provides some limited redundancy with just a single drive. There are several reasons why it is better to rely solely on ZFS by using several independent disks and RAID-Z or mirroring. When using hardware RAID, the controller usually adds controller-dependent data to the drives which prevents software RAID from accessing the user data.
While it is possible to read the data with a compatible hardware RAID controller, this inconveniences consumers, as a compatible controller usually is not readily available. Using the JBOD/RAID-Z combination, any disk controller can be used to resume operation after a controller failure. Note that hardware RAID configured as JBOD may still detach drives that do not respond in time (as has been seen with many energy-efficient consumer-grade hard drives), and as such, may require TLER/CCTL/ERC-enabled drives to prevent drive dropouts.

Software RAID using ZFS

ZFS offers software RAID through its RAID-Z and mirroring organization schemes. RAID-Z is a data/parity distribution scheme like RAID 5, but uses dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error.
RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read-modify-write sequence. As all stripes are of different sizes, RAID-Z reconstruction has to traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this. In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering 'self-healing data': when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor. RAID-Z does not require any special hardware: it does not need NVRAM for reliability, and it does not need write buffering for good performance.
With RAID-Z, ZFS provides fast, reliable storage using cheap, commodity disks. There are three different RAID-Z modes: RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), and RAID-Z3 (also referred to as RAID 7, allows three disks to fail). The need for RAID-Z3 arose recently because RAID configurations with future disks (say, 6–10 TB) may take a long time to repair, the worst case being weeks. During those weeks, the rest of the disks in the RAID are stressed more because of the additional intensive repair process and might subsequently fail, too. By using RAID-Z3, the risk involved with disk replacement is reduced. Mirroring, the other ZFS RAID option, is essentially the same as RAID 1, allowing any number of disks to be mirrored. Like RAID 1, it also allows faster read and resilver/rebuild speeds, since all drives can be used simultaneously and data does not have to be recalculated from parity, and mirrored vdevs can be split to create identical copies of the pool.
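For illustration, each line below shows an alternative layout for a new pool at one of the three RAID-Z parity levels, or as a mirror (the pool name 'tank' and the device names are hypothetical):

    zpool create tank raidz1 da0 da1 da2            # single parity: one disk may fail
    zpool create tank raidz2 da0 da1 da2 da3        # double parity: two disks may fail
    zpool create tank raidz3 da0 da1 da2 da3 da4    # triple parity: three disks may fail
    zpool create tank mirror da0 da1                # n-way mirror, comparable to RAID 1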
Resilvering and scrub

ZFS has no tool equivalent to fsck (the standard Unix and Linux data checking and repair tool for file systems). Instead, ZFS has a built-in 'scrub' function which regularly examines all data and repairs silent corruption and other problems.
Some differences are:
• fsck must be run on an offline filesystem, which means the filesystem must be unmounted and is not usable while being repaired, while scrub is designed to be used on a mounted, live filesystem, and does not need the ZFS filesystem to be taken offline.
• fsck usually only checks metadata (such as the journal log) but never checks the data itself. This means that, after an fsck, the data might still not match the original data as stored.
• fsck cannot always validate and repair data when checksums are stored with the data (often the case in many file systems), because the checksums may also be corrupted or unreadable.
ZFS always stores checksums separately from the data they verify, improving reliability and the ability of scrub to repair the volume. ZFS also stores multiple copies of data – metadata in particular may have upwards of 4 or 6 copies (multiple copies per disk and multiple disk mirrors per volume), greatly improving the ability of scrub to detect and repair extensive damage to the volume, compared to fsck.
• scrub checks everything, including metadata and the data.
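On an OpenZFS system a scrub is typically started and monitored as follows; it runs against the mounted, live pool (the pool name 'tank' is hypothetical):

    zpool scrub tank        # walk every block in the pool and verify it against its checksum
    zpool status -v tank    # shows scrub progress, repaired data and any errors found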
The effect can be observed by comparing fsck to scrub times: sometimes a fsck on a large RAID completes in a few minutes, which means only the metadata was checked. Traversing all metadata and data on a large RAID takes many hours, which is exactly what scrub does. The official recommendation from Sun/Oracle is to scrub enterprise-level disks once a month, and cheaper commodity disks once a week.

Capacity

ZFS is a 128-bit file system, so it can address 1.84 × 10^19 times more data than 64-bit systems such as Btrfs. The maximum limits of ZFS are designed to be so large that they should never be encountered in practice. For instance, fully populating a single zpool with 2^128 bits of data would require 10^24 3 TB hard disk drives. Some theoretical limits in ZFS are:
• 2^48: number of entries in any individual directory
• 16 exbibytes (2^64 bytes): maximum size of a single file
• 16 exbibytes: maximum size of any attribute
• 256 quadrillion zebibytes (2^128 bytes): maximum size of any zpool
• 2^56: number of attributes of a file (actually constrained to 2^48 for the number of files in a directory)
• 2^64: number of devices in any zpool
• 2^64: number of zpools in a system
• 2^64: number of file systems in a zpool

Encryption

With Oracle Solaris, the encryption capability in ZFS is embedded into the I/O pipeline.
During writes, a block may be compressed, encrypted, checksummed and then deduplicated, in that order. The policy for encryption is set at the dataset level when datasets (file systems or ZVOLs) are created. The wrapping keys provided by the user/administrator can be changed at any time without taking the file system offline. The default behaviour is for the wrapping key to be inherited by any child datasets. The data encryption keys are randomly generated at dataset creation time. Only descendant datasets (snapshots and clones) share data encryption keys. A command to switch to a new data encryption key for a clone, or at any time, is provided; this does not re-encrypt already existing data, instead utilising an encrypted master-key mechanism.
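A minimal sketch of dataset-level encryption, using current OpenZFS syntax (Oracle Solaris uses slightly different property names, such as keysource); the pool and dataset names are hypothetical:

    # Create an encrypted dataset; the passphrase acts as the wrapping key,
    # while the data encryption key is generated randomly at creation time.
    zfs create -o encryption=on -o keyformat=passphrase -o keylocation=prompt tank/secret

    # Change the wrapping key later, without taking the dataset offline
    zfs change-key tank/secret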
Other features

Storage devices, spares, and quotas

Pools can have hot spares to compensate for failing disks. When mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the case of the failure of an entire chassis.
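Adding a hot spare to an existing pool is a single command; the spare is pulled in automatically when a member disk faults (the pool and device names are hypothetical):

    zpool add tank spare da6    # register da6 as a hot spare for pool "tank"
    zpool status tank           # the spare appears in its own "spares" section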
Storage pool composition is not limited to similar devices, but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space to diverse filesystems as needed. Arbitrary storage device types can be added to existing pools to expand their size. The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.

Caching mechanisms: ARC (L1), L2ARC, transaction groups, SLOG (ZIL)

ZFS uses different layers of disk cache to speed up read and write operations. Ideally, all data should be stored in RAM, but that is usually too expensive.
Therefore, data is automatically cached in a hierarchy to optimize performance versus cost; these are often called 'hybrid storage pools'. Frequently accessed data will be stored in RAM, and less frequently accessed data can be stored on slower media, such as solid-state drives (SSDs). Data that is not often accessed is not cached and is left on the slow hard drives. If old data is suddenly read a lot, ZFS will automatically move it to SSDs or to RAM. ZFS caching mechanisms include one each for reads and writes, and in each case two levels of caching can exist, one in computer memory (RAM) and one on fast storage (usually SSDs), for a total of four caches.

First-level read cache (in RAM): known as the ARC, due to its use of a variant of the adaptive replacement cache (ARC) algorithm. RAM will always be used for caching, thus this level is always present.
The efficiency of the ARC means that disks will often not need to be accessed, provided the ARC size is sufficiently large. If RAM is too small there will hardly be any ARC at all; in this case, ZFS always needs to access the underlying disks which impacts performance considerably.
First-level write cache (in RAM): handled by means of 'transaction groups' – writes are collated over a short period (typically 5–30 seconds) up to a given limit, with each group ideally being written to disk while the next group is being collated. This allows writes to be organized more efficiently for the underlying disks, at the risk of minor data loss of the most recent transactions upon power interruption or hardware fault. In practice the power loss risk is avoided by ZFS write intent logging and by the SLOG/ZIL second-tier write cache pool (see below), so writes will only be lost if a write failure happens at the same time as a total loss of the second-tier SLOG pool, and then only when settings related to synchronous writing and SLOG use are set in a way that would allow such a situation to arise. If data is received faster than it can be written, data receipt is paused until the disks can catch up.

Second-level read cache (on fast storage devices, which can be added to or removed from a 'live' system without disruption in current versions of ZFS, although not always in older versions): known as the L2ARC ('Level 2 ARC'), optional. ZFS will cache as much data in the L2ARC as it can, which can be tens or hundreds of gigabytes in many cases. The L2ARC will also considerably speed up deduplication if the entire deduplication table can be cached in L2ARC.
It can take several hours to fully populate the L2ARC from empty (before ZFS has decided which data are 'hot' and should be cached). If the L2ARC device is lost, all reads will go out to the disks, which slows down performance, but nothing else will happen (no data will be lost).

Second-level write cache (on fast storage devices): known as the SLOG or ZIL ('ZFS Intent Log'), optional; a ZIL is created on the main storage devices if no separate log device is provided. This is the second-tier write cache, and it is often misunderstood. Strictly speaking, ZFS does not use the SLOG device to cache its disk writes. Rather, it uses the SLOG to ensure that writes are captured to a permanent storage medium as quickly as possible, so that in the event of power loss or write failure, no data which was acknowledged as written will be lost. The SLOG device allows ZFS to speedily store writes and quickly report them as written, even for storage devices such as HDDs that are much slower.
In the normal course of activity, the SLOG is never referred to or read, and it does not act as a cache; its purpose is to safeguard data during the few seconds taken for collation and 'writing out', in case the eventual write were to fail. If all goes well, then the storage pool will be updated at some point within the next 5 to 60 seconds, when the current transaction group is written out to disk (see above), at which point the saved writes on the SLOG will simply be ignored and overwritten. If the write eventually fails, or the system suffers a crash or fault preventing its writing, then ZFS can identify all the writes that it has confirmed were written, by reading back the SLOG (the only time it is read from), and use this to completely repair the data loss. This becomes crucial if a large number of synchronous writes take place (such as with ESXi, NFS, and some databases), where the client requires confirmation of successful writing before continuing its activity; the SLOG allows ZFS to confirm writing is successful much more quickly than if it had to write to the main store every time, without the risk involved in misleading the client as to the state of data storage. If there is no SLOG device then part of the main data pool will be used for the same purpose, although this is slower.
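Dedicated second-tier devices are attached to an existing pool rather than configured separately (pool and device names hypothetical); mirroring the log device guards against losing the most recent synchronous writes, as noted below:

    # Add a mirrored SLOG (separate intent log) on two small, fast SSDs
    zpool add tank log mirror nvme0n1 nvme1n1

    # Add an L2ARC read-cache device; it can later be removed without data loss
    zpool add tank cache nvme2n1
    zpool remove tank nvme2n1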
If the log device itself is lost, it is possible to lose the latest writes; therefore the log device should be mirrored. In earlier versions of ZFS, loss of the log device could result in loss of the entire zpool, although this is no longer the case. Therefore, one should upgrade ZFS if planning to use a separate log device.

Copy-on-write transactional model

ZFS uses a copy-on-write transactional object model. All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256) of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, and then any blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and a ZIL (intent log) write cache is used when synchronous write semantics are required.
The blocks are arranged in a tree, as are their checksums (see the Merkle tree discussion above).

Snapshots and clones
An advantage of copy-on-write is that, when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained.
ZFS snapshots are consistent (they reflect the entire data as it existed at a single point in time), and can be created extremely quickly, since all the data composing the snapshot is already stored, with the entire storage pool often snapshotted several times per hour. They are also space-efficient, since any unchanged data is shared among the file system and its snapshots. Snapshots are inherently read-only, ensuring they will not be modified after creation, although they should not be relied on as a sole means of backup. Entire snapshots can be restored, as can individual files and directories within snapshots. Writable snapshots ('clones') can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist. This is an implementation of the copy-on-write principle.
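A short sketch of the snapshot and clone workflow (the pool and dataset names are hypothetical):

    zfs snapshot tank/home@monday          # instant, read-only, space-efficient snapshot
    zfs list -t snapshot                   # snapshots consume space only as data diverges
    zfs rollback tank/home@monday          # roll the live file system back to that point
    zfs clone tank/home@monday tank/test   # writable clone sharing unchanged blocks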
Sending and receiving snapshots
ZFS file systems can be moved to other pools, including pools on remote hosts over the network, as the send command creates a stream representation of the file system's state. This stream can either describe the complete contents of the file system at a given snapshot, or it can be a delta between snapshots. Computing the delta stream is very efficient, and its size depends on the number of blocks changed between the snapshots. This provides an efficient strategy, e.g., for synchronizing offsite backups or high-availability mirrors of a pool.
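For example, an initial full send followed by a periodic incremental (delta) send to a remote host might look like this (the host, pool and dataset names are hypothetical):

    # Full stream of the first snapshot to a pool on a backup host
    zfs send tank/data@snap1 | ssh backuphost zfs receive backup/data

    # Later, send only the blocks that changed between snap1 and snap2
    zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive backup/data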
Dynamic striping

Dynamic striping across all devices to maximize throughput means that as additional devices are added to the zpool, the stripe width automatically expands to include them; thus, all disks in a pool are used, which balances the write load across them.

Variable block sizes

ZFS uses variable-sized blocks, with 128 KB as the default size. Available features allow the administrator to tune the maximum block size which is used, as certain workloads do not perform well with large blocks. If data compression is enabled, variable block sizes are used. If a block can be compressed to fit into a smaller block size, the smaller size is used on the disk to use less storage and improve IO throughput (though at the cost of increased CPU use for the compression and decompression operations).

Lightweight filesystem creation

In ZFS, filesystem manipulation within a storage pool is easier than volume manipulation within a traditional filesystem; the time and effort required to create or expand a ZFS filesystem is closer to that of making a new directory than it is to volume manipulation in some other systems.
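Both points can be seen from the command line: per-dataset properties such as recordsize and compression are simple property changes, and creating a new file system is a one-line operation (pool and dataset names hypothetical):

    zfs create tank/db               # a new file system, roughly as cheap as making a directory
    zfs set recordsize=16K tank/db   # smaller maximum block size suits database-style I/O
    zfs set compression=lz4 tank/db  # compressed blocks are stored in smaller on-disk sizes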
Adaptive endianness

Pools and their associated ZFS file systems can be moved between different platform architectures, including systems implementing different byte orders; metadata blocks are written in the native byte order of the system writing them, together with an endianness flag, and are byte-swapped in memory when read on a system with the opposite byte order.

MidnightBSD

MidnightBSD, a desktop operating system derived from FreeBSD, supports ZFS storage pool version 6 as of 0.3-RELEASE. This was derived from code included in FreeBSD 7.0-RELEASE.
An update to storage pool version 28 is in progress in 0.4-CURRENT, based on 9-STABLE sources (around FreeBSD 9.1-RELEASE code).

TrueOS

TrueOS (formerly known as PC-BSD) is a desktop-oriented distribution of FreeBSD which inherits its ZFS support.

FreeNAS

FreeNAS, an embedded open-source network-attached storage (NAS) distribution based on FreeBSD, has the same ZFS support as FreeBSD.
ZFS Guru

ZFS Guru is an embedded open-source network-attached storage (NAS) distribution based on FreeBSD.

pfSense and PC-BSD

pfSense, an open-source BSD-based router/firewall distribution, and PC-BSD, a BSD-based desktop, both support ZFS (pfSense in its upcoming 2.4 release).

NAS4Free

NAS4Free, an embedded open-source network-attached storage (NAS) distribution based on FreeBSD, has the same ZFS support as FreeBSD, ZFS storage pool version 5000. This project is a continuation of the FreeNAS 7 series project.
Debian GNU/kFreeBSD

Being based on the FreeBSD kernel, Debian GNU/kFreeBSD has ZFS support from the kernel. However, additional userland tools are required; it is possible to have ZFS as the root or /boot file system, in which case the required configuration is performed by the Debian installer since the Wheezy release. As of 31 January 2013, the zpool version available is 14 for the Squeeze release, and 28 for the Wheezy-9 release.
Linux

Although the ZFS filesystem supports Linux-based operating systems, difficulties arise for maintainers wishing to provide native support for ZFS in their products due to potential legal incompatibilities between the CDDL license used by the ZFS code and the GPL license used by the Linux kernel. To enable ZFS support within Linux, a loadable kernel module containing the CDDL-licensed ZFS code must be compiled and loaded into the kernel.
According to the Free Software Foundation, the wording of the GPL license legally prohibits redistribution of the resulting product as a derivative work, though this viewpoint has caused some controversy.

ZFS on FUSE

One potential workaround to the licensing incompatibility was trialed in 2006, with an experimental port of the ZFS code to Linux's FUSE system. The filesystem ran entirely in userspace instead of being integrated into the Linux kernel, and was therefore not considered a derivative work of the kernel. This approach was functional, but suffered from significant performance penalties when compared with integrating the filesystem as a native kernel module running in kernel space.
As of 2016, the ZFS on FUSE project appears to be defunct.

Native ZFS on Linux

A native port of ZFS for Linux produced by the Lawrence Livermore National Laboratory (LLNL) was released in March 2013, following these key events:
• 2008: prototype to determine viability
• 2009: initial ZVOL and Lustre support
• 2010: development moved to GitHub
• 2011: POSIX layer added
• 2011: community of early adopters
• 2012: production usage of ZFS
• 2013: stable release
As of August 2014, ZFS on Linux uses the pool version number 5000, which indicates that the features it supports are defined via feature flags. This pool version is an unchanging number that is expected to never conflict with version numbers given by Oracle.

KQ InfoTech

Another native port for Linux was developed by KQ InfoTech in 2010. This port used the zvol implementation from the Lawrence Livermore National Laboratory as a starting point. A release supporting zpool v28 was announced in January 2011. In April 2011, KQ InfoTech was acquired by sTec, Inc., and their work on ZFS ceased.
Source code of this port can be found on GitHub. The work of KQ InfoTech was ultimately integrated into LLNL's native port of ZFS for Linux.

Source code distribution

While the license incompatibility may arise with the distribution of compiled binaries containing ZFS code, it is generally agreed that distribution of the source code itself is not affected by this. In Gentoo, configuring a ZFS root filesystem is well documented, and the required packages can be installed from its package repository. Slackware also provides documentation on supporting ZFS, both as a kernel module and when built into the kernel.

Ubuntu integration

The question of the CDDL license's compatibility with the GPL license resurfaced in 2015, when the Ubuntu Linux distribution announced that it intended to make precompiled OpenZFS binary kernel modules available to end users directly from the distribution's official package repositories. In 2016, Ubuntu announced that a legal review resulted in the conclusion that providing support for ZFS via a binary kernel module was not in violation of the provisions of the GPL license.
Others followed Ubuntu's conclusion, while the FSF and SFC reiterated their opposing view. Ubuntu 16.04 LTS ('Xenial Xerus'), released on April 21, 2016, allows the user to install the OpenZFS binary packages directly from the Ubuntu software repositories. As of April 2017, no legal challenge has been brought against Canonical regarding the distribution of these packages.

Microsoft Windows

A port of open-source ZFS to Windows was attempted in 2010, but after a hiatus of over one year development ceased in 2012. In October 2017 a new port of OpenZFS was announced at the OpenZFS Developer Summit.
Commercial and open source products

• 2008: Sun shipped a line of ZFS-based 7000-series storage appliances.
• 2013: Oracle shipped the ZS3 series of ZFS-based filers and seized first place in the SPC-2 benchmark with one of them.
• 2013: iXsystems ships ZFS-based NAS devices called FreeNAS Mini for SOHO use and TrueNAS for the enterprise.
• 2014: Netgear ships a line of ZFS-based NAS devices called ReadyDATA, designed to be used in the enterprise.
• 2015: rsync.net announces a cloud storage platform that allows customers to provision their own zpool and import and export data using zfs send and zfs receive.

Detailed release history

With ZFS in Oracle Solaris: as new features are introduced, the version numbers of the pool and file system are incremented to designate the format and features available. Features that are available in specific file system versions require a specific pool version. Distributed development of OpenZFS involves feature flags and pool version 5000, an unchanging number that is expected to never conflict with version numbers given by Oracle.
Legacy version numbers still exist for pool versions 1–28, implied by the version 5000. Illumos uses pool version 5000 for this purpose. Future on-disk format changes are enabled/disabled independently via feature flags.
ZFS filesystem version number – release – significant changes:
• 1 – OpenSolaris Nevada build 36 – First release.
• 2 – OpenSolaris Nevada b69 – Enhanced directory entries. In particular, directory entries now store the object type (for example, file, directory, named pipe, and so on) in addition to the object number.
• 3 – OpenSolaris Nevada b77 – Support for sharing ZFS file systems over SMB.
Case insensitivity support. System attribute support.
Integrated anti-virus support.

macOS
The first indication of Apple Inc.'s interest in ZFS was an April 2006 post on the opensolaris.org zfs-discuss mailing list where an Apple employee mentioned being interested in porting ZFS to their operating system. In the release version of Mac OS X 10.5, ZFS was available in read-only mode from the command line, which lacked the ability to create zpools or write to them. Before the 10.5 release, Apple released the 'ZFS Beta Seed v1.1', which allowed read-write access and the creation of zpools; however, the installer for the 'ZFS Beta Seed v1.1' has been reported to only work on version 10.5.0, and has not been updated for version 10.5.1 and above.
In August 2007, Apple opened a ZFS project on their Mac OS Forge web site. On that site, Apple provided the source code and binaries of their port of ZFS which includes read-write access, but there was no installer available until a third-party developer created one. In October 2009, Apple announced a shutdown of the ZFS project on Mac OS Forge. That is to say that their own hosting and involvement in ZFS was summarily discontinued. No explanation was given, just the following statement: 'The ZFS project has been discontinued. The mailing list and repository will also be removed shortly.'
Apple would eventually release the legally required, CDDL-derived portion of the source code of their final public beta of ZFS, code-named '10a286'. Complete ZFS support was once advertised as a feature of Snow Leopard Server (Mac OS X Server 10.6). However, by the time the operating system was released, all references to this feature had been silently removed from its features page.
Apple has not commented regarding the omission. Apple's '10a286' source code release, and versions of the previously released source and binaries, have been preserved, and new development has been adopted by a group of enthusiasts. The MacZFS project acted quickly to mirror the public archives of Apple's project before the materials disappeared from the internet, and then to resume its development elsewhere. The MacZFS community has curated and matured the project, supporting ZFS for all Mac OS releases since 10.5. The project has an active mailing list. As of July 2012, MacZFS implements zpool version 8 and ZFS version 2, from the October 2008 release of Solaris. Additional historical information and commentary can be found on the MacZFS web site and FAQ.
The 17 September 2013 launch of OpenZFS included ZFS-OSX, which will become a new version of MacZFS, as the ZFS distribution for Darwin.