Compiling Lustre

 
<!-- == Building Lustre packages from source == -->

== Introduction ==
Lustre is an open source software project, developed by a community of software engineers across the world. The project is maintained by its developers and is supported by infrastructure that provides continual integration, build and test of patches that add new features, update existing functionality or fix bugs.

The Lustre project's maintainers issue periodic releases of the software, extensively and comprehensively tested and qualified for general use. The releases include pre-built binary software packages for supported Linux-based operating system distributions. Many users of Lustre are content to rely upon the binary builds, and in general, it is recommended that the binary distributions from the Lustre project are used. Pre-built binary packages are available for download here:
 
https://wiki.whamcloud.com/display/PUB/Lustre+Releases
 
There are, of course, times when it is advantageous to be able to compile the Lustre software directly from source code, e.g. to apply a hot-fix patch for a recently uncovered issue, to test a new feature in development, or to allow Lustre to take advantage of a 3rd party device driver (vendor-supplied network drivers, for example).
 
== Building Lustre - TLDR Guide ==
 
To build 'only' Lustre RPMs against the currently-installed kernel, the process is relatively simple. You need to have the ''EXACTLY IDENTICAL'' '''kernel-devel''' package installed for your currently-installed kernel, along with basic build tools such as '''rpmbuild''', '''gcc''' and '''autoconf'''; rpmbuild will complain about anything else that is missing.

<pre style="overflow-x:auto;">
$ git clone git://git.whamcloud.com/fs/lustre-release.git
$ cd lustre-release
$ sh ./autogen.sh
$ ./configure
$ make rpms
</pre>

For Ubuntu or Debian DPKG systems, use '''make debs''' instead of '''make rpms''' as the last step.  This will build the tools and client RPMs if at all possible, which is what most people need.  If configure can detect a kernel source that matches the installed kernel along with a matching '''ldiskfs''' patch series, or an installed '''zfs-devel''' package, it will also try to build the server, otherwise that will be skipped.
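Before starting a build, it can be worth confirming that the installed '''kernel-devel''' package really does match the running kernel. The helper below is a minimal sketch (the function name and version strings are hypothetical); in practice you would pass it the output of <code>uname -r</code> and <code>rpm -q kernel-devel</code>:

```shell
# Compare a running-kernel string against a kernel-devel package name.
# Hypothetical helper: feed it "$(uname -r)" and "$(rpm -q kernel-devel)".
check_kernel_match() {
  running="$1"
  pkg="${2#kernel-devel-}"    # strip the package-name prefix
  if [ "$running" = "$pkg" ]; then
    echo "match"
  else
    echo "mismatch: running=$running kernel-devel=$pkg"
  fi
}

check_kernel_match "3.10.0-957.el7.x86_64" "kernel-devel-3.10.0-957.el7.x86_64"   # match
```

If the two strings differ, install the matching '''kernel-devel''' package before attempting the build.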
 
The rest of this document goes into the details needed to optionally patch and build your own kernel (this is no longer required for ldiskfs and was never needed for ZFS), and handle less common scenarios like OFED or building against a kernel that does not match the installed kernel.
 
== Limitations ==
This documentation was originally developed to provide instructions for creating Lustre packages for use with Red Hat Enterprise Linux (RHEL) or CentOS.
 
Preliminary information on how to compile Lustre for SUSE Linux Enterprise Server (SLES) version 12 service pack 2 (SLES 12 SP2) has also been added. The documentation demonstrates the process for ZFS-based SLES servers, as well as for clients. The processes for compiling Lustre on SLES with OFED or LDISKFS support have not been reviewed.
Other operating system distributions will be added over time.
 
'''Note:''' SUSE Linux will mark self-compiled kernel modules as ''unsupported'' by the operating system. By default, SLES will refuse to load kernel modules that do not have the <code>supported</code> flag set. The following is an example of the error that will be returned when attempting to load an unsupported kernel module:
 
<pre style="overflow-x:auto;">
</pre>

To allow self-compiled kernel modules to be loaded in a SLES OS, add the following entry into <code>/etc/modprobe.d/10-unsupported-modules.conf</code>:
 
  
 
  allow_unsupported_modules 1
 
 
For more information, refer to the [https://www.suse.com/documentation/sles-12/book_sle_admin/data/sec_admsupport_kernel.html SUSE documentation].
  
== Planning ==
Since Lustre is a network-oriented file system that runs as modules in the Linux kernel, it has dependencies on other kernel modules, including device drivers. One of the most common tasks requiring a new build from source is to allow Lustre kernel modules to work with 3rd party device drivers not distributed by the operating system. For example, the Open Fabrics Enterprise Distribution (OFED) from the Open Fabrics Alliance (OFA) and OFA's partners provides drivers for InfiniBand and RoCE network fabrics, and is probably the single most common reason for recompiling Lustre.
 
  
There are several options available to users when creating Lustre packages from the source, each of which has an effect on the build process.
 
  
On the Lustre servers, one must choose the block storage file system used to store data. Lustre file system data is contained on block storage file systems distributed across a set of storage servers. The back-end block storage is abstracted by an API called the Object Storage Device, or OSD. The OSD enables Lustre to use different back-end file systems for persistence.

There is a choice between LDISKFS (based on EXT4) and ZFS OSDs, and Lustre must be compiled with support for at least one of these OSDs.

In addition, users must decide which drivers will be used by Lustre for networking. The Linux kernel has built-in support for Ethernet and InfiniBand, but systems vendors often supply their own device drivers and tools. Lustre's networking stack, LNet, needs to be able to link to these drivers, which requires re-compiling Lustre.

Before commencing, read through this document and decide the options that will be required for the Lustre build. The documentation will cover the following processes:

# Lustre with LDISKFS
# Lustre with ZFS
# Lustre with the following networking drivers:
## In-kernel drivers
## OpenFabrics Alliance (OFA) OFED
## Mellanox OFED
## Intel Fabrics

== Establish a Build Environment ==
Compiling the Lustre software requires a computer with a comprehensive installation of software development tools. It is recommended that a dedicated build machine, separate from the intended installation targets is established to manage the build process. The build machine can be a dedicated server or virtual machine and should be installed with a version of the operating system that closely matches the target installation.
 
  
The build machine should conform to the following minimum specification:

* Minimum 32GB storage to accommodate all source code and software development tools
* Minimum 2GB RAM (for VMs -- more is better, of course)
* Network interface with access to externally hosted software repositories
* Supported Linux operating system distribution. Refer to the Lustre source code ChangeLog for specific information on OS distributions known to work with Lustre
* Access to the relevant OS packages needed to compile the software described in this document. Typically, packages are made available via online repositories or mirrors.

This documentation was developed on a system with an Intel-compatible 64-bit (x86_64) processor architecture, since this represents the vast majority of deployed processor architectures running Lustre file systems today.

In addition to the normal requirements common to open source projects, namely compiler and development  library dependencies, Lustre has dependencies on other projects that may also need to be created from source. This means that at some stages in the process of creating Lustre packages, other packages will be compiled and then also installed on the build server. Lustre itself can normally be created entirely without superuser privileges, once the build server is set up with standard software development packages, but projects such as ZFS do not support this method of working.

Nevertheless, every effort has been made to reduce the requirement for super-user privileges during the build process. For RHEL and CentOS users, the process in this document also includes a description of how to make use of a project called Mock, which creates a chroot jail within which to create packages.

Details of the Mock project can be found on GitHub:
 
https://github.com/rpm-software-management/mock
 
Use of Mock is optional. It brings its own compromises into the build process, and is used in a somewhat unorthodox way, compared to its traditional usage.
=== Create a user for managing the Builds ===
 
For the most part, super-user privileges are not required to create packages, although the user will be required to install software development tools, and some of the 3rd party software distributions expect their packages to be installed on the build host as well. We recommend using a regular account with some additional privileges (e.g. granted via sudo) to allow installation of packages created during intermediate steps in the process.
 
  
=== RHEL and CentOS 7: Install the Software Development Tools ===
There are two options for managing the build environment for creating Lustre packages: use Mock to create an isolated <code>chroot</code> environment, or integrate directly with the build server's OS. Choose one or the other, based on your requirements. Each is described in the sections that follow.
 
  
==== Create a Mock Configuration for Lustre ====
Mock provides a simple way to isolate complex package builds without compromising the configuration of the host machine's operating platform. It is optional, but very useful, especially when experimenting with builds or working with multiple projects. The software is distributed with RHEL, CentOS and Fedora. Mock is normally used by developers to test RPM builds, starting from an SRPM package, but the environment can be used more generally as a development area.
 
  
To install Mock:
  
 
  sudo yum -y install mock
 
Optionally, install Git, so that repositories can be cloned outside of the Mock <code>chroot</code> (this will simplify maintenance of the <code>chroot</code> environment):
 
  sudo yum -y install git
 
 
Add any users that will be running Mock environments to the <code>mock</code> group:
 
 
<pre style="white-space: pre-wrap;">
sudo usermod -a -G mock[,wheel] <username>
</pre>
  
The <code>wheel</code> group is optional and will allow the user to run commands with elevated privileges via <code>sudo</code>. Apply with caution, as this can potentially weaken the security of the build host.
  
 
The following example creates the user <code>build</code>:
 
 
<pre style="white-space: pre-wrap;">
sudo useradd -m build
sudo usermod -a -G mock,wheel build
</pre>
  
When the software has been installed, create a configuration appropriate to the build target. Mock configurations are recorded in files in <code>/etc/mock</code>. The default configuration is called <code>default.cfg</code>, and is normally a soft link to one of the files in this directory. To use the system default configuration for RHEL or CentOS, run the following command:
  
 
  ln -snf /etc/mock/centos-7-x86_64.cfg /etc/mock/default.cfg
 
These configuration files describe the set of packages and repos the chroot environment will have available when it is instantiated, and will automatically populate the chroot by downloading and installing packages from YUM repositories. The configuration can be customised so that it is tailored to the requirements of the user. Refer to the <code>mock(1)</code> manual page for more information.
  
To create a new configuration specific to the requirements for compiling Lustre and also incorporating requirements for compiling ZFS and OFED, run the following commands (requires super-user privileges):
  
 
<pre style="overflow-x:auto;">
</pre>
  
This configuration ensures that each time a new Mock environment is created, all of the Lustre build dependencies are automatically downloaded and installed.
  
'''Note:''' Some of the build scripts and <code>Makefiles</code> used by Lustre and other projects assume that there will always be an architecture sub-directory (e.g. <code>x86_64</code>) in the RPM build directories. This is not always the case. In particular, Mock does not create sub-directories based on target architecture. To work around this problem, a custom RPM macro was added into the mock configuration above. If this does not work, then the same macro can be added by hand by running the following command after creating a Mock chroot environment:
  
 
  mock --shell "echo '%_rpmdir %{_topdir}/RPMS/%{_arch}' >>\$HOME/.rpmmacros"
 
For reference, an example of the build error for SPL manifests as follows:
  
 
<pre style="overflow-x:auto;">
</pre>
  
Once the Mock configuration has been created, login as the user that will be managing the builds and then run the following command to prepare the <code>chroot</code> environment:
  
 
  mock [-r <config>] --init
 
The <code>-r</code> flag specifies the configuration to use, if the default configuration is unsuitable. This is either the name of one of the files in the <code>/etc/mock</code> directory, minus the <code>.cfg</code> suffix, or the name of a file.
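For example, the configuration name passed to <code>-r</code> can be derived from a file in <code>/etc/mock</code> by stripping the <code>.cfg</code> suffix (the file name used here is hypothetical):

```shell
# Derive the mock -r argument from a configuration file path.
cfg=/etc/mock/lustre-c7-x86_64.cfg      # illustrative file name
name=$(basename "$cfg" .cfg)
echo "mock -r $name --init"
```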
  
To work interactively within the Mock environment, launch a shell:
  
 
  mock [-r <config>] --shell
 
'''Note:''' Some mock commands will attempt to clean the chroot directory before executing. This will remove any files considered temporary by Mock, which means anything that Mock itself has not provisioned. To avoid this situation, use the <code>-n</code> flag. The <code>--shell</code> command does not run a clean operation, so the <code>-n</code> flag is not required.
  
==== Development Software Installation for Normal Build Process ====
Skip this step if Mock is being used to create the Lustre packages.
 
  
Use the following command to install the prerequisite software tools on the build server:
  
 
<pre style="white-space: pre-wrap;">
</pre>
  
The packages in the above list are sufficient to build Lustre, ZFS and 3rd party drivers derived from OFED.
  
==== CentOS: Constraining YUM to Older OS Releases Using the Vault Repositories ====
  
It is the convention with RHEL and CentOS to always retrieve the latest updates for a given major OS release when installing or upgrading software. YUM's repository definitions purposely refer to the latest upstream repositories in order to minimise the risk of users downloading obsolete packages. However, this behaviour is not always desirable. Installation and upgrade policies for a given organisation may impose restrictions on the operating platform, which may extend to mandating specific package versions for applications, including the kernel. A site may have frozen the operating system version to a specific revision or range of updates, or may have restrictions imposed upon it by the application software running on its infrastructure.
  
This, in turn, affects the environment for building packages, including Lustre. If the run-time environment is bound to a specific OS release, so must the build environment be similarly restricted.
  
To facilitate this restriction in CentOS, one can leverage the CentOS Vault repository (http://vault.centos.org), which maintains an online archive of every package and update released for every version of CentOS. Every CentOS installation includes a package called <code>centos-release</code> used to track the OS version and provide the YUM repository definitions. The package includes a definition for the Vault repositories available for versions of CentOS prior to the version currently installed. For example, the <code>centos-release</code> package for CentOS 6.9 will include Vault repository definitions for CentOS 6.0 - 6.8.
  
This can be exploited to help constrain the build server environment such that it matches the intended target environment. The simplest way to do this is to download the latest <code>centos-release</code> rpm, extract the CentOS Vault repository definition and overwrite the original Vault definition on the platform. Once in place, disable the default repositories in YUM, and enable only the Vault repositories for the target OS version. For example:
  
 
<pre style="white-space: pre-wrap;">
</pre>
  
'''Note:''' The <code>centos-release</code> package is not itself updated, as this can cause applications and software build processes that depend on correctly identifying the OS version to fail. The purpose of the above approach is to update YUM only, but otherwise maintain the OS version and release of the build environment.
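For illustration, a Vault repository definition in a YUM <code>.repo</code> file takes roughly the following form; the <code>7.4.1708</code> release string is an example only, and should be replaced by the release your site has frozen on:

```ini
# Illustrative Vault repository stanza (version string is an example)
[C7.4.1708-base]
name=CentOS-7.4.1708 - Base
baseurl=http://vault.centos.org/7.4.1708/os/$basearch/
gpgcheck=1
enabled=1
```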
  
=== SLES 12: Install the Software Development Tools ===
SUSE Linux Enterprise Server (SLES), like Red Hat Enterprise Linux, uses an RPM-based package management system, although there are some significant differences between the two platforms. In addition to the main subscription, the SUSE Linux Enterprise SDK (<code>SLE-SDK12-SP2</code>) add-on must also be enabled in order to be able to install the developer (i.e. <code>-devel</code>) packages.
 
  
Use the following command to install the prerequisite software on an SLES 12 SP2 build server:
  
 
<pre style="white-space: pre-wrap;">
</pre>
  
In some circumstances, <code>zypper</code> may flag a dependency issue with <code>rpm-build</code>. For example:
  
 
<pre style="white-space: pre-wrap;">
</pre>
  
If this occurs, select <code>Solution 2: deinstallation of gettext-runtime-mini-<version></code> to resolve the conflict.
  
== Obtain the Lustre Source Code ==
  
The following information applies to the Lustre community releases. To acquire the source code for other distributions of Lustre, such as Intel Enterprise Edition for Lustre, please refer to the vendor's documentation.
  
The Lustre source code is maintained in a Git repository. To obtain a clone, run the following command:
  
 
<pre style="white-space: pre-wrap;">
git clone git://git.whamcloud.com/fs/lustre-release.git
</pre>
  
When the repository has been cloned, change into the clone directory and review the branches:
  
 
<pre style="white-space: pre-wrap;">
cd lustre-release
git branch -av
</pre>
  
For example:
  
 
<pre style="overflow-x:auto;">
</pre>
  
The master branch is the main development branch and will form the basis of the next feature release of Lustre. Branches that begin with the letter "b" represent the current and previous Lustre release branches, along with the release version number. Thus, <code>b2_10</code> is the Lustre 2.10.0 branch. Other branches are used for long-running development projects such as Progressive File Layouts (PFL) and LNet Multi-rail.
 
  
You can review the tags as follows:
  
 
  git tag
 
There are many more tags than there are branches. Each tag represents an inflection point in development. Lustre version numbers have four fields read from left to right to indicate major, minor, maintenance and hot fix version numbers respectively. For example, version <code>2.10.0.0</code> is interpreted as follows:

* major feature release number
* minor feature release number
* maintenance release number
* hot fix release number

A maintenance release version number of 0 (zero) indicates that the version is complete and is ready for general use (also referred to as generally available, or GA), and maintenance versions <=10 represent maintenance releases (bug fixes or minor operating system support updates). Tags with a maintenance version greater than 50 are pre-release development tags and should not be considered for general use.
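The numbering rules above can be sketched as a small shell helper. The function name is hypothetical, and the range between 11 and 50, which the release numbering does not define, is treated as a maintenance release purely for simplicity:

```shell
# Classify a Lustre version string by its maintenance-release field
# (the third dot-separated field).
classify_lustre_version() {
  maint=$(echo "$1" | cut -d. -f3)
  maint=${maint:-0}
  if [ "$maint" -gt 50 ]; then
    echo "pre-release development build"
  elif [ "$maint" -eq 0 ]; then
    echo "generally available (GA)"
  else
    echo "maintenance release"
  fi
}

classify_lustre_version 2.10.0.0   # generally available (GA)
classify_lustre_version 2.9.59     # pre-release development build
```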
  
The tag labels in the <code>lustre-release</code> repository have two different formats:

* A dot-separated numerical version number (e.g. 2.10.0)
* A label beginning with lower-case "v" followed by the version number, separated by underscores (e.g. <code>v2_10_0_0</code>)
 
  
The different tag formats for a given version number are equivalent and refer to the same point in the git repository history. That is, tags <code>v2_10_0</code> and <code>2.10.0</code> refer to the same commit.
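The equivalence between the two tag formats amounts to a simple character substitution, as the following sketch shows (the helper names are hypothetical):

```shell
# Convert between the dotted and underscored Lustre tag formats.
to_v_tag()  { echo "v$(echo "$1" | tr '.' '_')"; }
to_dotted() { echo "${1#v}" | tr '_' '.'; }

to_v_tag 2.10.0        # v2_10_0
to_dotted v2_10_0_0    # 2.10.0.0
```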
  
For example, the following tags represent the generally available release of Lustre version 2.10.0:
  
 
<pre style="overflow-x:auto;">
</pre>
  
The next list of tags all point to the same pre-release development build, with maintenance release numbers of 50 or higher:
  
 
<pre style="overflow-x:auto;">
</pre>
  
Tags ending with the letters "RC" are release candidates: these are pre-production builds made for testing in anticipation of a final generally available (GA) release. If a release candidate is considered to be stable enough for general use, it is promoted to GA. There may be one or several RC builds before GA is declared for a given version of Lustre.
 
  
Use Git to checkout the Lustre release version that will be built. For example, to checkout Lustre version 2.10.0:
  
 
  git checkout 2.10.0
 
or
  
 
  git checkout b2_10
 
Prepare the build:
  
 
  sh autogen.sh
 
Lustre source code is also available in package format, distributed alongside the binaries for a release. The latest software releases are available from the following URL:
  
 
https://wiki.whamcloud.com/display/PUB/Lustre+Releases
 
This page has links to all of the releases. For example, the source RPM for the latest Lustre release on RHEL or CentOS 7 can be downloaded here:
  
 
https://downloads.whamcloud.com/public/lustre/latest-release/el7/server/SRPMS/
 
'''Note:''' the examples used in the remainder of the documentation are based on a version of Lustre version 2.10.0 but the process applies equally to all recent Lustre releases.
  
== LDISKFS and Patching the Linux Kernel ==
  
=== Introduction ===
If the Lustre servers will be using the LDISKFS object storage device (OSD) target, which is itself derived from EXT4, there are two options available to users when compiling Lustre and the LDISKFS kernel module. Simply put, one can either patch the Linux kernel or, with Lustre 2.10 and newer, there is the option of using the Linux kernel unmodified (also referred to as a "patchless" kernel). Patchless kernel support for the LDISKFS OSD is a new development in 2017 and is still considered to be experimental. Nevertheless, it is worth considering as an option, as it simplifies maintenance and support considerably. Choosing to run a patchless server means being able to take advantage of the KABI compatibility feature in RHEL and CentOS, and the weak-updates kernel module support.
 
  
Historically, the Linux kernel used by Lustre servers has always required the application of additional patches that are not carried by the upstream kernel or the OS distribution vendor. The Lustre developer community has always worked to reduce the dependency on these patches and today the delta is small enough, at least for RHEL and CentOS servers, that users can evaluate running LDISKFS Lustre servers without any baseline kernel patches.
  
There are some caveats:
  
* Project quota support, new in Lustre 2.10, requires a set of patches that have not yet been accepted into mainstream Linux distributions. If project quota is a requirement, then the kernel must be patched.
* Running without patches may have a negative impact on performance, although this is thought to be a small risk.
 
* Test coverage for Lustre servers running without kernel patches for LDISKFS servers is lower.
 
  
The choice is not, therefore, entirely clear-cut: patched kernels deviate from the package provided by the operating system, and have maintenance overheads that must be taken into account, but they have the broadest test coverage and a long legacy. In addition, some functionality is currently only available to patched kernels. The patchless kernel option is new, and therefore carries some risk as an unknown quantity, relatively speaking, but offers a simpler path for maintenance.
  
'''Note:''' Running "patchless" does not mean that Lustre OSDs are EXT4 devices. The OSDs will still be LDISKFS, which is a modified derivative of EXT4. Irrespective of whether or not the Kernel packages are patched, Lustre still needs access to the kernel source code in order to create the LDISKFS kernel module.
  
'''Note:''' Lustre does not require a patched kernel if the ZFS OSD is used. Lustre installations that use ZFS exclusively do not require a customised kernel.
  
'''Note:''' Lustre clients do not require a patched kernel.
  
To create a patched kernel, read through the rest of the section and follow the instructions. Otherwise, this section can be skipped.
  
=== Applying the Lustre Kernel Patches ===
  
The rest of this section describes the process of modifying the operating system kernel with the patches provided in the Lustre distribution. Using the "patchless" kernel for LDISKFS Lustre servers will be covered in the section on [[#Create_the_Lustre_Packages|creating the Lustre packages]].
  
For the most part, these patches provide performance enhancements or additional hooks useful for testing. In addition, project quota support requires a set of patches that must be applied to the kernel. If project quota support is required, then these patches are essential.
  
The Lustre community continues to work to reduce the dependency on maintaining LDISKFS patches and it is hoped that at some point in the future, they will be entirely unnecessary.
  
=== Obtain the Kernel Source Code ===
  
If the target build will be based on LDISKFS storage targets, download the kernel sources appropriate to the OS distribution. Refer to the changelog in the Lustre source code for the list of kernels for each OS distribution that are known to work with Lustre. The changelog maintains a historical record for all Lustre releases.
  
The following excerpt shows the kernel support for Lustre version 2.10.0:
  
 
<pre style="overflow-x:auto;">
</pre>
  
In the above list, Lustre version 2.10.0 supports version <code>3.10.0-514.16.1.el7</code> of the RHEL / CentOS 7.3 kernel. Use YUM to download a copy of the source RPM. For example:
  
 
<pre style="overflow-x:auto;">
</pre>
  
The following shell script fragment can be used to identify the kernel version for a given operating system and Lustre version, and then use that to download the kernel source:
  
 
<pre style="overflow-x:auto;">
</pre>
  
Set the <code>os</code> and <code>lu</code> variables at the beginning of the script to the required operating system release and Lustre version respectively.
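As a rough illustration of the lookup logic, the sketch below extracts the kernel version for an OS release from a ChangeLog-style entry. This is a hypothetical sketch only: the line format and the <code>yumdownloader</code> invocation shown in the comments are assumptions, so verify them against the real <code>lustre-release/ChangeLog</code>.

```shell
# Hypothetical sketch: map an OS release to its supported kernel version
# using a ChangeLog-style entry. The line format is an assumption.
os="RHEL7.3"
lu="2.10.0"

# Stand-in for: grep "($os)" lustre-release/ChangeLog | head -n 1
changelog_line="* 3.10.0-514.16.1.el7 (RHEL7.3)"

# The kernel version is the second whitespace-separated field.
kernel=$(echo "$changelog_line" | awk '{print $2}')
echo "kernel for Lustre $lu on $os: $kernel"

# The source RPM could then be fetched with, for example:
#   yumdownloader --source "kernel-$kernel"
```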
  
If Mock is being used to build Lustre, you can download the source RPM from outside the mock shell and then copy it in as follows:
  
 
<pre style="overflow-x:auto;">
</pre>
  
For example:
  
 
<pre style="overflow-x:auto;">
</pre>
  
An alternative solution for Mock is to enable the CentOS-Source repository configuration, then run the <code>yumdownloader</code> command directly from the Mock shell. A simple, but crude way to add the source repositories into Mock's YUM configuration is to run the following from the Mock shell:
  
 
  cat /etc/yum.repos.d/CentOS-Sources.repo >> /etc/yum/yum.conf
 
  
However, this will be overwritten on the next invocation of the mock shell. One can permanently update the configuration by appending the CentOS source repositories to the appropriate configuration file in the <code>/etc/mock</code> directory on the build host, and this is what was done when preparing the Mock configuration earlier.
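For example, a source repository definition can be appended after the existing <code>config_opts['yum.conf']</code> assignment in the relevant Mock chroot configuration (e.g. <code>/etc/mock/centos-7-x86_64.cfg</code>). The repository name and baseurl below are assumptions for illustration; copy the real values from <code>/etc/yum.repos.d/CentOS-Sources.repo</code>.

```
config_opts['yum.conf'] += """
[base-sources]
name=CentOS-$releasever - Base Sources
baseurl=http://vault.centos.org/centos/$releasever/os/Source/
gpgcheck=1
enabled=1
"""
```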
  
If it is necessary to create a build for an older kernel version, the corresponding packages might not be available in the active YUM repository for the distribution. CentOS maintains an archive of all previous releases in a set of YUM repositories called Vault. The CentOS Vault is located at:
  
 
http://vault.centos.org
 
  
The Vault includes source RPMs, as well as binaries. Unfortunately, CentOS does not include YUM configuration descriptions for the archived source repositories. Instead of using YUM, go to the Vault site directly and navigate through the directory structure to get the required files. For example, the source RPMs for the CentOS 7.2 package updates can be found here:
  
 
http://vault.centos.org/7.2.1511/updates/Source/
 
  
=== Prepare the Kernel Source ===
  
 
Install the kernel source RPM that was downloaded in the previous step. This will create a standard RPM build directory structure and extract the contents of the source RPM:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
Determine the set of patches that need to be applied to the kernel, based on the operating system distribution. The file <code>lustre-release/lustre/kernel_patches/which_patch</code> maps the kernel version to the appropriate patch series. For example, for RHEL / CentOS 7.3 on Lustre 2.10.0, the file contains:
 
  
 
  3.10-rhel7.series      3.10.0-514.16.1.el7 (RHEL 7.3)
 
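The mapping can also be read programmatically. The sketch below assumes the two-column format shown above (series name, then kernel version); the <code>grep</code> command in the comment is the real lookup it stands in for.

```shell
# Hypothetical sketch: look up the patch series for a kernel version in a
# which_patch-style two-column mapping (series name, then kernel version).
kernel="3.10.0-514.16.1.el7"

# Stand-in for: grep "$kernel" lustre-release/lustre/kernel_patches/which_patch
line="3.10-rhel7.series      3.10.0-514.16.1.el7 (RHEL 7.3)"

# The series file name is the first whitespace-separated field.
series=$(echo "$line" | awk '{print $1}')
echo "$series"
```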
  
 
Review the list of patches in the series, e.g.:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
'''Note:''' One of the new features introduced in Lustre 2.10 is support for project quotas. This is a powerful administrative feature that allows additional quotas to be applied to a file system based on a new identifier, called the project ID. Implementing project quotas for LDISKFS means changing the EXT4 code in the kernel. Unfortunately, this particular change breaks the kernel ABI (KABI) compatibility guarantee that is a feature of RHEL kernels. If this is a problem, remove the patch named <code>vfs-project-quotas-rhel7.patch</code> from the patch series file. Doing so effectively disables project quota support in the LDISKFS version of Lustre.

When the correct patch series has been identified, create a patch file containing all of the kernel patches required by Lustre's LDISKFS OSD:
  
 
<pre style="overflow-x:auto;">
</pre>
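The concatenation step can be sketched as follows. This is a hypothetical demo run against a throwaway directory rather than a real checkout; in a real build the series file would be e.g. <code>lustre/kernel_patches/series/3.10-rhel7.series</code> and the patches would live in <code>lustre/kernel_patches/patches/</code>.

```shell
# Hypothetical demo of the technique: concatenate every patch named in a
# series file, in order, into a single patch for the kernel build.
# A throwaway layout stands in for the lustre/kernel_patches tree.
workdir=$(mktemp -d)
mkdir -p "$workdir/patches"
printf 'first.patch\nsecond.patch\n' > "$workdir/series"
printf -- '--- first\n' > "$workdir/patches/first.patch"
printf -- '--- second\n' > "$workdir/patches/second.patch"

# Read the series file line by line and append each named patch.
while read -r p; do
  cat "$workdir/patches/$p"
done < "$workdir/series" > "$workdir/patch-lustre.patch"

wc -l < "$workdir/patch-lustre.patch"
```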
  
 
Apply the following changes to the Kernel RPM spec file:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
These changes ensure that the patches required by Lustre for the LDISKFS OSD are applied to the kernel during compilation.

The following changes to the kernel configuration specification are also strongly recommended:
 
<pre style="white-space: pre-wrap;">
</pre>
  
 
To apply these changes, run the following commands from the command shell:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
Alternatively, there is a <code>kernel.config</code> file distributed with the Lustre source code that can be used in place of the standard file distributed with the kernel. If using a file from the Lustre source, make sure that the first line of the file is as follows:
 
  
 
  # x86_64
 
  
 
The following script demonstrates the method for a RHEL / CentOS 7.3 kernel configuration:
 
  
 
<pre style="overflow-x:auto;">
</pre>
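A minimal sketch of that step, using throwaway paths: ensure the file begins with the architecture marker, then copy it into place. The <code>kernel-3.10.0-x86_64.config</code> target name is an assumption about what the kernel spec file expects; confirm it against the spec file before use.

```shell
# Hypothetical sketch with throwaway paths: guarantee the "# x86_64" marker
# on the first line, then stage the file under a SOURCES-style directory.
src=$(mktemp)
sources=$(mktemp -d)
printf 'CONFIG_EXT4_FS=m\n' > "$src"   # stand-in for the Lustre-supplied kernel.config

# Prepend the marker line if it is missing.
if ! head -n 1 "$src" | grep -q '^# x86_64$'; then
  printf '# x86_64\n' | cat - "$src" > "$src.tmp" && mv "$src.tmp" "$src"
fi

cp "$src" "$sources/kernel-3.10.0-x86_64.config"
head -n 1 "$sources/kernel-3.10.0-x86_64.config"
```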
  
 
=== Create the kernel RPM packages ===
 
 
Use the following command to build the patched Linux kernel:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
'''Note:''' the "<code>--with baseonly</code>" flag means that only the essential kernel packages will be created and the "<code>debug</code>" and "<code>kdump</code>" options will be excluded from the build. If the project quotas patch is used, the KABI verification must also be disabled using the "<code>--without kabichk</code>" flag.
 
  
 
=== Save the Kernel RPMs ===
 
  
 
Copy the resulting kernel RPM packages into a directory tree for later distribution:
 
  
 
<pre style="overflow-x:auto;">
</pre>
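The staging step can be sketched with throwaway paths. The directory layout below is an example, not a prescribed structure, and the temporary directories stand in for <code>~/rpmbuild/RPMS/x86_64</code> and the distribution area.

```shell
# Hypothetical sketch: stage freshly built kernel RPMs in a per-release
# directory tree for later distribution. Empty files stand in for RPMs.
rpms=$(mktemp -d)
repo=$(mktemp -d)
touch "$rpms/kernel-3.10.0-514.16.1.el7_lustre.x86_64.rpm" \
      "$rpms/kernel-devel-3.10.0-514.16.1.el7_lustre.x86_64.rpm"

dest="$repo/el7/kernel-3.10.0-514.16.1.el7_lustre"
mkdir -p "$dest"
cp "$rpms"/kernel-*.rpm "$dest"
ls "$dest" | wc -l
```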
  
 
Lustre servers that will be using ZFS-based storage targets require packages from the ZFS on Linux project (http://zfsonlinux.org). The Linux port of ZFS is developed in cooperation with the OpenZFS project and is a versatile and powerful alternative to EXT4 as a file system target for Lustre OSDs. The source code is hosted on GitHub:
 
  
 
https://github.com/zfsonlinux
 
  
 
Pre-compiled packages maintained by the ZFS on Linux project are available for download. For instructions on how to incorporate the ZFS on Linux binary distribution into one of the supported operating systems, refer to the "Getting Started" documentation:
 
  
 
https://github.com/zfsonlinux/zfs/wiki/Getting-Started
 
  
 
The remainder of this section describes how to create the ZFS packages from source.

When compiling packages from the source code, there are three options for creating ZFS on Linux packages:
  
# DKMS: packages are distributed as source code and compiled on the target against the installed kernel(s). When an updated kernel is installed, DKMS-compatible modules will be recompiled to work with the new kernel. The module rebuild is usually triggered automatically on system reboot, but can also be invoked directly from the command line.
# KMOD: kernel modules built for a specific kernel version and bundled into a binary package. These modules are not portable between kernel versions, so a change in kernel version requires that the kernel modules are recompiled and re-installed.
 
# KMOD with kernel application binary interface (KABI) compatibility, sometimes referred to as "weak-updates" support. KABI-compliant kernel modules exploit a feature available in certain operating system distributions, such as RHEL, that ensure ABI compatibility across kernel updates in the same family of releases. If a minor kernel update is installed, the KABI guarantee means that modules that were compiled against the older variant can be loaded unmodified by the new kernel without requiring re-compilation from source.
 
  
The process for compiling ZFS and SPL is thoroughly documented on the ZFSonLinux GitHub site, but will be summarised here, as compiling ZFS has an implication on the Lustre build process. Each approach has its benefits and drawbacks.
  
DKMS provides a straightforward packaging system and attempts to accommodate changes in the operating system by automatically rebuilding kernel modules, reducing manual overhead when updating OS kernels. DKMS packages are also generally easy to create and distribute.
  
The KMOD packages take more work to create, but are easier to install. However, when the kernel is updated, the modules may need to be recompiled. KABI-compliant kernel modules reduce this risk by providing ABI compatibility across minor updates, but only work for some distributions (currently RHEL and CentOS).
  
The premise of DKMS is simple: each time the OS kernel of a host is updated, DKMS will rebuild any out of tree kernel modules so that they can be loaded by the new kernel. This can be managed automatically on the next system boot, or can be triggered on demand. This does mean that the run-time environment of Lustre servers running ZFS DKMS modules is quite large, as it needs to include a compiler and other development libraries, but it also means that creating the packages for distribution is quick and simple.
  
Unfortunately, even the simple approach has its idiosyncrasies. You cannot build the DKMS packages for distribution without also building at least the SPL development packages, since the ZFS build depends on SPL, and the source code is simply not sufficient by itself.
  
There is also a cost associated with recompiling kernel modules from source that needs to be planned for. In order to be able to recompile the modules, DKMS packages require a full software development toolkit and dependencies to be installed on all servers. This does represent a significant overhead for servers, and is usually seen as undesirable for production environments, where there is often an emphasis placed on minimising the software footprint in order to streamline deployment and maintenance, and reduce the security attack surface.  
  
Rebuilding packages also takes time, which will lengthen maintenance windows. And there is always some risk that rebuilding the modules will fail for a given kernel release, although this is rare. DKMS lowers the up-front distribution overhead, but moves some of the cost of maintenance directly onto the servers and the support organisations maintaining the data centre infrastructure.
  
When choosing DKMS, it is not only the ZFS and SPL modules that need to be recompiled, but also the Lustre modules. To support this, Lustre can also be distributed as a DKMS package.
  
 
'''Note:''' The DKMS method was in part adopted in order to work-around licensing compatibility issues between the Linux Kernel project, licensed under GPL, and ZFS which is licensed under CDDL, with respect to the distribution of binaries. While both licenses are free open source licenses, there are restrictions on distribution of binaries created using a combination of software source code from projects with these different licenses. There is no restriction on the separate distribution of source code, however. The DKMS modules provide a convenient workaround that simplifies packaging and distribution of the ZFS source with Lustre and Linux kernels. There are differences of opinion in the open source community regarding packaging and distribution, and currently no consensus has been reached.
 
  
 
The vanilla KMOD build process is straightforward to execute and will generally work for any supported Linux distribution. The KABI variant of the KMOD build is very similar, with the restriction that it is only useful for distributions that support KABI compatibility. The KABI build also has some hard-coded directory paths in the supplied RPM spec files, which has effectively mandated a dedicated build environment for creating packages.
 
  
 
=== Obtain the ZFS Source Code ===
 
  
 
If the target build will be based on ZFS, then acquire the ZFS software sources from the ZFS on Linux project. ZFS comprises two projects:
  
 
* SPL: Solaris portability layer. This is a shim that presents ZFS with a consistent interface and allows OpenZFS to be ported to multiple operating systems.
 
* ZFS: The OpenZFS file system implementation for Linux.
 
  
 
Clone the SPL and ZFS repositories as follows:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
When the repositories have been cloned, change into the clone directory of each project and review the branches:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
For example:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
The master branch in each project is the main development branch and will form the basis of the next release of SPL and ZFS, respectively.
  
 
Review the tags as follows:
 
  
 
  git tag
 
  
 
Just like the Lustre project, there are many more tags than there are branches, although the naming convention is simpler. Tags have the format <code><name>-<version></code>. The following output lists some of the tags in the spl repository:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
Tags with an <code>rc#</code> suffix are release candidates.
 
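To illustrate, final release tags can be separated from release candidates with a simple filter. The tag names below are examples only.

```shell
# Hypothetical sketch: filter out "-rc#" release-candidate tags from a tag
# listing, leaving only final release tags.
tags='spl-0.6.5.9
spl-0.7.0-rc3
spl-0.7.0-rc4'

# Drop any tag ending in "-rc" followed by digits.
releases=$(printf '%s\n' "$tags" | grep -v -- '-rc[0-9][0-9]*$')
echo "$releases"
```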
  
 
Use Git to checkout the release version of SPL and ZFS that will be built and then run the <code>autogen.sh</code> script to prepare the build environment. For example, to checkout SPL version 0.6.5.9:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
To check out SPL version 0.7.0-rc4:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
Do the same for ZFS, for example:
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
For ZFS 0.7.0-rc4:
 
  
 
<pre style="overflow-x:auto;">
</pre>
  
 
Make sure that the SPL and ZFS versions match for each respective checkout.
 
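A quick sanity check along these lines can catch a mismatch before the build starts. The tag values below are examples; in a real checkout they might come from <code>git describe --tags</code> run in each repository.

```shell
# Hypothetical sketch: compare the version parts of the checked-out SPL and
# ZFS tags before building.
spl_tag="spl-0.7.0-rc4"   # stand-in for: git describe --tags (in spl/)
zfs_tag="zfs-0.7.0-rc4"   # stand-in for: git describe --tags (in zfs/)

# Strip the project-name prefix, leaving just the version.
spl_ver=${spl_tag#spl-}
zfs_ver=${zfs_tag#zfs-}

if [ "$spl_ver" = "$zfs_ver" ]; then
  echo "versions match: $spl_ver"
else
  echo "version mismatch: SPL=$spl_ver ZFS=$zfs_ver" >&2
fi
```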
  
 
The ZFS on Linux source code is also available in the package format distributed alongside the binaries for a release. The latest software releases are available from the following URL:
 
  
 
https://github.com/zfsonlinux/
 
Links are also available on the main ZFS on Linux site:
  
 
http://zfsonlinux.org/
 
'''Note:''' the examples used in the remainder of the documentation are based on a release candidate of ZFS version 0.7.0, but the process applies equally to all recent releases.
  
 
=== Install the Kernel Development Package ===
  
 
The SPL and ZFS projects comprise kernel modules as well as user-space applications. To compile the kernel modules, install the kernel development packages relevant to the target OS distribution. These must match the kernel version being used to create the Lustre packages. Review the ChangeLog file in the Lustre source code to identify the appropriate kernel version.

The following excerpt shows that Lustre version 2.10.0 supports version <code>3.10.0-514.16.1.el7</code> of the RHEL / CentOS 7.3 kernel, and version <code>4.4.49-92.14</code> of the SLES 12 SP2 kernel (output has been truncated):

<pre style="overflow-x:auto;">
       * Server known to build on patched kernels:
         ...
         3.10.0-514.16.1.el7 (RHEL7.3)
         ...
         4.4.49-92.14        (SLES12 SP2)
         ...
</pre>
  
 
'''Note:''' it is also possible to compile the SPL and ZFS packages against the LDISKFS patched kernel development tree, in which case, substitute the kernel development packages from the OS distribution with those created with the LDISKFS patches.
  
 
==== RHEL and CentOS ====
 
For RHEL / CentOS systems, use YUM to install the <code>kernel-devel</code> RPM.
  
 
=== Lustre Server (DKMS Packages only) ===
 
The process for creating a Lustre server DKMS package is straightforward.
  
 
  
 
If the objective is to create a set of DKMS server packages for use with ZFS, then there is no further work required. See also the section on creating DKMS packages for Lustre clients, if required.
  
=== Lustre Server (All other builds) ===
Compiling the Lustre server packages requires the development packages for the Linux kernel and, optionally, for SPL, ZFS and OFED. The packages used in the following examples were taken from the builds created in the earlier stages of this process.
  
==== Patched LDISKFS Server Builds ====
  
For Lustre LDISKFS patched kernels (including the optional project quota patches), install the kernel development package or packages with the patches compiled in.
  
 
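The command block was elided from this revision view. Assuming the patched kernel packages were saved under <code>$HOME/releases/lustre-kernel</code> as described elsewhere in this guide, the installation likely looks like the following sketch (the path and glob are assumptions):

```shell
# Install the previously built, LDISKFS-patched kernel development package
sudo yum localinstall $HOME/releases/lustre-kernel/kernel-devel-*.rpm
```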
  
==== ZFS and/or Patchless LDISKFS Server Builds ====
  
For "patchless" kernels, install the <code>kernel-devel</code> package that matches the supported kernel for the version of Lustre being compiled. Refer to the Lustre ChangeLog in the source code distribution (<code>lustre-release/lustre/ChangeLog</code>) for the list of kernels that are known to work with Lustre. The <code>ChangeLog</code> file contains a historical record of all Lustre releases.
+
对于“无补丁”内核,应安装<code>kernel-devel</code> 包,这个包能够匹配正在编译的Lustre版本支持的内核。想了解更多与包含Lustre的内核列表,请参考源代码发行版(<code>lustre-release/lustre/ChangeLog</code>)中的Lustre <code>ChangeLog</code> 文件。<code>ChangeLog</code> 文件包含所有Lustre版本的历史记录。
  
For LDISKFS patchless kernels, also download and install the kernel source code package that matches the target kernel.
  
===== RHEL / CentOS 7 Kernel Development Packages =====
For RHEL / CentOS 7, use <code>yum</code> to install the set of kernel development packages required by Lustre; for example, version <code>3.10.0-514.16.1.el7</code> of the RHEL / CentOS 7.3 <code>kernel-devel</code> RPM.
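The install command was elided from this revision view; for the kernel version named above it is presumably of the form:

```shell
sudo yum install kernel-devel-3.10.0-514.16.1.el7
```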
  
 
  
For self-compiled kernel packages that were created using the process documented earlier in this guide, install the matching <code>kernel-devel</code> RPM from that build output instead.
  
 
===== SLES 12 SP2 =====
  
'''Note:''' The ZFS on Linux project does not appear to provide a ZFS binary distribution for SLES.
  
  
==== Optional: Third-party Drivers ====

If there are third-party InfiniBand drivers, they must be installed as well.
Revision as of 20:28, 8 September 2019

== Introduction ==

Lustre is an open source software project, developed by a community of software engineers across the world. The project is maintained by its developers and is supported by infrastructure that provides continual integration, build and test of patches. These patches add new features, update existing functionality or fix bugs.

The Lustre project's maintainers issue periodic releases of the software, extensively and comprehensively tested and qualified for general use. The releases include pre-built binary software packages for supported Linux-based operating system distributions. Many users of Lustre are content to rely on the binary builds, and in general it is recommended that the binary distributions from the Lustre project are used. Pre-built binary packages are available for download here:

https://wiki.whamcloud.com/display/PUB/Lustre+Releases

There are, of course, times when it is advantageous to compile the Lustre software directly from source code: for example, to apply a hot-fix for a recently uncovered issue, to test a new feature in development, or to allow Lustre to take advantage of a third-party device driver (vendor-supplied network drivers, for example).

== Building Lustre - TLDR Guide ==

Building just the Lustre packages against the currently installed kernel is a relatively straightforward process. You need the <code>kernel-devel</code> package that exactly matches the installed kernel, along with basic build tools such as <code>rpmbuild</code>, <code>gcc</code> and <code>autoconf</code>. <code>rpmbuild</code> will report an error if a required tool is missing.

$ git clone git://git.whamcloud.com/fs/lustre-release.git
$ cd lustre-release
$ git checkout 2.12.0
$ sh autogen.sh
$ ./configure
$ make rpms

On Ubuntu or Debian DPKG-based systems, use make debs instead of make rpms for the last step. These steps build the tools and client RPMs, which is all most people need. If the configure script can detect kernel sources matching the installed kernel with the matching ldiskfs patch series, or an installed ZFS development package, it will attempt to build the server packages as well; otherwise the server build is skipped.

The rest of this document goes into the details needed to optionally patch and build your own kernel (no longer required for ldiskfs or ZFS), and to handle less common scenarios such as OFED, or building against a kernel other than the one that is installed.

== Caveats ==

This document was originally written as a guide for creating Lustre packages for Red Hat Enterprise Linux (RHEL) and CentOS.

It also includes preliminary material on compiling Lustre for SUSE Linux Enterprise Server (SLES) version 12 service pack 2 (SLES 12 SP2), and demonstrates the process for ZFS-based SLES servers and clients. The process for compiling Lustre with OFED or for LDISKFS on SLES is not yet covered.

Compilation instructions for other operating system distributions will be added over time.

Note: SUSE Linux flags self-compiled kernel modules as "unsupported" by the operating system. By default, SLES will refuse to load kernel modules that do not have the supported flag set. The following is an example of the error that is returned when attempting to load an unsupported kernel module:

sl12sp2-b:~ # modprobe zfs
modprobe: ERROR: module 'zavl' is unsupported
modprobe: ERROR: Use --allow-unsupported or set allow_unsupported_modules 1 in
modprobe: ERROR: /etc/modprobe.d/10-unsupported-modules.conf
modprobe: ERROR: could not insert 'zfs': Operation not permitted
sl12sp2-b:~ # vi /etc/modprobe.d/10-unsupported-modules.conf 

To allow self-compiled kernel modules to be loaded on the SLES operating system, add the following entry to /etc/modprobe.d/10-unsupported-modules.conf:

allow_unsupported_modules 1

Refer to the SUSE documentation for more information.

== Planning ==

Because Lustre is a network-oriented file system that runs as a set of modules in the Linux kernel, it depends on other kernel modules, including device drivers. Enabling the Lustre kernel modules to work with third-party device drivers that are not distributed with the operating system is one of the most common reasons for creating a new build from source. For example, the OFED distributions from OFA and OFA partners, which provide drivers for InfiniBand and RoCE network fabrics, are probably the most common reason for recompiling Lustre.

When creating Lustre packages from source, there are several options to choose from, each of which has an effect on the build process.

On Lustre servers, a block storage file system must be chosen for storing data. Lustre file system data is held in block storage file systems distributed across a set of storage servers. The back-end block storage is abstracted by an application programming interface called the Object Storage Device (OSD), which enables Lustre to use different back-end file systems for persistence.

Currently, there is a choice between the LDISKFS (EXT4-based) OSD and the ZFS OSD, and a Lustre build must support at least one of them.

In addition, the user must decide which drivers Lustre will use for networking. The Linux kernel has built-in support for Ethernet and InfiniBand, but system vendors often supply their own device drivers and tools, in which case Lustre must be recompiled so that LNet, Lustre's networking stack, is linked against those drivers.

Before compiling, read through this document and decide on the options that the Lustre build requires. The document covers the following processes:

  1. Lustre with LDISKFS
  2. Lustre with ZFS
  3. Lustre with the following drivers:
    1. In-kernel drivers
    2. OFA OFED
    3. Mellanox OFED
    4. Intel Fabrics
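These choices surface later as options to Lustre's configure script. The sketch below is illustrative only: the kernel and source-tree paths are assumptions, and the option names should be verified against ./configure --help in the checked-out tree.

```shell
# Example: a server build against a specific kernel tree with ZFS support
./configure --enable-server \
            --with-linux=/usr/src/kernels/3.10.0-514.16.1.el7.x86_64 \
            --with-spl=$HOME/spl \
            --with-zfs=$HOME/zfs
```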

== Setting Up the Build Environment ==

Compiling the Lustre software requires a computer with a comprehensive set of software development tools installed. It is recommended that a dedicated build machine, separate from the intended installation targets, is used to manage the build process. The build machine can be a dedicated server or a virtual machine, installed with an operating system release that matches the target installation.

The build machine should meet the following minimum specification:

  • A minimum of 32GB of storage, to accommodate all of the source code and software development tools
  • A minimum of 2GB of RAM (for virtual machines, more is better)
  • A network interface with access to externally hosted software repositories
  • A supported Linux operating system distribution. Refer to the Lustre source code ChangeLog for details of the OS distributions known to work with Lustre
  • Access to the OS packages required to compile the software described in this document. These are normally available from online repositories or mirrors.

This document has been written on the basis of the Intel-compatible 64-bit (x86_64) processor architecture, which represents the overwhelming majority of deployed processor architectures running Lustre file systems today.

In addition to the normal requirements common to open source projects (namely compilers and development library dependencies), Lustre depends on other projects that may also need to be built from source. This means that at certain stages of the process for creating the Lustre packages, other packages need to be compiled and installed on the build server. In general, once a build server has been established with the standard software development packages, Lustre itself can be created without superuser privileges, but projects such as ZFS do not support this way of working.

Nevertheless, every effort has been made to minimise the requirement for superuser privileges during the build process. For RHEL and CentOS users, the process described here also covers the use of a project called Mock, which creates a chroot jail used for creating packages.

Details of the Mock project can be found on GitHub:

https://github.com/rpm-software-management/mock

Using Mock is optional. It involves some trade-offs in the build process and is applied here in a somewhat unorthodox way compared with its conventional usage.


=== Create a User to Manage the Builds ===

For the most part, superuser privileges are not required to create the packages, although the user does need to be able to install the software development tools, and some third-party software distributions expect their packages to be installed on the build host. A regular account with some additional privileges, granted for example through sudo, is recommended, so that packages can be installed as the process requires.

=== RHEL and CentOS 7: Install the Software Development Tools ===

There are two options for managing the build environment used to create the Lustre packages: use Mock to create an isolated chroot environment, or work directly with the build server's operating system. Either option can be used; both are described in detail in the following sections.

==== Create a Mock Configuration for Lustre ====

Mock provides a convenient way to isolate complex package builds without affecting the configuration of the host operating platform. While optional, it is very useful, especially when trying out builds or working on several projects. The software is distributed with RHEL, CentOS and Fedora. Developers normally use Mock to test RPM builds starting from an SRPM package, and the environment is general enough to be used as a development environment as well.

To install Mock:

sudo yum -y install mock

Optionally, also install Git, so that repositories can be cloned from outside the Mock chroot (this simplifies maintenance of the chroot environment):

sudo yum -y install git

Add any users that will be running Mock environments to the mock group:

sudo useradd -m <username>
sudo usermod -a -G mock[,wheel] <username>

The wheel group is optional and allows the user to run commands with privileges elevated via sudo. Use it with caution, as it can weaken the security of the build host.

The following example creates the user build:

sudo useradd -m build
sudo usermod -a -G mock build

Once the software is installed, create a configuration suitable for the build target. Mock configurations are recorded in files under /etc/mock. The default configuration is called default.cfg and is normally a symbolic link to one of the files in that directory. To use the system default configuration for RHEL or CentOS, run the following command:

ln -snf /etc/mock/centos-7-x86_64.cfg /etc/mock/default.cfg

These configuration files describe the YUM repositories and the set of packages used to populate the chroot environment when it is instantiated: the packages are downloaded from the YUM repositories and installed into the chroot automatically. The configuration can be customised to suit a user's requirements. Refer to mock(1) for more information.

To create a new configuration tailored to the compilation requirements of Lustre, including those for compiling ZFS and OFED, run the following commands (superuser privileges are required):

# Create a copy of the default CentOS 7 x86_64 Mock template and add the source repos
sr=`cat /etc/yum.repos.d/CentOS-Sources.repo` \
awk '/^"""$/{print ENVIRON["sr"]; printf "\n%s\n",$0;i=1}i==0{print}i==1{i=0}' \
/etc/mock/centos-7-x86_64.cfg > /etc/mock/lustre-c7-x86_64.cfg
 
# Change the config name. Populate the Mock chroot with prerequisite packages.
sed -i -e 's/\(config_opts\['\''root'\''\]\).*/\1 = '\''lustre-c7-x86_64'\''/' \
-e 's/\(config_opts\['\''chroot_setup_cmd'\''\]\).*/\1 = '\''install bash bc openssl gettext net-tools hostname bzip2 coreutils cpio diffutils system-release findutils gawk gcc gcc-c++ grep gzip info make patch redhat-rpm-config rpm-build yum-utils sed shadow-utils tar unzip util-linux wget which xz automake git xmlto asciidoc elfutils-libelf-devel zlib-devel binutils-devel newt-devel python-devel hmaccalc perl-ExtUtils-Embed patchutils pesign elfutils-devel bison audit-libs-devel numactl-devel pciutils-devel ncurses-devel libtool libselinux-devel flex tcl tcl-devel tk tk-devel expect glib2 glib2-devel libuuid-devel libattr-devel libblkid-devel systemd-devel device-mapper-devel parted lsscsi ksh libyaml-devel krb5-devel keyutils-libs-devel net-snmp-devel'\''/' \
/etc/mock/lustre-c7-x86_64.cfg
 
# Modify the %_rpmdir RPM macro to prevent build failures.
echo "config_opts['macros']['%_rpmdir'] = \"%{_topdir}/RPMS/%{_arch}\"" >> /etc/mock/lustre-c7-x86_64.cfg
 
# Make the new configuration the default
ln -snf /etc/mock/lustre-c7-x86_64.cfg /etc/mock/default.cfg

This configuration ensures that all of the Lustre build dependencies are downloaded and installed automatically each time a new Mock environment is created.

Note: some of the build scripts and Makefiles used by Lustre and other projects assume that there will always be an architecture subdirectory (e.g. x86_64) in the RPM build directory tree. This is not always the case, and Mock does not create subdirectories based on the target architecture. To work around this, a custom RPM macro has been added to the Mock configuration above. If that does not take effect, the same macro can be added manually after the Mock chroot environment has been created, by running the following command:

mock --shell "echo '%_rpmdir %{_topdir}/RPMS/%{_arch}' >>\$HOME/.rpmmacros"

An example of the SPL build error that results is shown below:

cp: cannot stat ‘/tmp/spl-build-root-uDSQ5Bay/RPMS/*/*’: No such file or directory
make[1]: *** [rpm-common] Error 1
make[1]: Leaving directory `/builddir/spl'
make: *** [rpm-utils] Error 2

Once the Mock configuration has been created, log in as the user that will manage the builds and run the following command to prepare the chroot environment:

mock [-r <config>] --init

The -r flag specifies the configuration to use when the default configuration is not appropriate. The argument is either the name of one of the files in the /etc/mock directory, minus the .cfg suffix, or the path to a configuration file.

To work interactively in the Mock environment, start a shell:

mock [-r <config>] --shell

Note: some mock commands will attempt to clean the chroot environment before executing. Any files that Mock regards as temporary, which is to say any file not supplied by Mock itself, may be scrubbed. To prevent this, use the -n flag. The --shell command does not run the clean-up, so -n is not required in that case.

==== Install the Development Software for a Normal Build Process ====

Skip this step if using Mock to create the Lustre packages.

Use the following command to install the prerequisite software tools on the build server:

sudo yum install asciidoc audit-libs-devel automake bc binutils-devel \
bison device-mapper-devel elfutils-devel elfutils-libelf-devel expect \
flex gcc gcc-c++ git glib2 glib2-devel hmaccalc keyutils-libs-devel \
krb5-devel ksh libattr-devel libblkid-devel libselinux-devel libtool \
libuuid-devel libyaml-devel lsscsi make ncurses-devel net-snmp-devel \
net-tools newt-devel numactl-devel parted patchutils pciutils-devel \
perl-ExtUtils-Embed pesign python-devel redhat-rpm-config rpm-build \
systemd-devel tcl tcl-devel tk tk-devel wget xmlto yum-utils zlib-devel

The packages in the list above are sufficient to build Lustre, ZFS, and the third-party drivers from OFED.

==== CentOS: Use Vault to Restrict YUM to an Older OS Release ====

The convention for RHEL and CentOS is to always retrieve the latest updates for a given major OS release when installing or upgrading software. The YUM repository definitions deliberately refer to the most recent upstream repositories, minimising the risk of users downloading out-of-date packages. However, this behaviour is not always desirable. The installation and upgrade policies of a given organisation may restrict the operating platform, and hence the applications (including the kernel), to specific package versions. A site may have frozen its OS release at a particular version or range of updates, or may be constrained by the application software running on the infrastructure.

This in turn affects the environment for building packages, including Lustre. If the run-time environment is tied to a specific OS release, the build environment must be similarly restricted.

To impose such a restriction on CentOS, the CentOS Vault repository (http://vault.centos.org) can be used. Vault maintains an online archive of every package and update published for each CentOS release. Every CentOS installation includes a package called centos-release, which tracks the OS release and provides the YUM repository definitions. The package includes Vault repository definitions for the CentOS releases prior to the currently installed version. For example, the centos-release package for CentOS 6.9 includes Vault repository definitions for CentOS 6.0 - 6.8.

This can be used to constrain the build server environment so that it matches the intended target environment. The simplest way is to download the latest centos-release RPM, extract the CentOS Vault definition and overwrite the original Vault definition on the platform, then disable the default repositories in YUM and enable only the Vault repositories for the target OS release. For example:

# Download and install an updated Vault definition:
mkdir $HOME/tmp
cd $HOME/tmp
yumdownloader centos-release
rpm2cpio centos-release*.rpm | cpio -idm
cp etc/yum.repos.d/CentOS-Vault.repo /etc/yum.repos.d/.

# Configure YUM to use only the repositories for the current OS:
yum-config-manager --disable \*

# Get the current OS major and minor version
ver=`sed 's/[^0-9.]*//g' /etc/centos-release`
# Enable the Vault repos that match the OS version
yum-config-manager --enable C$ver-base,C$ver-extras,C$ver-updates
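The sed expression above simply strips every character other than digits and dots from the release string. For example, with an illustrative CentOS 7.3 release string:

```shell
# "CentOS Linux release 7.3.1611 (Core)" reduces to "7.3.1611"
echo "CentOS Linux release 7.3.1611 (Core)" | sed 's/[^0-9.]*//g'
```

The resulting value plugs into the Vault repository names, e.g. C7.3.1611-base.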

Note: the centos-release package itself is not updated, since doing so could cause applications and software build processes that depend on correctly identifying the OS release to fail. The intent of the method above is to update only the YUM configuration, while preserving the OS version and release of the build environment.

=== SLES 12: Install the Software Development Tools ===

SLES uses an RPM-based package management system, although there are some notable differences from Red Hat Enterprise Linux. In addition to the main subscription, the SUSE Linux Enterprise Software Development Kit (SLE-SDK12-SP2) add-on must be enabled in order to install the development (-devel) packages.

Use the following command to install the prerequisite software on a SLES 12 SP2 build server:

sudo zypper install asciidoc automake bc binutils-devel bison bison \
device-mapper-devel elfutils libelf-devel flex gcc gcc-c++ git \
glib2-tools glib2-devel hmaccalc  libattr-devel libblkid-devel \
libselinux-devel libtool libuuid-devel lsscsi make mksh ncurses-devel \
net-tools numactl parted patchutils pciutils-devel perl pesign expect \
python-devel rpm-build sysstat systemd-devel tcl tcl-devel tk tk-devel wget \
xmlto zlib-devel libyaml-devel krb5-devel keyutils-devel net-snmp-devel

In some cases, zypper may flag a dependency problem with rpm-build. For example:

Problem: rpm-build-4.11.2-15.1.x86_64 requires gettext-tools, but this requirement cannot be provided
  uninstallable providers: gettext-tools-0.19.2-1.103.x86_64[SUSE_Linux_Enterprise_Server_12_SP2_x86_64:SLES12-SP2-Pool]
 Solution 1: Following actions will be done:
  do not install rpm-build-4.11.2-15.1.x86_64
  do not install sysstat-10.2.1-9.2.x86_64
 Solution 2: deinstallation of gettext-runtime-mini-0.19.2-1.103.x86_64
 Solution 3: break rpm-build-4.11.2-15.1.x86_64 by ignoring some of its dependencies

Choose from above solutions by number or cancel [1/2/3/c] (c):

If this occurs, choose Solution 2: deinstallation of gettext-runtime-mini-<version> to resolve the conflict.

== Obtaining the Lustre Source Code ==

The following information applies to the Lustre community release. To obtain the source code for other distributions of Lustre, such as the Intel Enterprise Edition of Lustre, refer to the vendor's documentation.

The Lustre source code is maintained in a Git repository. To obtain a clone, run the following commands:

# Mock users: run "mock --shell" first
cd $HOME
git clone git://git.whamcloud.com/fs/lustre-release.git

When the repository has been cloned, change into the clone directory and review the branches:

cd $HOME/lustre-release
git branch -av

For example:

[build@ctb-el73 lustre-release]$ git branch -av
* master                    fc7c513 LU-9306 tests: more debug info for hsm test_24d
  remotes/origin/HEAD       -> origin/master
  :
  remotes/origin/b2_10      1706513907 LU-7631 tests: wait_osts_up waits for MDS precreates
  remotes/origin/b2_11      1779751bcf New release 2.11
  :
  remotes/origin/b2_5       35bb8577c8 LU-0000 build: update build version to 2.5.3.90
  remotes/origin/b2_6       73ea776053 New tag 2.6.0-RC2
  remotes/origin/b2_7       7eef5727a9 New tag 2.7.0-RC2
  remotes/origin/b2_8       ea79df5af4 New tag 2.8.0-RC5
  remotes/origin/b2_9       e050996742 New Lustre release 2.9.0
  :
  :


The master branch is the main development branch and will form the basis of the next feature release of Lustre. Branches beginning with the letter "b" represent the current and previous Lustre release branches, named after the release version number; thus, b2_10 is the branch for the Lustre 2.10 releases. Other branches are used for long-running development projects, such as Progressive File Layouts (PFL) and LNet Multi-Rail.

The tags can be reviewed as follows:

git tag

There are many more tags than branches; each tag marks a point in the development of the software. Lustre version numbers have four fields which, read from left to right, represent the major, minor, maintenance and hot-fix release numbers. For example, version 2.10.0.0 is interpreted as follows:

  • Major feature release number
  • Minor feature release number
  • Maintenance release number
  • Hot-fix release number

A maintenance release number of 0 (zero) indicates that the release is complete and qualified for general use (also referred to as "generally available", or "GA"). Tags with a maintenance release number less than or equal to 10 represent maintenance releases (bug fixes or minor OS support updates). Tags with a maintenance release number of 50 or greater are pre-release development tags, not intended for general use.
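The four fields can be split mechanically, which is convenient when scripting version checks; a small illustration:

```shell
# Split a Lustre version string into its four fields
echo "2.10.0.0" | awk -F. '{printf "major=%s minor=%s maintenance=%s hotfix=%s\n", $1, $2, $3, $4}'
```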

Tags in the lustre-release repository appear in two different formats:

  • A dot-separated numeric version number (e.g. 2.10.0)
  • A label beginning with a lower-case "v" followed by the version number, with the fields separated by underscores (e.g. v2_10_0_0)

The different tag formats for a given version number are equivalent in meaning and point to the same commit in the Git repository. That is, the tags v2_10_0 and 2.10.0 refer to the same revision.

For example, the following tags represent generally available releases in the Lustre 2.10 series:

2.10.0
v2_10_0
v2_10_0_0
2.10.5
v2_10_5
2.10.6
v2_10_6

The following list of tags all point to the same pre-release development version, with a maintenance release number of 50 or higher:

2.10.56
v2_10_56
v2_10_56_0
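Because the two tag formats encode the same version, converting between them is mechanical; a small shell sketch (the helper function name is ours, not part of Lustre):

```shell
# Convert a "v"-prefixed underscore tag to the dotted form: v2_10_0_0 -> 2.10.0.0
tag_to_version() {
    echo "$1" | sed -e 's/^v//' -e 's/_/./g'
}

tag_to_version v2_10_0_0    # prints 2.10.0.0
tag_to_version v2_10_56     # prints 2.10.56
```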


Tags ending in "RC" followed by a number are release candidates: pre-production builds made for testing in advance of the final, generally available release. If a release candidate is judged to be sufficiently stable for general use, it is promoted to the GA release. There may be one or more release candidates before the GA release of a given version of Lustre.

Use Git to check out the version of Lustre that will be built. For example, to check out Lustre version 2.10.0:

git checkout 2.10.0

or:

git checkout b2_10

Prepare the tree for the build:

sh autogen.sh

The Lustre source code is also available in package format, distributed alongside the binaries for each release. The latest software releases are available from the following URL:

https://wiki.whamcloud.com/display/PUB/Lustre+Releases

The page has links for all releases. For example, the source RPMs for the latest Lustre release for RHEL or CentOS 7 can be downloaded from:

https://downloads.whamcloud.com/public/lustre/latest-release/el7/server/SRPMS/

Note: the examples used in the remainder of this document are based on Lustre version 2.10.0, but the process applies equally to all recently released versions of Lustre.

== LDISKFS and Patching the Linux Kernel ==

=== Introduction ===

If the Lustre servers will use the LDISKFS object storage device (OSD), which is derived from EXT4, there are two options available when compiling the Lustre and LDISKFS kernel modules: for Lustre 2.10 and newer, either patch the Linux kernel, or use the Linux kernel without modification (also referred to as a "patchless" kernel). Patchless kernel support for the LDISKFS OSD is new in 2017 and still considered experimental, but it is an option worth evaluating, since it greatly simplifies maintenance and support. Choosing to run patchless servers also means being able to exploit the KABI compatibility guarantee of RHEL and CentOS, along with weak-updates kernel modules.

Historically, the Linux kernel used by Lustre servers required additional patches that were not supported by the upstream kernel or by the operating system distributor. The Lustre developer community has worked continuously to reduce the dependency on these patches, and today, at least for RHEL and CentOS servers, the delta is small enough that users can evaluate running LDISKFS Lustre servers without any patches to the baseline kernel.

Warnings:

  • The project quota feature, new in Lustre 2.10, requires a set of patches that has not yet been accepted into the mainstream Linux distributions. If project quotas are required, the kernel must be patched.
  • Although the risk is small, running without the patches may have a negative impact on performance.
  • Lustre servers running without the LDISKFS server kernel patches have less test coverage.

The choice is therefore not entirely straightforward: patched kernels deviate from the packages supplied with the OS and carry a maintenance overhead that must be accounted for, but they have the longer history and the broadest test coverage, and some features are currently only available with a patched kernel. The patchless kernel option is new, and as a relative unknown it carries some risk, but it offers a simpler maintenance path.

Note: running "patchless" does not mean that the Lustre OSDs are EXT4 devices. The OSDs will still be LDISKFS, which is an enhanced derivative of EXT4. Whether or not the kernel packages are patched, Lustre still requires access to the kernel source in order to create the LDISKFS kernel modules.

Note: Lustre does not require a patched kernel when using the ZFS OSD. Lustre installations that use ZFS exclusively do not need a custom kernel.

Note: Lustre clients do not require a patched kernel.

To create a patched kernel, read through the rest of this section and follow the instructions. Otherwise, this section can be skipped.

=== Applying the Lustre Kernel Patches ===

The remainder of this section describes the process for modifying the operating system kernel with the patches supplied in the Lustre distribution. Using a "patchless" kernel for LDISKFS Lustre servers is covered in the section on creating the Lustre packages.

For the most part, the patches provide performance enhancements and additional hooks that are useful for testing. In addition, project quota support requires a set of patches that must be applied to the kernel; if project quota support is required, these patches are essential.

The Lustre community continues to work on reducing the dependency on patches for maintaining LDISKFS, in the hope that at some point in the future they become entirely unnecessary.

=== Obtain the Kernel Source ===

If the target build is based on LDISKFS storage targets, download the kernel source appropriate to the OS release. Refer to the ChangeLog in the Lustre source code for the list of kernels for each OS distribution that are known to work with Lustre. The ChangeLog maintains a historical record for all Lustre releases.

The following excerpt shows the kernel support for Lustre version 2.10.0:

TBD Intel Corporation
       * version 2.10.0
       * See https://wiki.whamcloud.com/display/PUB/Lustre+Support+Matrix
         for currently supported client and server kernel versions.
       * Server known to build on patched kernels:
         2.6.32-431.29.2.el6 (RHEL6.5)
         2.6.32-504.30.3.el6 (RHEL6.6)
         2.6.32-573.26.1.el6 (RHEL6.7)
         2.6.32-642.15.1.el6 (RHEL6.8)
         2.6.32-696.el6      (RHEL6.9)
         3.10.0-514.16.1.el7 (RHEL7.3)
         3.0.101-0.47.71     (SLES11 SP3)
         3.0.101-97          (SLES11 SP4)
         3.12.69-60.64.35    (SLES12 SP1)
         4.4.49-92.14        (SLES12 SP2)
         vanilla linux 4.6.7 (ZFS only)
       * Client known to build on unpatched kernels:
         2.6.32-431.29.2.el6 (RHEL6.5)
         2.6.32-504.30.3.el6 (RHEL6.6)
         2.6.32-573.26.1.el6 (RHEL6.7)
         2.6.32-642.15.1.el6 (RHEL6.8)
         2.6.32-696.el6      (RHEL6.9)
         3.10.0-514.16.1.el7 (RHEL7.3)
         3.0.101-0.47.71     (SLES11 SP3)
         3.0.101-97          (SLES11 SP4)
         3.12.69-60.64.35    (SLES12 SP1)
         4.4.49-92.14        (SLES12 SP2)
         vanilla linux 4.6.7

From the table above, Lustre version 2.10.0 supports version 3.10.0-514.16.1.el7 of the RHEL / CentOS 7.3 kernel. Use YUM to download a copy of the source RPM. For example:

cd $HOME
yumdownloader --source  kernel-3.10.0-514.16.1.el7

The following shell script fragment can be used to identify the kernel version for a given OS and Lustre release, and then download the corresponding kernel source:

cd $HOME
kernelversion=`os=RHEL7.3 lu=2.10.0 \
awk '$0 ~ "* version "ENVIRON["lu"]{i=1; next} \
$0 ~ "* Server known" && i {j=1; next} \
(/\*/ && j) || (/\* version/ && i) {exit} \
i && j && $0 ~ ENVIRON["os"]{print $1}' $HOME/lustre-release/lustre/ChangeLog`
[ -n "$kernelversion" ] && yumdownloader --source  kernel-$kernelversion || echo "ERROR: kernel version not found."

Set the os and lu variables at the start of the script to the required OS release and Lustre version, respectively.
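To see what the awk program does, the same logic can be exercised against a miniature inline ChangeLog (the leading asterisk is escaped here, which matches it literally and is more portable across awk implementations):

```shell
# Build a miniature ChangeLog and extract the RHEL7.3 server kernel version
cat > /tmp/ChangeLog.sample <<'EOF'
TBD Intel Corporation
       * version 2.10.0
       * Server known to build on patched kernels:
         3.10.0-514.16.1.el7 (RHEL7.3)
       * Client known to build on unpatched kernels:
EOF

os=RHEL7.3 lu=2.10.0 awk '
$0 ~ "\\* version "ENVIRON["lu"]    {i=1; next}
$0 ~ "\\* Server known" && i        {j=1; next}
(/\*/ && j) || (/\* version/ && i)  {exit}
i && j && $0 ~ ENVIRON["os"]        {print $1}' /tmp/ChangeLog.sample
```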

If Mock is being used to build Lustre, the source RPM can be downloaded from outside the Mock shell and then copied in as follows:

mock --copyin <package> /builddir/.

For example:

mock --copyin kernel-3.10.0-514.16.1.el7.src.rpm /builddir/.

An alternative solution for Mock is to enable the CentOS-Sources repository configuration and then run the yumdownloader command directly from the Mock shell. A simple but crude way to add the source repositories to Mock's YUM configuration is to run the following from the Mock shell:

cat /etc/yum.repos.d/CentOS-Sources.repo >> /etc/yum/yum.conf

However, this will be overwritten the next time the Mock shell is invoked. The configuration can be updated permanently by appending the CentOS source repositories to the appropriate configuration file in the /etc/mock directory of the build host, which is what was done when the Mock configuration was prepared earlier.

If creating a build for an older kernel version, that version may no longer be available from the YUM repositories in use. CentOS maintains an archive of all previously released packages in a set of YUM repositories named Vault, located at:

http://vault.centos.org

Vault includes the source RPMs as well as the binaries. Unfortunately, CentOS does not include a YUM configuration for the archived source repositories. YUM can be bypassed by going directly to the Vault site and navigating the directory structure to acquire the required files. For example, the source RPMs for the CentOS 7.2 package updates can be found here:

http://vault.centos.org/7.2.1511/updates/Source/

=== Prepare the Kernel Source ===

Install the kernel source RPM that was downloaded in the previous step. This will create a standard RPM build directory structure and extract the contents of the source RPM:

cd $HOME
rpm -ivh kernel-[0-9].*.src.rpm

Determine the set of patches that need to be applied to the kernel, based on the operating system distribution. The file lustre-release/lustre/kernel_patches/which_patch maps the kernel version to the appropriate patch series. For example, for RHEL / CentOS 7.3 on Lustre 2.10.0, the file contains:

3.10-rhel7.series       3.10.0-514.16.1.el7 (RHEL 7.3)

Review the list of patches in the series, e.g.:

[build@ctb-el7 ~]$ cat $HOME/lustre-release/lustre/kernel_patches/series/3.10-rhel7.series
raid5-mmp-unplug-dev-3.7.patch
dev_read_only-3.7.patch
blkdev_tunables-3.8.patch
jbd2-fix-j_list_lock-unlock-3.10-rhel7.patch
vfs-project-quotas-rhel7.patch

Note: one of the new features introduced with Lustre 2.10 is support for project quotas. This is a powerful administration feature that allows additional quotas to be applied to the file system based on a new identifier called a project ID. Implementing project quotas for LDISKFS means making a change to the EXT4 code in the kernel. Unfortunately, this particular change breaks the kernel ABI (KABI) compatibility guarantee that is a feature of RHEL kernels. If this is a concern, remove the patch called vfs-project-quotas-rhel7.patch from the patch series file. This action will effectively disable project quota support in Lustre LDISKFS builds.

When the correct patch series has been identified, create a patch file containing all of the kernel patches required by Lustre's LDISKFS OSD:

_TOPDIR=`rpm --eval %{_topdir}`
for i in `cat $HOME/lustre-release/lustre/kernel_patches/series/3.10-rhel7.series`; do
cat $HOME/lustre-release/lustre/kernel_patches/patches/$i
done > $_TOPDIR/SOURCES/patch-lustre.patch

Apply the following changes to the kernel RPM spec file:

_TOPDIR=`rpm --eval %{_topdir}`
sed -i.inst -e '/find $RPM_BUILD_ROOT\/lib\/modules\/$KernelVer/a\
    cp -a fs/ext3/* $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/fs/ext3 \
    cp -a fs/ext4/* $RPM_BUILD_ROOT/lib/modules/$KernelVer/build/fs/ext4' \
-e '/^# empty final patch to facilitate testing of kernel patches/i\
Patch99995: patch-lustre.patch' \
-e '/^ApplyOptionalPatch linux-kernel-test.patch/i\
ApplyOptionalPatch patch-lustre.patch' \
-e '/^%define listnewconfig_fail 1/s/1/0/' \
$_TOPDIR/SPECS/kernel.spec

These modifications ensure that the patches that Lustre requires for the LDISKFS OSD are applied to the kernel during compilation.

The following changes to the kernel configuration specification are also strongly recommended:

CONFIG_FUSION_MAX_SGE=256
CONFIG_SCSI_MAX_SG_SEGMENTS=128

To apply these changes, run the following commands from the command shell:

_TOPDIR=`rpm --eval %{_topdir}`
sed -i.inst -e 's/\(CONFIG_FUSION_MAX_SGE=\).*/\1256/' \
-e 's/\(CONFIG_SCSI_MAX_SG_SEGMENTS\)/\1128/' \
$_TOPDIR/SOURCES/kernel-3.10.0-x86_64.config
! `grep -q CONFIG_SCSI_MAX_SG_SEGMENTS $_TOPDIR/SOURCES/kernel-3.10.0-x86_64.config.inst` && \
echo "CONFIG_SCSI_MAX_SG_SEGMENTS=128" >> $_TOPDIR/SOURCES/kernel-3.10.0-x86_64.config

Alternatively, there is a kernel.config file distributed with the Lustre source code that can be used in place of the standard file distributed with the kernel. If using a file from the Lustre source, make sure that the first line of the file is as follows:

# x86_64

The following script demonstrates the method for a RHEL / CentOS 7.3 kernel configuration:

_TOPDIR=`rpm --eval %{_topdir}`
echo '# x86_64' > $_TOPDIR/SOURCES/kernel-3.10.0-x86_64.config
cat $HOME/lustre-release/lustre/kernel_patches/kernel_configs/kernel-3.10.0-3.10-rhel7-x86_64.config >> $_TOPDIR/SOURCES/kernel-3.10.0-x86_64.config

=== Create the Kernel RPM Packages ===

Use the following command to build the patched Linux kernel:

_TOPDIR=`rpm --eval %{_topdir}`
rpmbuild -ba --with firmware --with baseonly \
[--without debuginfo] \
[--without kabichk] \
--define "buildid _lustre" \
--target x86_64 \
$_TOPDIR/SPECS/kernel.spec

Note: the "--with baseonly" flag means that only the essential kernel packages will be created and the "debug" and "kdump" options will be excluded from the build. If the project quotas patch is used, the KABI verification must also be disabled using the "--without kabichk" flag. 注意:"--with baseonly"标志意味着只创建基本内核包,“debug”和“kdump”选项将从构建中排除。如果使用项目配额补丁,还必须使用“-with KABICHK”标志禁用KABI验证。

=== Save the Kernel RPMs ===

Copy the resulting kernel RPM packages into a directory tree for later distribution:

_TOPDIR=`rpm --eval %{_topdir}`
mkdir -p $HOME/releases/lustre-kernel
mv $_TOPDIR/RPMS/*/{kernel-*,python-perf-*,perf-*} $HOME/releases/lustre-kernel

== ZFS ==

Lustre servers that will be using ZFS-based storage targets require packages from the ZFS on Linux project (http://zfsonlinux.org). The Linux port of ZFS is developed in cooperation with the OpenZFS project and is a versatile and powerful alternative to EXT4 as a file system target for Lustre OSDs. The source code is hosted on GitHub:

https://github.com/zfsonlinux

Pre-compiled packages maintained by the ZFS on Linux project are available for download. For instructions on how to incorporate the ZFS on Linux binary distribution into one of the supported operating systems, refer to the "Getting Started" documentation:

https://github.com/zfsonlinux/zfs/wiki/Getting-Started

The remainder of this section describes how to create ZFS packages from source.

When compiling from the source code, there are three options for creating the ZFS on Linux packages:

  1. DKMS: packages are distributed as source code and compiled on the target against the installed kernel[s]. When an updated kernel is installed, DKMS-compatible modules are recompiled to work with the new kernel. The module rebuild is usually triggered automatically on system reboot, but can also be invoked directly from the command line.
  2. KMOD: kernel modules built for a specific kernel version and bundled into a binary package. These modules are not portable between kernel versions, so a change in kernel version requires that the kernel modules are recompiled and re-installed.
  3. KMOD with kernel application binary interface (KABI) compatibility, sometimes referred to as "weak-updates" support. KABI-compliant kernel modules exploit a feature available in certain operating system distributions, such as RHEL, that ensures ABI compatibility across kernel updates within the same family of releases. If a minor kernel update is installed, the KABI guarantee means that modules compiled against the older variant can be loaded unmodified by the new kernel, without recompilation from source.
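As an illustration of the DKMS option, the module rebuild can also be driven manually with the dkms tool; the commands below are a sketch, and the module name and version are examples rather than values taken from this document's build:

```shell
# Rebuild and install the ZFS modules for the running kernel via DKMS
dkms status                                    # list registered modules
sudo dkms build   -m zfs -v 0.7.0 -k "$(uname -r)"
sudo dkms install -m zfs -v 0.7.0 -k "$(uname -r)"
```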

The process for compiling ZFS and SPL is thoroughly documented on the ZFSonLinux GitHub site, but will be summarised here, as compiling ZFS has an implication on the Lustre build process. Each approach has its benefits and drawbacks.编译ZFS和SPL的过程在ZFSonLinux GitHub网站上有完整的记录,但这里将进行总结,因为编译ZFS对Lustre构建过程有一定的影响。每种方法都有其优缺点。

DKMS provides a straightforward packaging system and attempts to accommodate changes in the operating system by automatically rebuilding kernel modules, reducing manual overhead when updating OS kernels. DKMS packages are also generally easy to create and distribute.DKMS提供了一个简单的打包系统,并试图通过自动重建内核模块来适应操作系统的变化,减少更新操作系统内核时的手动开销。DKMS包通常也易于创建和分发。

The KMOD packages take more work to create, but are easier to install. However, when the kernel is updated, the modules may need to be recompiled. KABI-compliant kernel modules reduce this risk by providing ABI compatibility across minor updates, but only work for some distributions (currently RHEL and CentOS).KMOD包需要更多的工作来创建,但是更容易安装。然而,当内核更新时,模块可能需要重新编译。兼容KABI的内核模块通过在小更新之间提供ABI兼容性降低了这一风险,但只适用于某些发行版(目前是RHEL和中央操作系统)。

The premise of DKMS is simple: each time the OS kernel of a host is updated, DKMS will rebuild any out of tree kernel modules so that they can be loaded by the new kernel. This can be managed automatically on the next system boot, or can be triggered on demand. This does mean that the run-time environment of Lustre servers running ZFS DKMS modules is quite large, as it needs to include a compiler and other development libraries, but it also means that creating the packages for distribution is quick and simple.DKMS的前提很简单:每次主机的操作系统内核更新时,DKMS都会重建任何树外内核模块,以便新内核可以加载它们。这可以在下次系统启动时自动管理,也可以按需触发。这确实意味着运行ZFS DKMS模块的Lustre服务器的运行时环境相当大,因为它需要包含编译器和其他开发库,但也意味着创建用于分发的包既快速又简单。
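As an illustration of how DKMS knows what to rebuild, each DKMS-managed module ships a dkms.conf file describing how to build and install it against a given kernel. The fragment below is a hypothetical, simplified example for a module named "demo"; the actual dkms.conf files shipped with SPL, ZFS and Lustre are more elaborate:

```shell
# Hypothetical, simplified dkms.conf for an out-of-tree module "demo"
PACKAGE_NAME="demo"
PACKAGE_VERSION="1.0"
# Command DKMS runs to build the module against each kernel ($kernelver is set by DKMS)
MAKE[0]="make KVER=$kernelver"
CLEAN="make clean"
BUILT_MODULE_NAME[0]="demo"
DEST_MODULE_LOCATION[0]="/extra"
# Rebuild and install automatically when a new kernel is installed
AUTOINSTALL="yes"
```

With AUTOINSTALL enabled, the DKMS boot service rebuilds the module for the running kernel if no prebuilt copy exists, which is the behaviour described above.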

Unfortunately, even the simple approach has its idiosyncrasies. You cannot build the DKMS packages for distribution without also building at least the SPL development packages, since the ZFS build depends on SPL, and the source code is simply not sufficient by itself.不幸的是,即使是简单的方法也有其特殊性。如果不同时构建至少SPL开发包,就无法构建DKMS包进行分发,因为ZFS构建依赖于SPL,源代码本身是不够的。

There is also a cost associated with recompiling kernel modules from source that needs to be planned for. In order to be able to recompile the modules, DKMS packages require a full software development toolkit and dependencies to be installed on all servers. This does represent a significant overhead for servers, and is usually seen as undesirable for production environments, where there is often an emphasis placed on minimising the software footprint in order to streamline deployment and maintenance, and reduce the security attack surface. 从源代码中重新编译内核模块也需要花费一定的成本。为了能够重新编译模块,DKMS软件包需要在所有服务器上安装完整的软件开发工具包和依赖项。这对于服务器来说确实是一个很大的开销,通常被认为是生产环境所不希望的,在生产环境中,通常强调最小化软件占用空间,以便简化部署和维护,并减少安全攻击面。

Rebuilding packages also takes time, which will lengthen maintenance windows. And there is always some risk that rebuilding the modules will fail for a given kernel release, although this is rare. DKMS lowers the up-front distribution overhead, but moves some of the cost of maintenance directly onto the servers and the support organisations maintaining the data centre infrastructure.重建软件包也需要时间,这会延长维护时间。对于给定的内核版本,重建模块总是有失败的风险,尽管这种情况很少见。DKMS降低了前期分发开销,但将部分维护成本直接转移到维护数据中心基础架构的服务器和支持组织上。

When choosing DKMS, it is not only the ZFS and SPL modules that need to be recompiled, but also the Lustre modules. To support this, Lustre can also be distributed as a DKMS package.选择DKMS时,不仅需要重新编译ZFS和SPL模块,还需要重新编译Lustre模块。为了支持这一点,Lustre也可以作为DKMS包分发。

Note: The DKMS method was in part adopted in order to work-around licensing compatibility issues between the Linux Kernel project, licensed under GPL, and ZFS which is licensed under CDDL, with respect to the distribution of binaries. While both licenses are free open source licenses, there are restrictions on distribution of binaries created using a combination of software source code from projects with these different licenses. There is no restriction on the separate distribution of source code, however. The DKMS modules provide a convenient workaround that simplifies packaging and distribution of the ZFS source with Lustre and Linux kernels. There are differences of opinion in the open source community regarding packaging and distribution, and currently no consensus has been reached. 注:采用DKMS方法的部分原因是为了解决在二进制文件分发方面,根据GPL许可的Linux内核项目和根据CDDL许可的ZFS之间的许可兼容性问题。虽然这两个许可证都是免费的开源许可证,但是使用来自具有这些不同许可证的项目的软件源代码组合创建的二进制文件的分发受到限制。然而,对源代码的单独分发没有限制。DKMS模块提供了一个方便的解决方案,简化了Lustre和Linux内核的ZFS源代码的打包和分发。开源社区对打包和分发有不同的意见,目前还没有达成共识。

The vanilla KMOD build process is straightforward to execute and will generally work for any supported Linux distribution. The KABI variant of the KMOD build is very similar, with the restriction that it is only useful for distributions that support KABI compatibility. The KABI build also has some hard-coded directory paths in the supplied RPM spec files, which effectively mandates a dedicated build environment for creating packages. 普通的KMOD构建过程易于执行,通常适用于任何受支持的Linux发行版。KMOD构建的KABI变体与普通构建非常相似,但仅适用于支持KABI兼容性的发行版。此外,KABI构建在随附的RPM spec文件中包含一些硬编码的目录路径,这实际上要求使用专用的构建环境来创建软件包。

Obtain the ZFS Source Code

获取ZFS源代码

If the target build will be based on ZFS, then acquire the ZFS software sources from the ZFS on Linux project. ZFS comprises two projects: 如果目标构建基于ZFS,则从ZFS on Linux项目获取ZFS源代码。ZFS由两个项目组成:

  • SPL: Solaris portability layer. This is a shim that presents ZFS with a consistent interface and allows OpenZFS to be ported to multiple operating systems. SPL: Solaris可移植层。这是一个兼容层(shim),为ZFS提供一致的接口,使OpenZFS能够移植到多个操作系统。
  • ZFS: The OpenZFS file system implementation for Linux. ZFS: 面向Linux的OpenZFS文件系统实现。

Clone the SPL and ZFS repositories as follows: 按照以下方式克隆SPL和ZFS存储库:

# Mock users run "mock --shell" first
cd $HOME
git clone https://github.com/zfsonlinux/spl.git
git clone https://github.com/zfsonlinux/zfs.git

When the repositories have been cloned, change into the clone directory of each project and review the branches: 当存储库被克隆后,更改到每个项目的克隆目录,并查看分支:

cd $HOME/spl
git branch -av
 
cd $HOME/zfs
git branch -av

For example: 例如:

[build@ctb-el73 spl]$ cd $HOME/spl
[build@ctb-el73 spl]$ git branch -av
* master                           8f87971 Linux 4.12 compat: PF_FSTRANS was removed
  remotes/origin/HEAD              -> origin/master
  remotes/origin/master            8f87971 Linux 4.12 compat: PF_FSTRANS was removed
  remotes/origin/spl-0.6.3-stable  ce4c463 Tag spl-0.6.3-1.3
  remotes/origin/spl-0.6.4-release c8acde0 Tag spl-0.6.4.1
  remotes/origin/spl-0.6.5-release b5bed49 Prepare to release 0.6.5.9

The master branch in each project is the main development branch and will form the basis of the next release of SPL and ZFS, respectively. 每个项目中的master分支是主要的开发分支,并将分别构成SPL和ZFS下一版本的基础。

Review the tags as follows: 按照以下步骤检查标签:

git tag

Just like the Lustre project, there are many more tags than there are branches, although the naming convention is simpler. Tags have the format <name>-<version>. The following output lists some of the tags in the spl repository: 就像Lustre项目一样,标签比分支多得多,尽管命名约定更简单。标签的格式是<name>-<version>。以下输出列出了spl存储库中的一些标签:

[build@ctb-el73 spl]$ git tag | tail -8
spl-0.6.5.6
spl-0.6.5.7
spl-0.6.5.8
spl-0.6.5.9
spl-0.7.0-rc1
spl-0.7.0-rc2
spl-0.7.0-rc3
spl-0.7.0-rc4

Tags with an rc# suffix are release candidates. 带有rc#后缀的标签是候选发布版本。
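The naming convention makes it easy to filter release candidates from final releases with a simple grep. The tag list below is inlined so the commands can be tried without cloning the repository (in a real clone, the list would come from `git tag`):

```shell
# Sample tag list, inlined for illustration; a real clone would use: tags=$(git tag)
tags='spl-0.6.5.9
spl-0.7.0-rc3
spl-0.7.0-rc4
spl-0.7.0'

# Release candidates carry an -rc<N> suffix
echo "$tags" | grep -- '-rc[0-9]*$'

# Final releases are everything else
echo "$tags" | grep -v -- '-rc[0-9]*$'
```

The first grep prints the two rc tags; the second prints spl-0.6.5.9 and spl-0.7.0.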

Use Git to check out the release version of SPL and ZFS that will be built and then run the autogen.sh script to prepare the build environment. For example, to check out SPL version 0.6.5.9: 使用Git签出将要构建的SPL和ZFS发布版本,然后运行autogen.sh脚本来准备构建环境。例如,要签出SPL版本0.6.5.9:

cd $HOME/spl
git checkout spl-0.6.5.9
sh autogen.sh

To check out SPL version 0.7.0-rc4: 要签出SPL版本0.7.0-rc4:

cd $HOME/spl
git checkout spl-0.7.0-rc4
sh autogen.sh

Do the same for ZFS. For example: 为ZFS执行同样的操作。例如:

cd $HOME/zfs
git checkout zfs-0.6.5.9
sh autogen.sh

For ZFS 0.7.0-rc4: 对于ZFS 0.7.0-rc4:

cd $HOME/zfs
git checkout zfs-0.7.0-rc4
sh autogen.sh

Make sure that the SPL and ZFS versions match for each respective checkout. 确保每次签出时SPL和ZFS的版本相互匹配。
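A quick sanity check can be scripted by stripping the project prefix from each checked-out tag and comparing the remainders. The tag values below are hard-coded for illustration; in practice they could be taken from `git describe --tags` in each clone:

```shell
# Tags checked out in each tree (hard-coded here for illustration)
spl_tag="spl-0.7.0-rc4"
zfs_tag="zfs-0.7.0-rc4"

# Strip the project prefix and compare the version portions
if [ "${spl_tag#spl-}" = "${zfs_tag#zfs-}" ]; then
    echo "versions match: ${spl_tag#spl-}"
else
    echo "version mismatch: ${spl_tag#spl-} vs ${zfs_tag#zfs-}" >&2
fi
```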

The ZFS on Linux source code is also available in the package format distributed alongside the binaries for a release. The latest software releases are available from the following URL: Linux上的ZFS源代码也可以以包的形式提供,并与二进制文件一起发布。最新软件版本可从以下网址获得:

https://github.com/zfsonlinux/

Links are also available on the main ZFS on Linux site: 链接也可以在ZFS on Linux主站上找到:

http://zfsonlinux.org/

Note: the examples used in the remainder of the documentation are based on a release candidate version of ZFS version 0.7.0, but the process applies equally to all recent releases. 注:文档剩余部分中使用的示例基于ZFS版本0.7.0的候选版本,但该过程同样适用于所有最新版本。

Install the Kernel Development Package

安装内核开发包

The SPL and ZFS projects comprise kernel modules as well as user-space applications. To compile the kernel modules, install the kernel development packages relevant to the target OS distribution. This must match the kernel version being used to create the Lustre packages. Review the ChangeLog file in the Lustre source code to identify the appropriate kernel version. SPL和ZFS项目包括内核模块以及用户空间应用程序。要编译内核模块,请安装与目标操作系统发行版相关的内核开发包。该版本必须与用于创建Lustre包的内核版本相匹配。查看Lustre源代码中的ChangeLog文件,以确定合适的内核版本。

The following excerpt shows that Lustre version 2.10.0 supports version 3.10.0-514.16.1.el7 of the RHEL / CentOS 7.3 kernel, and version 4.4.49-92.14 of the SLES 12 SP2 kernel (output has been truncated): 以下摘录显示Lustre 2.10.0版支持RHEL / CentOS 7.3内核的3.10.0-514.16.1.el7版本和SLES 12 SP2内核的4.4.49-92.14版本(输出已被截断):

TBD Intel Corporation
       * version 2.10.0
       * See https://wiki.whamcloud.com/display/PUB/Lustre+Support+Matrix
         for currently supported client and server kernel versions.
       * Server known to build on patched kernels:
...
         3.10.0-514.16.1.el7 (RHEL7.3)
...
         4.4.49-92.14        (SLES12 SP2)
...

Note: it is also possible to compile the SPL and ZFS packages against the LDISKFS patched kernel development tree, in which case, substitute the kernel development packages from the OS distribution with those created with the LDISKFS patches. 注意:也可以根据打了LDISKFS补丁的内核开发树来编译SPL和ZFS包;在这种情况下,用带有LDISKFS补丁构建的内核开发包替换操作系统发行版提供的内核开发包。

RHEL and CentOS

For RHEL / CentOS systems, use YUM to install the kernel-devel RPM. For example:

sudo yum install kernel-devel-3.10.0-514.16.1.el7

If Mock is being used to create packages, install the kernel-devel RPM using the mock --install command:

mock --install kernel-devel-3.10.0-514.16.1.el7

Note: you can, in fact, run YUM commands within the mock shell, as well.

Note: similar to the way in which the kernel source can be automatically identified and installed for the LDISKFS patched kernel, the following shell script fragment can be used to identify the kernel version for a given operating system and Lustre version, and then use that to install the kernel-devel package:

SUDOCMD=`which sudo 2>/dev/null`
kernelversion=`os=RHEL7.3 lu=2.10.0 \
awk '$0 ~ "* version "ENVIRON["lu"]{i=1; next} \
$0 ~ "* Server known" && i {j=1; next} \
(/\*/ && j) || (/\* version/ && i) {exit} \
i && j && $0 ~ ENVIRON["os"]{print $1}' $HOME/lustre-release/lustre/ChangeLog`
[ -n "$kernelversion" ] && $SUDOCMD yum -y install kernel-devel-$kernelversion || echo "ERROR: kernel version not found."

Set the os and lu variables at the beginning of the script to the required operating system release and Lustre version respectively.
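The extraction logic above can be sanity-checked against an inline sample before pointing it at the real ChangeLog. The variant below uses awk's index() function rather than regex matching, which side-steps the leading `*` in the match patterns; the sample text is a shortened, hypothetical ChangeLog excerpt:

```shell
# Shortened, hypothetical ChangeLog excerpt for testing the extraction logic
cat > /tmp/ChangeLog.sample <<'EOF'
TBD Intel Corporation
       * version 2.10.0
       * Server known to build on patched kernels:
         3.10.0-514.16.1.el7 (RHEL7.3)
         4.4.49-92.14        (SLES12 SP2)
       * version 2.9.0
EOF

# index()-based variant of the kernel version extraction
os=RHEL7.3 lu=2.10.0 awk '
  index($0, "* version " ENVIRON["lu"]) { i=1; next }
  i && index($0, "* Server known")      { j=1; next }
  j && /\*/                             { exit }
  i && j && index($0, ENVIRON["os"])    { print $1 }' /tmp/ChangeLog.sample
```

Run against the sample, this prints 3.10.0-514.16.1.el7, the kernel version that would then be passed to yum.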

SLES 12 SP2

For SLES12 SP2 systems, use zypper to install the kernel development packages. For example:

sudo zypper install \
kernel-default-devel=4.4.59-92.17 \
kernel-devel=4.4.59-92.17 \
kernel-syms=4.4.59-92.17 \
kernel-source=4.4.59-92.17 

Note: the following shell script fragment can be used to identify the kernel version for a given operating system and Lustre version, and then use that to install the packages:

SUDOCMD=`which sudo 2>/dev/null`
kernelversion=`os="SLES12 SP2" lu=2.10.0 \
awk '$0 ~ "* version "ENVIRON["lu"]{i=1; next} \
$0 ~ "* Server known" && i {j=1; next} \
(/\*/ && j) || (/\* version/ && i) {exit} \
i && j && $0 ~ ENVIRON["os"]{print $1}' $HOME/lustre-release/lustre/ChangeLog`

[ -n "$kernelversion" ] && $SUDOCMD zypper install \
kernel-default-devel=$kernelversion \
kernel-devel=$kernelversion \
kernel-syms=$kernelversion \
kernel-source=$kernelversion || echo "ERROR: kernel version not found."

Set the os and lu variables at the beginning of the script to the required operating system release and Lustre version respectively.

Create the SPL Packages

Run the configure script:

cd $HOME/spl
# For RHEL and CentOS, set the --with-spec=redhat flag. Otherwise do not use.
./configure [--with-spec=redhat] \
[--with-linux=<path to kernel-devel>] \
[--with-linux-obj=<path to kernel-devel>]

The simplest invocation is to run the configure script with no arguments:

cd $HOME/spl
./configure

This is usually sufficient for most distributions such as SLES 12. To compile KABI-compliant kernel module packages for RHEL and CentOS distributions, use the --with-spec=redhat option:

cd $HOME/spl
# For RHEL and CentOS, set the --with-spec=redhat flag. Otherwise do not use.
./configure [--with-spec=redhat]

This option is not usable for other OS distributions.

If there is only one set of kernel development packages installed, the configure script should automatically detect the location of the relevant directory tree. However, if there are multiple kernel development packages installed for different kernel versions and revisions, then use the --with-linux and optionally --with-linux-obj flags to identify the correct directory for the target kernel. For example:

cd $HOME/spl
./configure --with-spec=redhat \
--with-linux=/usr/src/kernels/3.10.0-514.16.1.el7.x86_64

Packages are created using the make command. There are three types of package that can be created from the SPL project. These are selected by providing parameters to the make command. One must create, at a minimum, the user-space packages plus at least one other set of packages: the KMOD and/or DKMS packages.

To compile the user-space tools, run this command:

make pkg-utils

To create the kernel modules packages:

make pkg-kmod

To create the DKMS package:

make rpm-dkms

Since later process steps require that dependent packages be installed on the build server, always compile the user-space and KMOD packages even when the intended distribution will be DKMS. To compile all required sets of packages from a single command line invocation:

make pkg-utils pkg-kmod [rpm-dkms]

Note: DKMS packaging has not been evaluated for SLES.

Save the SPL RPMs

Copy the resulting RPM packages into a directory tree for later distribution:

mkdir -p $HOME/releases/zfs-spl
mv $HOME/spl/*.rpm $HOME/releases/zfs-spl

Create the ZFS Packages

The build process for ZFS is very similar to that for SPL. The ZFS package build process has a dependency on SPL, so make sure that the SPL packages created in the previous step have been installed on the build host.

RHEL / CentOS

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum localinstall \
$HOME/releases/zfs-spl/{spl-[0-9].*,kmod-spl-[0-9].*,kmod-spl-devel-[0-9].*}.x86_64.rpm

Note: it is not unusual for the installation to resolve additional dependencies, including the full kernel package for the version of the kernel that SPL was compiled for.

SLES 12 SP2

sudo rpm -ivh kmod-spl-* spl-0.7.0*.x86_64.rpm

Note: The rpm command is used in the above example due to a peculiarity of the SLES packages for SPL (and which also affects ZFS). In the set of RPMs that are created, two of the packages have very similar names (kmod-spl-devel-*), differing only by the version numbering, as can be seen in the following example:

kmod-spl-devel-0.7.0-rc4.x86_64.rpm
kmod-spl-devel-4.4.21-69-default-0.7.0-rc4.x86_64.rpm

It is essential to install both packages but if both are specified on the command line invocation, the zypper command will only install one of them. The rpm command is not affected. To use zypper instead, so that dependencies are automatically resolved, run the command twice, with the second command containing just the "conflicting" RPM. For example:

sudo zypper install kmod-spl-4.4.21-69-default-0.7.0-rc4.x86_64.rpm \
kmod-spl-devel-0.7.0-rc4.x86_64.rpm \
spl-0.7.0*.x86_64.rpm
sudo zypper install kmod-spl-devel-4.4.21-69-default-0.7.0-rc4.x86_64.rpm

Prepare the build

Run the configure script:

cd $HOME/zfs
# For RHEL and CentOS only, set the --with-spec=redhat flag.
./configure [--with-spec=redhat] \
[--with-spl=<path to spl-devel>] \
[--with-linux=<path to kernel-devel>] \
[--with-linux-obj=<path to kernel obj>]

The simplest invocation is to run the configure script with no arguments:

cd $HOME/zfs
./configure

This is usually sufficient for most distributions such as SLES 12.

To compile KABI-compliant kernel module packages for RHEL and CentOS distributions, use the --with-spec=redhat option:

cd $HOME/zfs
# For RHEL and CentOS, set the --with-spec=redhat flag. Otherwise do not use.
./configure [--with-spec=redhat]

This option is not usable for other OS distributions.

If there is only one set of kernel development packages installed, the configure script should automatically detect the location of the relevant directory tree. However, if there are multiple kernel development packages installed for different kernel versions and revisions, then use the --with-linux and optionally --with-linux-obj flags to identify the correct directory for the target kernel.

For example:

cd $HOME/zfs
./configure --with-spec=redhat \
--with-linux=/usr/src/kernels/3.10.0-514.16.1.el7.x86_64

In addition to the location of the kernel-devel RPM, the configure script may also need to be informed of the location of the SPL development installation (i.e. the location of the files installed from the spl-devel package, not the Git source code repository). For example:

cd $HOME/zfs
./configure --with-spec=redhat \
--with-spl=/usr/src/spl-0.7.0 \
--with-linux=/usr/src/kernels/3.10.0-514.16.1.el7.x86_64

Packages are created using the make command. Just like SPL, there are three types of package that can be created from the ZFS project. These are selected by providing parameters to the make command. One must create, at a minimum, the user-space packages plus at least one other set of packages: the KMOD and/or DKMS packages.

Compile the Packages

To compile the user-space tools, run this command:

make pkg-utils

To create the kernel modules packages:

make pkg-kmod

To create the DKMS package:

make rpm-dkms

It is recommended that the user-space and KMOD packages are always compiled even when the intended distribution will be DKMS. To compile all sets of packages from a single command line invocation:

make pkg-utils pkg-kmod [rpm-dkms]

Save the ZFS RPMs

Copy the resulting RPM packages into a directory tree for later distribution:

mkdir -p $HOME/releases/zfs-spl
mv $HOME/zfs/*.rpm $HOME/releases/zfs-spl

3rd Party Network Fabric Support

This section is optional since, by default, Lustre will use the device drivers supplied by the Linux kernel. Complete this section if 3rd party InfiniBand drivers are required for the target environment. The procedure for creating InfiniBand drivers from external sources varies depending upon the version of the InfiniBand software being used.

Instructions are provided for each of the following driver distributions:

  • OpenFabrics Alliance (OFA) OFED*
  • Mellanox OFED
  • True Scale OFED
  • Intel OmniPath Architecture (OPA)

*OFED: Open Fabrics Enterprise Distribution

Note: whichever distribution of OFED is selected, the resulting RPMs created during the build process for Lustre must be saved for distribution with the Lustre server packages.

Note: The procedure in this section only prepares the distribution packages needed to compile Lustre from source. To create a full installation, follow the instructions provided by the driver vendor. Naturally, one can also use the full installation of the OFED packages on the build server instead of using the stripped-down procedure described here.

Preparation

Any 3rd party drivers must be compiled against the target kernel that will be used by Lustre. This is true for each of the InfiniBand driver distributions, regardless of vendor. If the target systems will be using LDISKFS for the storage, then use kernel packages that have been created with the Lustre LDISKFS patches applied. If the kernel for the target servers has not been patched for LDISKFS, then use the binary kernel packages supplied by the operating system.

Note: Only the kernel-devel package is needed for this part of the build process.

Lustre-patched kernel-devel Package (for LDISKFS Server Builds)

For Lustre LDISKFS patched kernels, where the patched kernel has been recompiled from source, install the kernel-devel package as follows:

SUDOCMD=`which sudo 2>/dev/null`
find `rpm --eval %{_rpmdir}` -type f -name kernel-devel-\*.rpm -exec $SUDOCMD yum localinstall {} \;

Unpatched kernel-devel Package (for ZFS-only Server and Lustre Client Builds)

For "patchless" kernels, install the kernel-devel RPM that matches the supported kernel for the version of Lustre being compiled. Refer to the Lustre changelog in the source code distribution (lustre-release/lustre/ChangeLog) for the list of kernels for each OS distribution that are known to work with Lustre. The ChangeLog file contains a historical record of all Lustre releases.

For example, Lustre version 2.10.0 supports version 3.10.0-514.16.1.el7 of the RHEL / CentOS 7.3 kernel. Use YUM to install the kernel-devel RPM:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum install kernel-devel-3.10.0-514.16.1.el7

If Mock is being used to create packages, exit the Mock shell and install the kernel-devel RPM using the mock --install command:

mock --install kernel-devel-3.10.0-514.16.1.el7

Note: similar to the way in which the kernel source can be automatically identified and installed for the LDISKFS patched kernel, the following shell script fragment can be used to identify the kernel version for a given operating system and Lustre version, and then use that to install the kernel-devel package:

SUDOCMD=`which sudo 2>/dev/null`
kernelversion=`os=RHEL7.3 lu=2.10.0 \
awk '$0 ~ "* version "ENVIRON["lu"]{i=1; next} \
$0 ~ "* Server known" && i {j=1; next} \
(/\*/ && j) || (/\* version/ && i) {exit} \
i && j && $0 ~ ENVIRON["os"]{print $1}' $HOME/lustre-release/lustre/ChangeLog`
[ -n "$kernelversion" ] && $SUDOCMD yum -y install kernel-devel-$kernelversion || echo "ERROR: kernel version not found."

Set the os and lu variables at the beginning of the script to the required operating system release and Lustre version respectively.

For older RHEL / CentOS distributions, the required kernel might not be available in the active YUM repository for the distribution. CentOS maintains an archive of all previous releases in a set of YUM repositories called Vault, located at:

http://vault.centos.org

For example, the RPM packages for the CentOS 7.2 updates can be found here:

http://vault.centos.org/7.2.1511/updates/x86_64/Packages

When the kernel-devel package has been downloaded, install it:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum -y install kernel-devel-<version>*.rpm

OpenFabrics Alliance (OFA) Open Fabrics Enterprise Distribution (OFED)

OFED is maintained by the OpenFabrics Alliance: http://openfabrics.org.

Note: At the time of writing, OFED 4.8-rc2 does not work with the latest Lustre release (Lustre 2.10.0), and OFED 3.18-3 does not compile on RHEL / CentOS 7.3. It is therefore recommended that integrators and systems administrators use the in-kernel InfiniBand drivers, or the drivers supplied by the HCA vendor (Mellanox or Intel True Scale). Since it is rare for systems to make direct use of OFA OFED for production installations, using an alternative driver distribution is preferred in any case.

There are several releases of the OFED distribution, distinguished by version number, and the build process for each is different. OFED version 4 is the latest stable release at the time of writing (May 2017). There is also a version 3.18-3 stable release that is currently more mature but does not compile cleanly on RHEL / CentOS 7.3 or newer. Check the OFA web site for updates and to verify the releases that are compatible with the target operating system distribution.

Instructions are provided for OFED-4.8-rc2 but the method is equivalent for all 4.x and 3.x releases.

Note: in OFED version 3 and 4, the kernel drivers are contained in the compat_rdma RPM. In versions of OFED prior to release 3, the IB kernel drivers were contained in a source RPM called ofa_kernel, which in turn built kernel-ib and related binary packages.

Download the OpenFabrics (OFA) OFED software distribution from http://downloads.openfabrics.org/OFED, and extract the tarball bundle. For example, to download OFED-4.8-rc2:

cd $HOME
wget http://downloads.openfabrics.org/OFED/ofed-4.8/OFED-4.8-rc2.tgz
tar zxf $HOME/OFED-4.8-rc2.tgz

Intel True Scale InfiniBand

Intel provides a software distribution, derived from OFED, to support its True Scale InfiniBand host channel adapters (HCAs). The distribution can be downloaded from Intel's download centre:

https://downloadcenter.intel.com

Once downloaded, extract the Intel-IB bundle. For example:

cd $HOME
tar zxf $HOME/IntelIB-Basic.RHEL7-x86_64.7.4.2.0.6.tgz

Mellanox InfiniBand

Mellanox provides its own distribution of OFED, optimised for the Mellanox chipsets and occasionally referred to as MOFED. The software can be downloaded from the Mellanox web site:

http://www.mellanox.com/page/software_overview_ib

Once downloaded, extract the Mellanox OFED bundle. For example:

cd $HOME
tar zxf $HOME/MLNX_OFED_SRC-3.4-2.1.8.0.tgz

While the overall process for compiling the Mellanox kernel driver is similar to that for OFA and Intel OFED distributions, Mellanox packages the kernel drivers into a source RPM called mlnx-ofa_kernel, rather than compat-rdma.

Intel Omni-Path Architecture

Recent releases of the Intel Omni-Path host fabric interface (HFI) adapters use the drivers supplied by the distribution kernel and do not normally require a customised driver build. However, there are occasionally driver updates included in the IFS distribution from Intel, which may need to be recompiled for LDISKFS kernels. The same is true for older releases of the Intel Omni-Path software. Kernel driver updates are distributed in a compat-rdma kernel driver package which can be treated in the same way as for Intel True Scale OFED distributions.

Compiling the Network Fabric Kernel Drivers

There are many options available for the IB kernel driver builds and it is important to review the documentation supplied with the individual driver distributions to ensure that appropriate options required by the target environment are selected.

The options used in the following example are based on the default selections made by the distributions' install software. These should meet most requirements for x86_64-based systems and be suitable for each of the different vendors. The command-line can be used to build the compat-rdma packages for OFA OFED, Intel True Scale, as well as the mlnx-ofa_kernel package for Mellanox OFED. Some options for OFED are only available on specific kernels or processor architectures and these have been omitted from the example:

rpmbuild --rebuild --nodeps --define 'build_kernel_ib 1' --define 'build_kernel_ib_devel 1' \
--define 'configure_options --with-addr_trans-mod --with-core-mod --with-cxgb3-mod --with-cxgb4-mod --with-ipoib-mod --with-iser-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-nes-mod --with-srp-mod --with-user_access-mod --with-user_mad-mod --with-ibscif-mod --with-ipath_inf-mod --with-iscsi-mod --with-qib-mod --with-qlgc_vnic-mod' \
--define 'KVERSION <version>-<release>.<os-dist>.x86_64' \
--define 'K_SRC /usr/src/kernels/<version>-<release>.<os-dist>.x86_64' \
--define 'K_SRC_OBJ /usr/src/kernels/<version>-<release>.<os-dist>.x86_64' \
--define '_release <version>_<release>.<os-dist>' \
<distribution directory>/SRPMS/<package-name>-<version>-<release>.src.rpm


Note: In the command line arguments, the definition of the variable configure_options must appear on a single line.


Pay special attention to the KVERSION, K_SRC, K_SRC_OBJ and _release variables. These must match the target kernel version. In addition, the _release variable must not contain any hyphen (-) characters. Instead, replace hyphens with underscores (_). The _release variable is optional, but recommended, as it helps to associate the package build with the kernel version.
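The hyphen restriction comes from RPM itself, which reserves the hyphen as the separator between the name, version and release fields of a package. A small sketch of the substitution:

```shell
# Derive an RPM-safe _release string from a kernel version (hyphens -> underscores)
kernel_dev="3.10.0-514.16.1.el7_lustre.x86_64"
kernel_release=$(echo "$kernel_dev" | sed 's/-/_/g')
echo "$kernel_release"   # prints 3.10.0_514.16.1.el7_lustre.x86_64
```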

The following is a complete example, using kernel version 3.10.0-514.16.1.el7_lustre.x86_64 (a Lustre-patched kernel for RHEL / CentOS 7.3 built using the process described earlier in this document). At the beginning of the example are variables pointing to the kernel driver packages for each of the major distributions:

# OFA OFED 3.x
# ofed_driver_srpm=$HOME/OFED-3.*/SRPMS/compat-rdma-3.*.rpm
# OFA OFED 4.x
# ofed_driver_srpm=$HOME/OFED-4.*/SRPMS/compat-rdma-4.*.src.rpm
# Intel True Scale
# ofed_driver_srpm=$HOME/IntelIB-Basic.RHEL7-x86_64.7.*/IntelIB-OFED.RHEL7-x86_64.3.*/SRPMS/compat-rdma-3.*.src.rpm
# Mellanox OFED 3.x
# ofed_driver_srpm=$HOME/MLNX_OFED_SRC-3.*/SRPMS/mlnx-ofa_kernel-3.*-OFED.3.*.src.rpm
 
ofed_driver_srpm=$HOME/IntelIB-Basic.RHEL7-x86_64.7.*/IntelIB-OFED.RHEL7-x86_64.3.*/SRPMS/compat-rdma-3.*.src.rpm
kernel_dev=3.10.0-514.16.1.el7_lustre.x86_64
kernel_release=`echo $kernel_dev|sed s'/-/_/g'`
 
rpmbuild --rebuild --nodeps --define 'build_kernel_ib 1' --define 'build_kernel_ib_devel 1' \
--define 'configure_options --with-addr_trans-mod --with-core-mod --with-cxgb3-mod --with-cxgb4-mod --with-ipoib-mod --with-iser-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-nes-mod --with-srp-mod --with-user_access-mod --with-user_mad-mod --with-ibscif-mod --with-ipath_inf-mod --with-iscsi-mod --with-qib-mod --with-qlgc_vnic-mod' \
--define "KVERSION $kernel_dev" \
--define "K_SRC /usr/src/kernels/$kernel_dev" \
--define "K_SRC_OBJ /usr/src/kernels/$kernel_dev" \
--define "_release $kernel_release" \
$ofed_driver_srpm

The result is a set of kernel drivers for InfiniBand devices that are compatible with the kernel that will be used by Lustre.

An alternative method is to use the standard OFED install script. The following example shows how to supply additional options to the standard OFED installer:

cd $HOME/*OFED*/
./install.pl \
--kernel <kernel version> \
--linux /usr/src/kernels/<kernel-devel version> \
--linux-obj /usr/src/kernels/<kernel-devel version>

This will run through the interactive build and install process, with options to select the various packages. Since the Lustre build process requires only the kernel drivers, the documentation uses the direct rpmbuild command, which in turn makes automation easier to incorporate.

The Mellanox OFED install.pl script is similar, but has more options to control how the build is carried out. For example:

cd $HOME/*OFED*/
./install.pl --build-only --kernel-only \
--kernel <kernel version> \
--kernel-sources /usr/src/kernels/<kernel-devel version>

Intel's IFS IB install script is quite different from the OFA and Mellanox OFED scripts, and does not provide an obvious means to specify the kernel version. Nevertheless, using the more direct rpmbuild command above should result in suitable kernel drivers being created for whichever driver distribution is required.

Save the Driver RPMs

Copy the resulting RPM packages into a directory tree for later distribution:

_TOPDIR=`rpm --eval %{_topdir}`
mkdir -p $HOME/releases/ofed
mv $_TOPDIR/RPMS/*/*.rpm $HOME/releases/ofed

Create the Lustre Packages

Preparation

When compiling the Lustre packages from source, the build environment requires access to the kernel development package for the target Linux kernel, and in the case of patchless LDISKFS servers, the kernel source code is also needed. The requirements are as follows:

  • Patched LDISKFS Lustre servers require the kernel development package that has been created with the Lustre patches applied (the "lustre-patched kernel")
  • [Experimental] Patchless LDISKFS Lustre servers require the standard kernel development package and the matching kernel source code package.
  • ZFS-based Lustre servers and all Lustre clients require the standard kernel development package

Also required are any 3rd party network device drivers not distributed with the kernel itself; typically this means InfiniBand drivers from one of the OFED distributions (either compat-rdma-devel or mlnx-ofa_kernel-devel).

Lustre Server (DKMS Packages only)

Lustre服务器(仅限DKMS套装)

The process for creating a Lustre server DKMS package is straightforward: 创建Lustre服务器DKMS包的过程非常简单:

_TOPDIR=`rpm --eval %{_topdir}`
cd $HOME/lustre-release
./configure --enable-dist
make dist
cp lustre-*.tar.gz $_TOPDIR/SOURCES/
rpmbuild -bs lustre-dkms.spec
rpmbuild --rebuild $_TOPDIR/SRPMS/lustre-dkms-*.src.rpm
mkdir -p $HOME/releases/lustre-server-dkms
mv $_TOPDIR/RPMS/*/*.rpm $HOME/releases/lustre-server-dkms

If the objective is to create a set of DKMS server packages for use with ZFS, then there is no further work required. See also the section on creating DKMS packages for Lustre clients, if required.

Lustre Server (All Other Builds)

The Lustre server packages are compiled against the kernel development package for the target Linux kernel and, optionally, the SPL, ZFS and OFED development packages. The examples below use the packages created earlier in this guide.

Patched LDISKFS Server Builds

For LDISKFS servers built with the Lustre-patched kernel (including the optional project quota patches), install the kernel development package that was created as part of the patched kernel build. For example:

SUDOCMD=`which sudo 2>/dev/null`
INSTCMD=`which yum 2>/dev/null || which zypper 2>/dev/null`
$SUDOCMD $INSTCMD localinstall $HOME/releases/lustre-kernel/kernel-devel-\*.rpm

ZFS or Patchless LDISKFS Server Builds

For "patchless" kernels, install the kernel-devel package that matches a kernel supported by the version of Lustre being compiled. For the list of supported kernels, refer to the Lustre ChangeLog file in the source distribution (lustre-release/lustre/ChangeLog), which contains the history of all Lustre releases.

For patchless LDISKFS kernels, the kernel source code package matching the target kernel must also be downloaded and installed.

RHEL/CentOS 7 Kernel Development Packages

For RHEL/CentOS 7, use yum to install the set of kernel development packages required by Lustre. For example, to install the kernel-devel package for kernel version 3.10.0-514.16.1.el7 on RHEL/CentOS 7.3:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum install kernel-devel-3.10.0-514.16.1.el7

For LDISKFS support with an unpatched kernel, install the kernel-debuginfo-common package, which contains the EXT4 source code needed to create the LDISKFS kernel module. For example:

sudo yum install --disablerepo=* --enablerepo=base-debuginfo kernel-debuginfo-common-x86_64-3.10.0-514.16.1.el7

Alternatively, one can download the source RPM package and copy the EXT4 source code into place:

yumdownloader --source kernel-3.10.0-514.16.1.el7
rpm -ivh kernel-3.10.0-514.16.1.el7.src.rpm
tar Jxf $HOME/rpmbuild/SOURCES/linux-3.10.0-514.16.1.el7.tar.xz linux-*/fs/ext{3,4}
sudo cp -an $HOME/linux-3.10.0-514.16.1.el7/fs/ext{3,4} \
  /usr/src/kernels/3.10.0-514.16.1.el7.x86_64/fs/.

The following shell script fragment can be used to identify the kernel version for a given operating system and Lustre version, and then use that to install the kernel-devel and kernel source RPMs:

SUDOCMD=`which sudo 2>/dev/null`
kernelversion=`os=RHEL7.3 lu=2.10.0 \
awk '$0 ~ "* version "ENVIRON["lu"]{i=1; next} \
$0 ~ "* Server known" && i {j=1; next} \
(/\*/ && j) || (/\* version/ && i) {exit} \
i && j && $0 ~ ENVIRON["os"]{print $1}' $HOME/lustre-release/lustre/ChangeLog`
[ -n "$kernelversion" ] && $SUDOCMD yum -y install kernel-devel-$kernelversion || echo "ERROR: kernel version not found."

#For patchless LDISKFS support:
sudo yum install --disablerepo=* --enablerepo=base-debuginfo kernel-debuginfo-common-x86_64-$kernelversion

Set the os and lu variables at the beginning of the script to the required operating system release and Lustre version respectively.

SLES 12 SP2 Kernel Development Packages

For SLES 12 SP2, use zypper to install the set of kernel development packages required by Lustre. For example:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD zypper install \
kernel-default-devel=4.4.59-92.17 \
kernel-devel=4.4.59-92.17 \
kernel-syms=4.4.59-92.17 \
kernel-source=4.4.59-92.17 

Similarly, the following shell script fragment can be used to identify the kernel version for a given operating system and Lustre version, and then install the kernel development packages for SLES:

SUDOCMD=`which sudo 2>/dev/null`
kernelversion=`os="SLES12 SP2" lu=2.10.0 \
awk '$0 ~ "* version "ENVIRON["lu"]{i=1; next} \
$0 ~ "* Server known" && i {j=1; next} \
(/\*/ && j) || (/\* version/ && i) {exit} \
i && j && $0 ~ ENVIRON["os"]{print $1}' $HOME/lustre-release/lustre/ChangeLog`

[ -n "$kernelversion" ] && $SUDOCMD zypper install \
kernel-default-devel=$kernelversion \
kernel-devel=$kernelversion \
kernel-syms=$kernelversion \
kernel-source=$kernelversion || echo "ERROR: kernel version not found."

Note: To compile Lustre, SLES 12 SP2 development environments require the kernel-syms package as well as kernel-default-devel, kernel-devel, and kernel-source. zypper may also incorporate other packages as dependencies.

Install the ZFS Development Packages (ZFS Server Builds)

If required, install the SPL and ZFS development packages.

RHEL / CentOS 7

There are two options:

  • Use the packages maintained by the ZFS on Linux project
  • Install the packages compiled from source, as described in the section on compiling ZFS

To use the binary packages maintained by the ZFS on Linux project, configure the YUM repository as described in [1], and then run the following commands:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum-config-manager --disable zfs
$SUDOCMD yum-config-manager --enable zfs-kmod
$SUDOCMD yum install \
spl zfs \
kmod-spl kmod-spl-devel \
kmod-zfs kmod-zfs-devel \
libzfs2-devel

For the custom packages built using the process described earlier in this guide, use the following commands:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum localinstall \
$HOME/releases/zfs-spl/{spl-[0-9].*,kmod-spl-[0-9].*,kmod-spl-devel-[0-9].*}.x86_64.rpm \
$HOME/releases/zfs-spl/{zfs-[0-9].*,zfs-dracut-[0-9].*,kmod-zfs-[0-9].*,kmod-zfs-devel-[0-9].*,lib*}.x86_64.rpm
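The [0-9] elements in these globs are deliberate: they select the versioned base packages while excluding similarly named packages such as the DKMS variants. A small sketch using empty placeholder files (the file names are hypothetical, not real package builds) shows the effect:

```shell
# Demonstrate the [0-9] glob element with hypothetical placeholder files.
demo=$(mktemp -d)
cd "$demo"
touch spl-0.7.9-1.el7.x86_64.rpm \
      spl-dkms-0.7.9-1.el7.noarch.rpm \
      kmod-spl-0.7.9-1.el7.x86_64.rpm \
      kmod-spl-devel-0.7.9-1.el7.x86_64.rpm

# spl-[0-9].* matches spl-0.7.9... but not spl-dkms-..., so the DKMS
# package is left out of a kmod-based installation. The braces are bash
# brace expansion, so run the glob under bash explicitly.
matched=$(bash -c 'ls {spl-[0-9].*,kmod-spl-[0-9].*,kmod-spl-devel-[0-9].*}.x86_64.rpm')
echo "$matched"
```

Only the three kmod-related packages match; spl-dkms is excluded because "d" is not a digit.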

SLES 12 SP2

Note: The ZFS on Linux project does not appear to provide binary ZFS packages for SLES.

For the custom packages built using the process described earlier in this guide, use the following commands:

cd $HOME/releases/zfs-spl
SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD rpm -ivh kmod-spl-* spl-*.x86_64.rpm \
kmod-zfs-[0-9].*-default-*.x86_64.rpm \
kmod-zfs-devel-[0-9].*.x86_64.rpm \
lib*.x86_64.rpm \
zfs-[0-9].*.x86_64.rpm \
zfs-dracut-[0-9].*.x86_64.rpm

Optional: 3rd Party Drivers

If third-party InfiniBand drivers are being used, they must also be installed.

For the OFA OFED and Intel True Scale drivers:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum localinstall \
$HOME/releases/ofed/{compat-rdma-devel-[0-9].*,compat-rdma-[0-9].*}.x86_64.rpm

For the Mellanox OFED drivers:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum localinstall \
$HOME/releases/ofed/{mlnx-ofa_kernel-[0-9].*,mlnx-ofa_kernel-devel-[0-9].*,mlnx-ofa_kernel-modules-[0-9].*}.x86_64.rpm

Create the Server RPMs

Using a command-line shell on the build server, change to the directory containing the cloned Lustre Git repository:

cd $HOME/lustre-release

Make sure that any files from previous builds have been cleaned up, providing a pristine build environment:

make distclean

Run the configure script:

./configure --enable-server \
[ --disable-ldiskfs ] \
[ --with-linux=<kernel devel or src path> ] \
[ --with-linux-obj=<kernel obj path> ] \
[ --with-o2ib=<IB driver path> ] \
[ --with-zfs=<ZFS devel path> | no ] \
[ --with-spl=<SPL devel path> ]

To create the server packages, include the --enable-server option. The --with-linux and --with-o2ib options should point to the locations of the extracted kernel-devel (or kernel-source) package and the InfiniBand kernel drivers, respectively. SLES 12 builds also require the --with-linux-obj parameter.

If the ZFS development tree is installed in the default location, the configure script will normally detect it automatically; if not, use the --with-zfs and --with-spl options to specify the directories containing the respective development packages. Lustre automatically determines whether it is being compiled with LDISKFS or ZFS server support. To force Lustre to disable ZFS support, set --with-zfs=no. To explicitly disable LDISKFS support, use --disable-ldiskfs.
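Putting those options together, a sketch of a ZFS-only server configuration that disables LDISKFS explicitly might look as follows (the SPL and ZFS development paths are illustrative assumptions; substitute the locations on your build host):

```shell
# Sketch only: a ZFS-only server configure invocation.
# The --with-zfs and --with-spl paths below are hypothetical examples.
./configure --enable-server \
            --disable-ldiskfs \
            --with-linux=/usr/src/kernels/3.10.0-514.16.1.el7.x86_64 \
            --with-zfs=/usr/src/zfs-0.7.9 \
            --with-spl=/usr/src/spl-0.7.9
```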

RHEL / CentOS 7 Examples

To create the Lustre server packages for an LDISKFS-patched kernel, with OFA OFED or Intel True Scale:

./configure --enable-server \
--with-linux=/usr/src/kernels/*_lustre.x86_64 \
--with-o2ib=/usr/src/compat-rdma

To create the patched-kernel packages for Mellanox OFED:

./configure --enable-server \
--with-linux=/usr/src/kernels/*_lustre.x86_64 \
--with-o2ib=/usr/src/ofa_kernel/default

To create the Lustre packages for ZFS servers and patchless LDISKFS servers, using the standard, unpatched operating system kernel version 3.10.0-514.16.1.el7.x86_64:

./configure --enable-server \
--with-linux=/usr/src/kernels/3.10.0-514.16.1.el7.x86_64

SLES 12 SP2 Examples

To create the Lustre server packages for an unpatched kernel, reference the specific target kernel:

./configure --enable-server \
--with-linux=/usr/src/linux-4.4.59-92.17 \
--with-linux-obj=/usr/src/linux-4.4.59-92.17-obj/x86_64/default 

The example command line above references an unpatched kernel, so it is suitable both for ZFS builds and for LDISKFS builds that do not require kernel patches.

Build the Server Packages

To build the Lustre server packages:

make rpms

On successful completion of the build, packages will be created in the current working directory.

Save the Lustre Server RPMs

Copy the Lustre server RPM packages into a directory tree for later distribution:

mkdir -p $HOME/releases/lustre-server
mv $HOME/lustre-release/*.rpm $HOME/releases/lustre-server

Lustre Client (DKMS Packages only)

The process for creating a Lustre client DKMS package is straightforward:

_TOPDIR=`rpm --eval %{_topdir}`
cd $HOME/lustre-release
make distclean
./configure --enable-dist --disable-server --enable-client
make dist
cp lustre-*.tar.gz $_TOPDIR/SOURCES/
rpmbuild -bs --without servers lustre-dkms.spec
rpmbuild --rebuild --without servers $_TOPDIR/SRPMS/lustre-client-dkms-*.src.rpm
mkdir -p $HOME/releases/lustre-client-dkms
mv $_TOPDIR/RPMS/*/*.rpm $HOME/releases/lustre-client-dkms

If the objective is to create a set of DKMS client packages, then there is no further work required.

Lustre Client (All Other Builds)

Install the kernel-devel Package

Lustre client builds require an unpatched kernel-devel package.

RHEL / CentOS 7

Use the following shell script fragment to identify and install the appropriate kernel-devel package for a given operating system and Lustre version:

SUDOCMD=`which sudo 2>/dev/null`
kernelversion=`os=RHEL7.3 lu=2.10.0 \
awk '$0 ~ "* version "ENVIRON["lu"]{i=1; next} \
$0 ~ "* Server known" && i {j=1; next} \
(/\*/ && j) || (/\* version/ && i) {exit} \
i && j && $0 ~ ENVIRON["os"]{print $1}' $HOME/lustre-release/lustre/ChangeLog`
[ -n "$kernelversion" ] && $SUDOCMD yum -y install kernel-devel-$kernelversion || echo "ERROR: kernel version not found."

Set the os and lu variables at the beginning of the script to the required operating system release and Lustre version respectively.

SLES 12 SP2

For SLES 12 SP2, use zypper to install the set of kernel development packages required by Lustre. The following shell script fragment can be used to identify the kernel version for a given operating system and Lustre version, and then install the kernel development packages for SLES:

SUDOCMD=`which sudo 2>/dev/null`
kernelversion=`os="SLES12 SP2" lu=2.10.0 \
awk '$0 ~ "* version "ENVIRON["lu"]{i=1; next} \
$0 ~ "* Server known" && i {j=1; next} \
(/\*/ && j) || (/\* version/ && i) {exit} \
i && j && $0 ~ ENVIRON["os"]{print $1}' $HOME/lustre-release/lustre/ChangeLog`

[ -n "$kernelversion" ] && $SUDOCMD zypper install \
kernel-default-devel=$kernelversion \
kernel-devel=$kernelversion \
kernel-syms=$kernelversion \
kernel-source=$kernelversion || echo "ERROR: kernel version not found."

Note: To compile Lustre, SLES 12 SP2 development environments require the kernel-syms, kernel-default-devel, kernel-devel and kernel-source packages.

Optional: Additional Drivers

If third-party InfiniBand drivers are being used, they must also be installed. The examples below assume that the drivers were compiled from source against the unpatched kernel-devel RPM, using the process described earlier. Take care to distinguish driver packages created for the LDISKFS-patched kernel from those compiled against the standard, unpatched kernel.

For the OFA OFED and Intel True Scale drivers:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum localinstall \
$HOME/releases/ofed/{compat-rdma-devel-[0-9].*,compat-rdma-[0-9].*}.x86_64.rpm

For the Mellanox OFED drivers:

SUDOCMD=`which sudo 2>/dev/null`
$SUDOCMD yum localinstall \
$HOME/releases/ofed/{mlnx-ofa_kernel-[0-9].*,mlnx-ofa_kernel-devel-[0-9].*,mlnx-ofa_kernel-modules-[0-9].*}.x86_64.rpm

Create the Client RPMs

Using a command-line shell on the build host, change to the directory containing the cloned Lustre Git repository:

cd $HOME/lustre-release

Make sure that any files from previous builds have been cleaned up, providing a pristine build environment:

make distclean

Run the configure script. To create the client packages, include the --disable-server and --enable-client options:

./configure --disable-server --enable-client \
[ --with-linux=<kernel devel path> ] \
[ --with-linux-obj=<kernel obj path> ] \
[ --with-o2ib=<IB driver path> ]

The --with-linux and --with-o2ib options should point to the locations of the extracted kernel-devel package and the InfiniBand kernel drivers, respectively.

For example, to create the Lustre client packages for OFA OFED or Intel True Scale:

./configure --disable-server --enable-client \
--with-linux=/usr/src/kernels/*.x86_64 \
--with-o2ib=/usr/src/compat-rdma

To create the Lustre client packages for Mellanox OFED:

./configure --disable-server --enable-client \
--with-linux=/usr/src/kernels/*.x86_64 \
--with-o2ib=/usr/src/ofa_kernel/default

To create the Lustre client packages using the standard, unpatched operating system kernel version 3.10.0-514.16.1.el7.x86_64:

./configure --disable-server --enable-client \
--with-linux=/usr/src/kernels/3.10.0-514.16.1.el7.x86_64

To build the Lustre client packages:

make rpms

On successful completion of the build, the Lustre client RPM packages will be created in the current working directory.

Save the Lustre Client RPMs

Copy the Lustre client RPM packages into a directory tree for later distribution:

mkdir -p $HOME/releases/lustre-client
mv $HOME/lustre-release/*.rpm $HOME/releases/lustre-client