Solaris 11 Installation Notes

Phase 1: Get the machine on the network so it can be managed remotely over ssh

First, after a normal installation completes, reboot and log in to the system.

For some reason, after the second reboot following installation the default route setting disappears, so it has to be set again by hand:

server# route -p add default 192.168.9.1
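To confirm the route was recorded persistently, the route tables can be inspected. A sketch; the gateway address is the one used above:

```shell
# Show persistent routes recorded by "route -p add" (these survive reboot)
route -p show

# Show the active routing table; the default entry should list 192.168.9.1
netstat -rn
```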

Other settings, such as a static IP, are covered here.

Allow root login over sshd: edit /etc/ssh/sshd_config and set PermitRootLogin to yes.

PermitRootLogin yes

Then restart the ssh service:

server# svcadm disable network/ssh
server# svcadm enable network/ssh
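Disable/enable works, but SMF also offers a one-step restart, and `svcs` confirms the service came back. A sketch, using the same abbreviated FMRI as above:

```shell
# Equivalent one-step restart of the ssh service
svcadm restart network/ssh

# Verify the service state is "online"
svcs network/ssh
```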

Add DNS name servers by editing /etc/resolv.conf as follows:

server# cat /etc/resolv.conf 
nameserver 168.95.1.1
nameserver 8.8.8.8

Using the GUI tool, change the network IP to a static IP.

Configure DNS. On a freshly installed server, DNS is usually not enabled in the name service switch. Enable it as follows:

mv /etc/nsswitch.conf /etc/nsswitch.conf.bak
cp /etc/nsswitch.dns /etc/nsswitch.conf
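Once nsswitch.conf has been switched to the DNS variant, name resolution can be checked. A sketch; the host name is just an example:

```shell
# Resolve through the name service switch (exercises the new nsswitch.conf)
getent hosts www.google.com

# Query the configured name servers directly
nslookup www.google.com
```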

Set this server's domain name:

server# domainname haostudio.net
server# domainname > /etc/defaultdomain

With that done, we no longer need to sit at the console; everything else can be done over a remote ssh login.

LDAP, nsswitch.conf, and DNS issues: see here.


Phase 2: Set up ZFS

First back up the whole system with a ZFS snapshot, so that if anything goes wrong we can roll back instead of reinstalling.

server# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
rpool                   11.0G   136G  45.5K  /rpool
rpool/ROOT              2.83G   136G    31K  legacy
rpool/ROOT/openindiana  2.83G   136G  2.82G  /
rpool/dump              3.95G   136G  3.95G  -
rpool/export              63K   136G    32K  /export
rpool/export/home         31K   136G    31K  /export/home
rpool/swap              4.19G   140G   180M  -  

We only need to snapshot rpool/ROOT/openindiana:

server# zfs snapshot rpool/ROOT/openindiana@just-installed

Check that the snapshot was created:

server# zfs list -t snapshot
NAME                                    USED  AVAIL  REFER  MOUNTPOINT
rpool/ROOT/openindiana@install         12.9M      -  2.74G  -
rpool/ROOT/openindiana@just-installed      0      -  2.82G  -
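If the system later breaks, the dataset can be reverted to this snapshot. A sketch; note that rolling back the live root filesystem is normally done from a recovery boot or via boot environments (beadm), not while running from it:

```shell
# Revert the root dataset to the freshly-installed state
# (discards every change made after the snapshot was taken)
zfs rollback rpool/ROOT/openindiana@just-installed
```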

Create a raid-z disk array:

server# format
Searching for disks...done

AVAILABLE DISK SELECTIONS:
       0. c3t0d0 
          /pci@0,0/pci15d9,633@1f,2/disk@0,0
       1. c3t1d0 
          /pci@0,0/pci15d9,633@1f,2/disk@1,0
       2. c3t2d0 
          /pci@0,0/pci15d9,633@1f,2/disk@2,0
       3. c3t3d0 
          /pci@0,0/pci15d9,633@1f,2/disk@3,0
       4. c3t4d0 
          /pci@0,0/pci15d9,633@1f,2/disk@4,0
       5. c3t5d0 
          /pci@0,0/pci15d9,633@1f,2/disk@5,0
Specify disk (enter its number): ^C

This shows six disks in total; the disks we will use for the array are c3t1d0, c3t2d0, c3t3d0, c3t4d0 and c3t5d0.

server# zpool create -f  fspool raidz c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0
server# zpool status fspool
  pool: fspool
 state: ONLINE
  scan: none requested
config:

    NAME        STATE     READ WRITE CKSUM
    fspool      ONLINE       0     0     0
      raidz1-0  ONLINE       0     0     0
        c3t1d0  ONLINE       0     0     0
        c3t2d0  ONLINE       0     0     0
        c3t3d0  ONLINE       0     0     0
        c3t4d0  ONLINE       0     0     0
        c3t5d0  ONLINE       0     0     0

errors: No known data errors

The "-f" flag forces creation of the pool regardless of whether the disks already contain data.
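As a rough sanity check on pool size: a raidz1 vdev stores one disk's worth of parity, so usable capacity is about (N-1) disks. A portable sketch of the arithmetic; the 1.5 TB per-disk size is an assumption (five such disks would roughly match the ~5.4T AVAIL shown later):

```shell
# raidz1 keeps one disk's worth of parity, so usable space is
# roughly (N - 1) * disk_size.  Disk size here is an assumption.
disks=5
disk_tb_x10=15                               # 1.5 TB, in tenths of a TB (integer math)
usable_x10=$(( (disks - 1) * disk_tb_x10 ))
echo "usable: $((usable_x10 / 10)).$((usable_x10 % 10)) TB"   # -> usable: 6.0 TB
```

The real pool reports somewhat less than this because ZFS reserves some metadata overhead.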


Phase 3: Set up zones

Create a directory for zone images, plus a dedicated directory for the first zone, ldap_zone:

server# zfs create -o mountpoint=/export/zone_img fspool/zone_img
server# zfs create fspool/zone_img/ldap_zone
server# zfs list
NAME                        USED  AVAIL  REFER  MOUNTPOINT
fspool                     1.33M  5.36T   230K  /fspool
fspool/zone_img             460K  5.36T   230K  /export/zone_img
fspool/zone_img/ldap_zone   230K  5.36T   230K  /export/zone_img/ldap_zone
rpool                      11.0G   136G  45.5K  /rpool
rpool/ROOT                 2.83G   136G    31K  legacy
rpool/ROOT/openindiana     2.83G   136G  2.82G  /
rpool/dump                 3.95G   136G  3.95G  -
rpool/export                 64K   136G    33K  /export
rpool/export/home            31K   136G    31K  /export/home
rpool/swap                 4.19G   140G   180M  -

server# cd /export/zone_img
server# chmod 700 ldap_zone
server# ls -al
total 29
drwxr-xr-x 3 root root 3 2012-04-27 20:55 .
drwxr-xr-x 4 root sys  4 2012-04-27 20:52 ..
drwx------ 2 root root 2 2012-04-27 20:55 ldap_zone

Create the first zone, named ldap_zone:

server# zonecfg -z ldap_zone
ldap_zone: No such zone configured
Use 'create' to begin configuring a new zone.
Bad terminal type: "xterm-256color". Will assume vt100.
zonecfg:ldap_zone> create
create: Using system default template 'SYSdefault'
zonecfg:ldap_zone> set zonepath=/export/zone_img/ldap_zone
zonecfg:ldap_zone> set autoboot=true
zonecfg:ldap_zone> set bootargs="-m verbose"
zonecfg:ldap_zone> verify
zonecfg:ldap_zone> commit
zonecfg:ldap_zone> exit
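Before installing, the committed configuration can be reviewed. A sketch using standard zonecfg subcommands:

```shell
# Print the stored configuration as replayable zonecfg commands
zonecfg -z ldap_zone export

# Or show a human-readable summary of all resources
zonecfg -z ldap_zone info
```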

If you add a net resource while configuring the zone, you get an ip-type error, so we skip the network settings here and configure them once the zone has booted.
See here.

List the status of all zones:

server# zoneadm list -cv
  ID NAME             STATUS     PATH                           BRAND    IP    
   0 global           running    /                              solaris  shared
   - ldap_zone        configured /export/zone_img/ldap_zone     solaris  excl

Install, boot, and log in to the zone:

server# zoneadm -z ldap_zone install
...lots of installer output... omitted...

server# zoneadm -z ldap_zone boot  (boot the zone)
server# zlogin -C ldap_zone  (log in to the zone's console)
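After booting, the zone list should show ldap_zone as running, and the zlogin console can be detached with the tilde escape. A sketch:

```shell
# Confirm the zone came up: ldap_zone should now show STATUS "running"
zoneadm list -cv

# Inside "zlogin -C", type ~. at the start of a line to detach
# from the zone console without halting the zone
```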