
Maybe the things described here also hold for (Sun) Solaris, but then again, I didn't try that. When I say 'different', I mean compared to Debian Linux.



During installation you might be asked to log in; use:

  • user: jack
  • password: jack

You can su to root with the password 'opensolaris'.

disable X / gdm

pfexec svcadm disable gdm

Actually, pfexec is a sort of sudo, so as root you won't need it; I won't use it from here on.

However, now you are stuck with a splash screen at boot time which seems to go on forever (it stays until you hit return). Don't just throw away the splash image, though: the system then refuses to boot. Yes, a missing splash screen is a fatal error in OpenSolaris! Instead, disable it in grub. The files are not in /boot/grub/ but in /rpool/boot/grub/ for a change; edit menu.lst and remove all lines regarding the splash image:

splashimage /boot/solaris.xpm
foreground d25f00
background 115d93

also remove 'console=graphics' from the kernel line:

kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics

so that it becomes:

kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS

Finally, you are rid of X. All of this last bit can also be done from the grub screen by pressing 'e' and making your changes there, but those edits will not be permanent.
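The menu.lst edits above can be scripted; here is a minimal sketch using GNU sed's -i option, applied to a sample copy so nothing real is touched (the file contents are illustrative, the real file lives in /rpool/boot/grub/):

```shell
# Work on a sample copy of menu.lst (illustrative content).
cat > /tmp/menu.lst <<'EOF'
splashimage /boot/solaris.xpm
foreground d25f00
background 115d93
title OpenSolaris
kernel$ /platform/i86pc/kernel/$ISADIR/unix -B $ZFS-BOOTFS,console=graphics
EOF

# Drop the three splash-related lines and strip console=graphics
# from the kernel line (GNU sed's -i edits the file in place).
sed -i -e '/^splashimage/d' -e '/^foreground/d' -e '/^background/d' \
       -e 's/,console=graphics//' /tmp/menu.lst

cat /tmp/menu.lst
```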

package installation

Command list

Debian                  OpenSolaris
apt-get install         pkg install
apt-get remove          pkg uninstall
apt-cache search        pkg search
apt-get update          pkg refresh (mostly redundant; pkg refreshes before installing)
apt-get dist-upgrade    pkg image-update
editing sources.list    pkg publisher / pkg set-publisher / pkg unset-publisher

Sorry, most of these are untried as yet:

exceptions :



zfs

ZFS is a very useful filesystem. Too bad it can't be used under the Linux kernel license (except in user space :# ). Another disadvantage is that it is recommended you use entire disks, not slices, yet you cannot put your root partition on the raid-z variant of ZFS. So you end up slicing anyway, or maybe you have a small disk lying around, but then you sacrifice an IDE/SATA slot. In this case I chose to dedicate one 80G disk to the root partition and to set up 3 disks in a raid-z configuration, leaving 160G of effective space (80G going to parity).
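As a quick sanity check of that layout: usable space in a single-parity raid-z is roughly (number of disks - 1) times the size of the smallest disk, so three 80G disks give 160G. A shell arithmetic sketch:

```shell
# raid-z (single parity): usable space is about
# (number_of_disks - 1) * size_of_smallest_disk.
disks=3
size_gb=80                             # each disk (or matching slice)
usable_gb=$(( (disks - 1) * size_gb ))
echo "${usable_gb}G usable, ${size_gb}G lost to parity"
# prints: 160G usable, 80G lost to parity
```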

OK, but maybe you want to edit your disk; fdisk is the tool of choice, but WTF is the device name to put in? Finding it can actually be a serious hunt: the name is a combination of:

  • c# - this is the number of the controller (you need to find out which number it is on your machine), so this would be 0 for hda and hdb, 1 for hdc,hdd etc.
  • t# - this is the target number (the number of the device on the controller; with IDE this is generally determined by the device's position on the IDE cable, and it may be left out completely, e.g. on SATA drives). This would be 0 for hda and hdc, 1 for hdb and hdd, etc.
  • d# - this is the logical unit number (LUN) for SCSI; for IDE disks, which usually have no t#, it is the number of the drive on the controller.
  • s# - number of the slice (on SPARC slices address the whole disk; on x86 they live inside the Solaris fdisk partition, and p# is used to address the fdisk partitions themselves).
  • p# - This on Solaris x86 *only*. p0 refers to the whole disk in the absence of slices. p1-p4 refer to the 4 primary partitions.
  • l - Refers to the FAT partition number. Again, this is Solaris x86 specific. FAT partitions will be numbered as p0:1, p0:2 etc. If your p4 is the extended partition, then p4:1, p4:2 etc refer to FAT partitions in the extended partition.
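Putting the pieces together, the device path has the form /dev/[r]dsk/c#[t#]d#(s#|p#). A small hypothetical helper (pure string assembly, not a real Solaris tool) to illustrate the combinations:

```shell
# Hypothetical helper: build a Solaris device path from its parts.
# An empty target ("") omits the t# component, as on IDE/SATA disks.
devpath() {
  ctrl=$1; target=$2; disk=$3; part=$4    # part: e.g. s0 or p0
  printf '/dev/rdsk/c%s%s%sd%s%s\n' "$ctrl" "${target:+t}" "$target" "$disk" "$part"
}

devpath 7 "" 0 p0    # -> /dev/rdsk/c7d0p0   (first disk on controller 7, whole disk)
devpath 1 3 0 s2     # -> /dev/rdsk/c1t3d0s2 (controller 1, target 3, LUN 0, slice 2)
```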

For example :


Start trying the combinations (in my case they started with c7, so I'm glad I found something better ;)


The format command displays all detected disks and then prompts for which one you want to format, so just ctrl-c your way out.


Creating a new raid-z pool is done with the zpool command. It is recommended to use whole disks, but in my case I had differently sized disks, so I had to create slices on the bigger ones to make them match, which makes the command look like this:

zpool create safe raidz c7d0p2 c8d0

Actually, I forgot the raidz parameter at first, and noticed it because my pool was twice the size I expected. But that is easily fixed with:

zpool destroy safe


With a pool created, you can create filesystems on it as well:

zfs create safe/home
zfs create safe/projects
chown username /safe/home /safe/projects

Both will report the same size, and you can fill them until the whole pool is full, not each of them individually. This makes ZFS filesystems more like directories than partitions.


A problem I also have on FreeBSD is that after every reboot the filesystems are not mounted anymore. A solution might be presented later, but until then you can mount the volumes again with:

zfs mount -a


The device name found above can be opened like this:

fdisk /dev/rdsk/c7d0p0

note the rdsk, NOT dsk


network

ifconfig is present but a little different, and the interface names are different too:

kees@opensolaris:/rpool/images/virtualbox$ ifconfig -a
lo0: flags=2001000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv4,VIRTUAL> mtu 8232 index 1
        inet netmask ff000000 
e1000g0: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 2
        inet netmask ffff0000 broadcast
e1000g2: flags=1004843<UP,BROADCAST,RUNNING,MULTICAST,DHCP,IPv4> mtu 1500 index 4
        inet netmask ff000000 
vboxnet0: flags=1000842<BROADCAST,RUNNING,MULTICAST,IPv4> mtu 1500 index 3
        inet netmask 0 
lo0: flags=2002000849<UP,LOOPBACK,RUNNING,MULTICAST,IPv6,VIRTUAL> mtu 8252 index 1
        inet6 ::1/128 
e1000g0: flags=2004841<UP,RUNNING,MULTICAST,DHCP,IPv6> mtu 1500 index 2
        inet6 fe80::230:48ff:fe71:3336/10 

Setting a static IP address is rather similar; for example (the address and netmask here are just illustrative):

ifconfig e1000g2 192.168.1.10 netmask 255.255.255.0 up


cpu info

Use the psrinfo command:

psrinfo -v

gives something like :

Status of virtual processor 0 as of: 01/28/2010 12:22:33
 on-line since 01/22/2010 16:29:47.
 The i386 processor operates at 2800 MHz,
       and has an i387 compatible floating point processor.
Status of virtual processor 1 as of: 01/28/2010 12:22:33
 on-line since 01/22/2010 16:29:49.
 The i386 processor operates at 2800 MHz,
       and has an i387 compatible floating point processor.
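If you only want the clock speed out of that, the output is easy to post-process. A sketch run on a captured sample of the output above (on the real system you would pipe psrinfo -v instead of echoing a string):

```shell
# Extract the MHz figure from (a sample of) psrinfo -v output.
sample='Status of virtual processor 0 as of: 01/28/2010 12:22:33
 on-line since 01/22/2010 16:29:47.
 The i386 processor operates at 2800 MHz,
       and has an i387 compatible floating point processor.'

echo "$sample" | awk '/operates at/ { print $(NF-1), "MHz" }'
# prints: 2800 MHz
```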


memory info

For memory I just reverted to the top command, which reports physical memory. (prtconf also prints the total memory size near the top of its output.)


bad PBR sig

A message that appears very early in the boot sequence. It happened after I moved the 2 disks of a mirrored ZFS configuration from one Supermicro machine to another. The most probable cause was that the disks ended up in swapped positions, because swapping them back solved the problem; a ZFS mirror probably has an MBR on one disk only. (Presumably installgrub could be used to put the boot blocks on the second disk as well.)
