
Busybox init process. PID must be 1 #47

Open
szorfein opened this issue Sep 22, 2017 · 8 comments

@szorfein
Contributor

szorfein commented Sep 22, 2017

Hi, I have a problem with BusyBox. Here is the output that ends in a kernel panic.

+ echo [134]: zpool import -R /newroot zfsforninja
+ [ ! 134 ]
+ return 134
+ [ 1=1 ]
+ zfs mount
+ grep -q zfsforninja
+ debug -d zfs mount -v0 -a
+ local _cmd _opt _ret
+ [ 5 -ge 1 ]
+ _opt=-d
+ shift
+ [ 4 -ge 1 ]
+ _cmd=zfs mount -v0 -a 
+ break
+ eval zfs mount -v0 -a
+ zfs mount -v0 -a
+ _ret=0
+ echo [0]: zfs mount -v0 -a
+ [ ! 0 ]
+ return 0
+ rm /run/sh.pid
+ echo ROOT=zfsforninja
+ _ret=0
+ echo [0]: dozfs ROOT 1 zfsforninja
+ [ ! 0 ]
+ return 0
+ retval=0
+ exit 0

>>> Switching to init shell run level 4s
>>> Switching Root
BusyBox v1.27.2 (2017-09-21 16:36:49 -00) multi-call binary.

Usage: switch_root [ -c /dev/console ] NEW_ROOT NEW_INIT [ARGS]

Free initramfs and switch to another root fs:
chroot to NEW_ROOT, delete all in /, move NEW_ROOT to /,
execute NEW_INIT. PID must be 1. NEW_ROOT must be a mountpoint.

    -c DEV Reopen stdio to DEV after switch
[ 58.965964] Kernel panic - not syncing: Attempted to kill init! exitcode=0x00000100
...
@tokiclover
Owner

The issue is that the rootfs is not mounted, so switch_root has nothing to switch to. Can you provide the kernel command line arguments? I can then trace the erroneous zfs hooks.

@szorfein
Contributor Author

szorfein commented Oct 3, 2017

So, my kernel args:

# vim /etc/mkinitramfs.conf
    env="${env} root=zfsforninja zfs=3a015a18-5beb-4bd6-ba6a
    luks=gpg:UUID=bb7e319c-9df9-4c96-96ec-79a1b4d38614:/key.gpg"

And if needed, a zpool status to show what it looks like once mounted:

# zpool status
    pool: zfsforninja
    state: ONLINE
    config:
         ...
        zfsforninja                   ONLINE
             3a015a18-5beb-4bd6-ba6a  ONLINE

@szorfein
Contributor Author

szorfein commented Oct 3, 2017

Actually, the script (from hooks/zfs) does this:

debug -d zpool import -f $_opt -R $NEWROOT "$_pool"

And my argument zfs=3a015a18-5beb-4bd6-ba6a needs to be looked up in the /dev/mapper directory, so maybe change it to:

 debug -d zpool import -f $_opt -R $NEWROOT -d /dev/mapper "$_pool"

Or use the special ZFS variable ZPOOL_IMPORT_PATH="/dev/mapper:/dev", which needs to be exported:

export ZPOOL_IMPORT_PATH="/dev/mapper:/dev"
zpool import  -f $_opt -R $NEWROOT $_pool 
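A note on the export/unset pair: in POSIX sh, an assignment placed before a command is scoped to that command only, so the variable never leaks into the rest of the hook. A minimal sketch (using sh -c in place of zpool, which is not assumed to be available here):

```shell
# Per-command environment assignment: ZPOOL_IMPORT_PATH is visible to the
# child command only, and stays unset in the calling shell afterwards.
ZPOOL_IMPORT_PATH="/dev/mapper:/dev" sh -c 'echo "$ZPOOL_IMPORT_PATH"'
echo "after: ${ZPOOL_IMPORT_PATH:-unset}"
```

So `ZPOOL_IMPORT_PATH="/dev/mapper:/dev" zpool import ...` would work without a separate export/unset.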

I will run tests during the day.

@szorfein
Contributor Author

szorfein commented Oct 3, 2017

I'm almost there...
The problem seems to come from the command line.

The doc says: zfs=map-dev,map-dev; yes, I have tried:

  • zfs=1f3c350c-ac35-421c-91e1-2418194ececa-UUID=1f3c350c-ac35-421c-91e1-2418194ececa
  • zfs=luks1-UUID=1f3c350c-ac35-421c-91e1-2418194ececa

and the boot:

  • eval dmopen luks1-UUID=1f3c350c-ac35-421c-91e1-2418194ececa
  • break
  • eval dmopen luks1-UUID=1f3c350c-ac35-421c-91e1-2418194ececa
  • dmopen luks1-UUID=1f3c350c-ac35-421c-91e1-2418194ececa
    ...
    eval set-- luks1-UUID=1f3c350c-ac35-421c-91e1-2418194ececa
  • set -- luks1-UUID=1f3c350c-ac35-421c-91e1-2418194ececa
  • local _dev=2418194ececa _hdr= _header _map=luks1-UUID=1f3c350c-ac35-421c-91e1
  • blk 2418194ececa _dev
  • local _adw _blk
  • BLK 2418194ececa
    ...
    _cmd=gpg -qd "/mnt/tok/key.gpg | cryptsetup open /dev/sda luks1-UUID=1f3c350c-ac35-421c-91e1-2418194ececa --key-file=-

It tries to open the wrong device.
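My reading of the trace above (a guess, not verified against the hook source): if dmopen splits the map-dev argument on the last '-', a UUID that itself contains dashes gets cut in the wrong place, which matches the _map and _dev values in the log. A minimal reproduction in plain sh:

```shell
arg='luks1-UUID=1f3c350c-ac35-421c-91e1-2418194ececa'
# Splitting on the LAST dash (what the trace suggests) breaks dashed UUIDs:
map=${arg%-*}     # luks1-UUID=1f3c350c-ac35-421c-91e1
dev=${arg##*-}    # 2418194ececa
# Splitting on the FIRST dash keeps the UUID intact:
good_map=${arg%%-*}   # luks1
good_dev=${arg#*-}    # UUID=1f3c350c-ac35-421c-91e1-2418194ececa
echo "$map / $dev / $good_map / $good_dev"
```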

@szorfein
Contributor Author

szorfein commented Oct 3, 2017

I could finally boot; it was difficult!
I will release a patch in the evening.

So, from hooks/zfs:

debug -d zpool import -f $_opt -R $NEWROOT "$_pool"

$NEWROOT is empty during the boot sequence, so I had to replace it with the real path /newroot:

export ZPOOL_IMPORT_PATH="/dev/mapper:/dev/disk/by-uuid:/dev"
debug -d zpool import -f $_opt -R /newroot "$_pool"
unset ZPOOL_IMPORT_PATH

In /etc/mkinitramfs-ll.conf, I modified this:

vim /etc/mkinitramfs-ll.conf
module_zfs="zlib_deflate zpl zavl zcommon znvpair zunicode zfs"
bin_zfs="zfs zpool mount.zfs zdb fsck.zfs"

This requires the library libgcc_s.so.1 (on Gentoo it lives at /usr/lib64/gcc/x86_64-pc-linux-gnu/6.4.0/libgcc_s.so.1), which needs to be copied to /usr/share/mkinitramfs-ll/usr/lib or elsewhere.

And my kernel line is finally the same as before:

env="${env} root=zfsforninja zfs=3a015a18-5beb-4bd6-ba6a
luks=gpg:UUID=bb7e319c-9df9-4c96-96ec-79a1b4d38614:/key.gpg"

and I got past the step where I had to recompile the kernel :)

@szorfein
Contributor Author

szorfein commented Oct 3, 2017

To copy the library (libgcc_s.so.1) for ZFS, a trick like this?:

lib=libgcc_s.so.1
src=$(find /usr/lib64 -type f -name "$lib" | sort -r | head -n1)
dest=/usr/share/mkinitramfs-ll/usr/lib
[ ! -f "$dest/$lib" ] && cp -a "$src" "$dest/"
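An alternative sketch (copy_lib is my own name, not part of mkinitramfs-ll): wrap the copy-if-missing logic in a small function, and let the toolchain resolve the library path via gcc -print-file-name instead of scanning /usr/lib64:

```shell
# copy_lib SRC DESTDIR: stage SRC into DESTDIR unless it is already there.
copy_lib() {
    [ -f "$1" ] || return 1
    mkdir -p "$2"
    [ -f "$2/$(basename "$1")" ] || cp -a "$1" "$2/"
}

# Assumed usage (paths taken from this thread; gcc resolves the location):
# copy_lib "$(gcc -print-file-name=libgcc_s.so.1)" /usr/share/mkinitramfs-ll/usr/lib
```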

@tokiclover
Owner

Sorry, I was busy with real life.
Thanks for reporting, debugging and making patches.
Can you make a PR please?

@szorfein
Contributor Author

No problem, do you want a new PR?
