I’ve been setting users’ passwords successfully on CentOS/RHEL from within my kickstarts, using entries like this:
echo p4ssw0rd | passwd --stdin username
… however, that unfortunately doesn’t work on at least Ubuntu (and possibly many other distros as well).
Now — finally — and thanks to this comment, I have an answer; and it looks something like this:
echo username:password | chpasswd
(Note that I’ve only ever tested this on CentOS 6.2. It should work in plenty of other places too, though, especially other RHEL-based distros.)
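A nice side benefit of chpasswd is that it can also take a pre-hashed password via its -e flag, so the plaintext never has to sit in the kickstart or preseed at all. A minimal sketch (the username and password here are just placeholders; substitute your own):

```shell
# Placeholders; substitute your own values.
USER=username
PASS=p4ssw0rd
# -1 produces an MD5-crypt hash, which even CentOS 6-era crypt understands.
HASH=$(openssl passwd -1 "$PASS")
echo "$USER:$HASH"
# In a %post section you would pipe that line into chpasswd -e instead:
#   echo "$USER:$HASH" | chpasswd -e
```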
I really wanted to use the Smokeping init script that ships with Ubuntu 10.04.* LTS on a CentOS 6.2 box. One look at it, however, and you’ll very quickly realise that it isn’t going to work out of the box, possibly for other reasons too, but definitely because a CentOS box doesn’t have “start-stop-daemon”; not yet, at least.
This helpful post suggested that if you pull the dpkg source from one of the Debian mirrors you can build it, albeit quite nastily, and end up with a working start-stop-daemon. It doesn’t have to be that nasty, though: newer versions of dpkg build cleanly, as I discovered and have detailed below. As root (or using sudo), do the following:
wget -c "http://za.archive.ubuntu.com/ubuntu/pool/main/d/dpkg/dpkg_<version>ubuntu3.tar.bz2"
tar xjvf dpkg_<version>ubuntu3.tar.bz2
cd dpkg-<version>ubuntu3
./configure --without-install-info --without-update-alternatives --without-dselect
make && make install
Now if you type “which start-stop-daemon” you should discover that it’s built and installed into /usr/local/sbin, and works perfectly, just like it’s supposed to. And with that hurdle out of the way, I could finish getting that Ubuntu init script working on CentOS. Happy times!
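One gotcha worth checking, since make install drops the binary into /usr/local/sbin: Ubuntu init scripts typically set their own PATH near the top of the script, and that PATH usually doesn’t include /usr/local/sbin. A small sketch of the tweak I mean (the exact PATH line in your borrowed init script may differ):

```shell
# Near the top of the borrowed init script, make sure the PATH it sets
# includes /usr/local/sbin, where make install put start-stop-daemon:
PATH=/usr/local/sbin:/sbin:/usr/sbin:/bin:/usr/bin
export PATH
# Sanity check: this should now print the binary's full path.
command -v start-stop-daemon || echo "start-stop-daemon still not in PATH"
```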
Sorry, this is a bit lazy of me, but at the moment I can only confirm that this works on Ubuntu 10.04.4 LTS. It might well work on other versions too, and maybe also on Debian, of course.
If, when you run an apt-get update, you see something like this right at the end:
W: GPG error: http://196.x.y.z lucid Release: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 2940ABA983EF826A
… then install “add-apt-key” on the box and run it, appending the missing key ID to the end of the command, as shown below.
root@xyz-box-bry-01:~# apt-get install add-apt-key
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
  add-apt-key
0 upgraded, 1 newly installed, 0 to remove and 5 not upgraded.
Need to get 5,314B of archives.
After this operation, 81.9kB of additional disk space will be used.
Get:1 http://196.x.y.z/ubuntu/ lucid/universe add-apt-key 1.0-0.5 [5,314B]
Fetched 5,314B in 0s (270kB/s)
Selecting previously deselected package add-apt-key.
(Reading database ... 88896 files and directories currently installed.)
Unpacking add-apt-key (from .../add-apt-key_1.0-0.5_all.deb) ...
Processing triggers for man-db ...
Setting up add-apt-key (1.0-0.5) ...
root@xyz-box-bry-01:~# add-apt-key 2940ABA983EF826A
gpg: directory `/root/.gnupg' created
gpg: new configuration file `/root/.gnupg/gpg.conf' created
gpg: WARNING: options in `/root/.gnupg/gpg.conf' are not yet active during this run
gpg: keyring `/root/.gnupg/secring.gpg' created
gpg: keyring `/root/.gnupg/pubring.gpg' created
gpg: requesting key 83EF826A from hkp server subkeys.pgp.net
gpg: /root/.gnupg/trustdb.gpg: trustdb created
gpg: key 83EF826A: public key "Opscode Packages " imported
gpg: Total number processed: 1
gpg:               imported: 1
OK
root@xyz-box-bry-01:~#
I’ve found that sometimes the chosen key server doesn’t have the key, so nothing gets imported, but re-running the command generally fixes that, since the next key server picked usually does have it. Once you’ve successfully imported the key, run “apt-get update” again and your problem should no longer exist.
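Rather than re-running the command by hand, you can wrap the import in a small retry loop. This helper is just a sketch of that idea (add-apt-key and the key ID in the usage comment are from the transcript above):

```shell
# Generic retry helper: run a command up to 3 times with a short pause
# between attempts, returning non-zero only if every attempt fails.
retry() {
    n=0
    until "$@"; do
        n=$((n + 1))
        [ "$n" -ge 3 ] && return 1
        sleep 2
    done
}
# Usage against a flaky key server:
#   retry add-apt-key 2940ABA983EF826A
```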
There was something else I wanted to say but I’ve totally forgotten what it was.
Whilst working at IS, I’ve sent VMware servers to Accra, Nairobi, Lagos, Maputo, London, Durban, Cape Town and then of course to two sites in Johannesburg. The servers in Accra, Nairobi, Lagos and Maputo run various virtual machines required by the NMS team (of which I am a member), as well as a whole lot that a sister team of ours uses. The NMS machines are things like Syslog boxes and SNMP gateways, etc.
Connectivity to those regions is anywhere from awesome — if that region is on the eastern side of Africa and connects via Seacom — right down to pretty much unusable. The kind of unusable where you spend a whole day just trying to log into the console of a virtual machine because every time you try to write “root” the word will come out as “rroot” and then “rooooot” and then “rootttt”. It’s one of the most frustrating things I’ve ever had to do in my life, I’m sure of it.
Which kinda brings me to the point of this post. I decided that the best way to deploy the various machines (given that there’s never any time to send the VMware server itself to the region with all the virtual machines already built) was to kickstart them, using preseeds for the Ubuntu boxes and kickstarts for the CentOS/RHEL boxes. This has worked famously for me, and I’m now able to have fully built, NMS standards compliant virtual machines in any of those regions in ten minutes or less.
That was until I upgraded the key infrastructure boxes (dhcp & tftp servers etc.) to Lucid. Suddenly everything ground to a halt. The fix however was very simple. In fact I feel kinda guilty that you’ve had to read this whole long story just to get such a simple solution to your problem.
Before Lucid these boxes ran Hardy. I used tftpd-hpa running as a daemon, using the standard /var/lib/tftpboot directory as the TFTP root. My /etc/default/tftpd-hpa file looked like this:
#Defaults for tftpd-hpa
RUN_DAEMON="yes"
OPTIONS="-l -s /var/lib/tftpboot"
After upgrading to Lucid that file had changed so that it looked like this:
# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/srv/tftp"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS=""
However, the service wouldn’t start, and the network installs kept failing before they’d even begun. Changing the contents of /etc/default/tftpd-hpa to look more like this solved my problem. The “-4” is there because I switch IPv6 off on all my Lucid machines by adding “ipv6.disable=1” to the “GRUB_CMDLINE_LINUX_DEFAULT” line in /etc/default/grub.
# /etc/default/tftpd-hpa
TFTP_USERNAME="tftp"
TFTP_DIRECTORY="/var/lib/tftpboot"
TFTP_ADDRESS="0.0.0.0:69"
TFTP_OPTIONS="-4 --secure"
Bounce the service and you’re sorted, and back on the road with your network installs.
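For completeness, the GRUB_CMDLINE_LINUX_DEFAULT edit mentioned above can be scripted with sed. Here’s a sketch rehearsed against a sample file rather than the real /etc/default/grub (the “quiet splash” contents are just an example; yours will differ):

```shell
# Rehearse the edit on a sample file first; the same sed line works
# against the real /etc/default/grub on Lucid.
printf 'GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"\n' > /tmp/grub.sample
sed -i 's/^GRUB_CMDLINE_LINUX_DEFAULT="\([^"]*\)"/GRUB_CMDLINE_LINUX_DEFAULT="\1 ipv6.disable=1"/' /tmp/grub.sample
cat /tmp/grub.sample
# Once you apply this to the real file, run update-grub and reboot.
```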