I’ve been having fun upgrading, or should one say migrating, to RHEL6 (Scientific Linux 6.1). As I run KVM (virtualization), I have a bridge set up for my VMs to connect to the network.
Now I suddenly noticed that IPv6 was enabled on my ‘eth0’, which is part of the bridge. This should not happen, as it can cause communication issues (yes, I have IPv6-enabled devices on my network). After a bit of searching I found a bug at Red Hat (bug #496444) which states that IPV6INIT=no does not work for a single interface; in fact it does not work at all. The bug is still open.
Eventually I found something which works just fine:
# echo 1 > /proc/sys/net/ipv6/conf/ethX/disable_ipv6
To make this permanent, add it to /etc/rc.d/rc.local; IPv6 will then be disabled every time the server restarts.
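A minimal sketch of the rc.local addition, assuming eth0 and br0 are the interfaces involved in the bridge (adjust the names to your setup):

```shell
# /etc/rc.d/rc.local -- runs once at the end of boot
# Disable IPv6 on the bridged interfaces (interface names are examples)
for iface in eth0 br0; do
    echo 1 > /proc/sys/net/ipv6/conf/${iface}/disable_ipv6
done
```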
We all like RAID, and with Linux it’s a pretty cheap way to get enough space to store one’s DVD/music collection. I run a fairly large RAID5 which has been working without a hiccup, but much to my surprise, when checking /proc/mdstat I saw:
md5 : active raid5 sdh1 sdg1 sdf1 sde1 sdd1
7814033408 blocks level 5, 64k chunk, algorithm 2 [5/5] [UUUUU]
[================>....] resync = 81.5% (1593083392/1953508352) finish=2362.7min speed=2540K/sec
Then I checked dmesg and /var/log/messages: no errors, nothing. So how could that happen? After a bit of searching I found a long discussion – it seems that as of CentOS 5.4 they decided to add /etc/cron.weekly/99-raid-check, which is controlled by /etc/sysconfig/raid-check. That is fair enough, but with a rebuild speed of 2540K/sec and a raid of more than 7TB, the check might take slightly more than a week.
One can reduce the check to once a month by moving the script from cron.weekly to cron.monthly, but it will still take time.
In general there is nothing wrong with running a resync on a raid, but it can take some time, and it should not be run every week – and if it is, they should have forced the speed up to something like 20000K/sec, or at least made it configurable. Especially for those of us with a controller which can handle +100000K/sec.
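For what it’s worth, the kernel does expose the rebuild speed limits, so one can raise them by hand – a sketch, assuming the stock defaults are still in place:

```shell
# Current limits in K/sec per device; defaults are usually 1000 (min)
# and 200000 (max)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise the guaranteed minimum so the weekly check finishes in sane time
echo 20000 > /proc/sys/dev/raid/speed_limit_min

# And move the check itself to once a month
mv /etc/cron.weekly/99-raid-check /etc/cron.monthly/
```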
One small issue one can run into after installing forked-daapd is that, when using the iPhone/iPad Remote, one cannot play any songs.
Looking in /var/opt/forked-daapd.log one will see something like:
[2011-04-14 11:24:59] laudio: Could not open playback device: No such file or directory
[2011-04-14 11:24:59] player: Could not open local audio
[2011-04-14 11:24:59] dacp: Could not start playback
The reason for this issue is that the device one is trying to send the output to is not known to forked-daapd, or it is not selected by default.
The way to solve this is:
1) install the sqlite3 tools
2) use Bonjour Browser to find the id of the device, which will be in hex, then use the Calculator to convert it to decimal.
3) stop forked-daapd (/etc/init.d/forked-daapd stop)
4) open the database with sqlite3:
# sqlite3 /var/cache/forked-daapd/songs3.db
sqlite> insert into speakers values (<device id in decimal>, 1, 100);
sqlite> update speakers
… set selected = 0
… where id = 0;
5) start forked-daapd (/etc/init.d/forked-daapd start)
And now in Remote it should be possible to play songs.
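Converting the device id from step 2 can also be done in the shell instead of with the Calculator (sqlite3 wants the plain decimal value). A sketch with a made-up id – yours will differ:

```shell
# Hypothetical speaker id as shown by Bonjour Browser (hex)
HEXID="1A2B3C"

# printf converts the hex id to the decimal value used in the
# insert into the speakers table
DECID=$(printf '%d' "0x${HEXID}")
echo "${DECID}"   # prints 1715004 for this example id
```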
Having reshaped a RAID5 twice in as many weeks, I’ve found that adding a 2TB drive takes a good 36 hours, which is kind of frustrating.
After a bit of searching, I discovered that there is a setting in /sys/block/md*/md/stripe_cache_size, which by default is set to 256. This is enough for normal daily operations, but during a reshape I was seeing speeds of around 16MB/s, which meant that it would take a long time to finish.
Results are mixed: some people run into issues if they increase the value beyond 1024, others see no problems until they go to 64k. There is a performance test (see http://peterkieser.com/wp-content/uploads/2009/11/stripe_cache_size2.png) which shows the optimum at around 8192, but then again this might vary with other factors.
Try it out, it might help in your setup.
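A sketch of bumping the value for the duration of a reshape, assuming the array is md5 as above. Note that the cache costs memory – roughly stripe_cache_size × 4KiB × number of drives, so 8192 on a 5-drive array is about 160MB:

```shell
# Default is 256; raise it while the reshape is running
echo 8192 > /sys/block/md5/md/stripe_cache_size

# Watch whether the reshape speed improves
cat /proc/mdstat

# Drop it back afterwards to free the memory
echo 256 > /sys/block/md5/md/stripe_cache_size
```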
I recently had the “pleasure” of trying a NAS device, more specifically a QNAP TS-410, which is a handy little box that takes 4 hard drives (each up to 2TB) and has 2 gigabit network ports, a few USB ports, and 2 eSATA ports.
Very neat, but it’s let down by the lack of a decent CPU (Marvell 6281 800MHz): it takes around 18 hours to build a 3x2TB RAID5 (which really IS slow). It might come down to the drives not being aligned, as the OS (software) does not take into consideration that drives of 1TB or larger no longer use 512 bytes per sector – 2TB drives (mostly) use 4kB.
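If one wants to rule out alignment on a plain Linux box, a sketch for partitioning a 4k-sector drive with parted (/dev/sdX is a placeholder – adjust for your drive):

```shell
# Starting the partition at sector 2048 (1MiB) keeps it aligned for
# both 512-byte and 4KiB physical sectors
parted -s /dev/sdX mklabel gpt
parted -s /dev/sdX mkpart primary 2048s 100%

# Ask parted to verify the alignment of partition 1
parted -s /dev/sdX align-check optimal 1
```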
The software in itself is ok’ish; one can do what one wants to do, except for one small thing which is fine for home use: in the office it would be nice to be able to configure UID and GID per user and group, or use an external source like LDAP or NIS. Currently only AD is available, which is somewhat dissatisfying, as it does not provide UID/GID. So for an environment where NFS is used, this box – and any other one from QNAP, as they all run the same software – is a no-go.
Transfer speed: well, with a bit of luck one can get about 20MB/s writing and about 35MB/s reading – that is with a RAID5 built with Samsung F4 2TBs and gigabit network. Needless to say, I was disappointed.
I was lucky enough to find a nice home for the thing, and it is no longer in my possession.
I moved the drives to my old trusted server, and am now getting around 50-55MB/s writing.
One of the most annoying things with XEN and loopback devices is that at some point you will run out of them – especially if, like me, you have a setup with more than 15 virtual servers (not all of them running at the same time).
I just ran into “Error: Device 5632 (vdb) could not be connected. losetup $lo flags $loopdev $file”, which after some digging means that there are no free loopback devices.
The nice thing is that there is a way to solve it. In your DomU config, replace
disk=[ ..., 'file:/..../file.iso...
with
disk=[ ..., 'tap:aio:/..../file.iso...
Then your DomU should start just fine.
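To make the change concrete, here is a made-up complete disk stanza before and after – the paths and device names are examples, only the 'file:' versus 'tap:aio:' prefix matters:

```
# Before: each 'file:' entry is backed by a loopback device,
# of which there is a limited pool
disk = [ 'file:/srv/xen/vm01/disk.img,xvda,w',
         'file:/srv/xen/iso/install.iso,xvdb:cdrom,r' ]

# After: blktap ('tap:aio:') does not consume loopback devices
disk = [ 'tap:aio:/srv/xen/vm01/disk.img,xvda,w',
         'tap:aio:/srv/xen/iso/install.iso,xvdb:cdrom,r' ]
```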
Read more here.
Most if not all of the photos in the posts are gone. No, I did not lose them – well, kind of, on purpose…
After 5 years I felt that I needed some new hardware for my server, and one of the things I decided was to move all my photos to SmugMug, where for a very small amount (the current $/€ exchange rate is on my side) they host my photos (have a look).
Therefore all the links to photos are still in the posts, but the photos themselves no longer exist there (well, kind of).
But I’m now a happy man: current OS (CentOS 5.2), some new hardware, and I don’t have to care about my server for another 5 years.