Older blog entries for prla (starting at number 132)

5 Jul 2008 (updated 5 Jul 2008 at 00:07 UTC) »

It's been an interesting evening. Back at my parents' home tonight (I'll head back to Evora tomorrow), I've been trying to get the information system framework running on the new Mac Mini that currently lives in my bedroom. So this entry goes some way towards documenting this evening's trip.

Leopard ships with a fully functional copy of Apache 2.2, and getting PHP5 to play along with it is a simple matter of uncommenting one line in httpd.conf. Installing PostgreSQL is a breeze using Marc Liyanage's PostgreSQL package, not forgetting to set the cluster creation encoding to Latin1: everything in the information system is Latin1, and matching it saves a lot of headaches.
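For reference, a sketch of both steps; the module path and data directory are assumptions that may differ on your install. In /etc/apache2/httpd.conf, uncomment:

LoadModule php5_module libexec/apache2/libphp5.so

And initialize the PostgreSQL cluster with the right encoding:

$ initdb -D /usr/local/pgsql/data --encoding=LATIN1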

The trouble began when I noticed that Leopard doesn't actually ship with PostgreSQL bindings in its PHP5 installation. So basically there was no choice other than recompiling PHP from scratch. I tried Marc's PHP5 package, which includes PostgreSQL support, but alas, everything went well until the installation process bombed out at the end with a cryptic error.

So, off to compiling PHP from source, which had me searching for the Leopard DVDs so I could install the Xcode tools, namely gcc. Once that was done, compiling PHP was a breeze. The problem was that once it was installed, Apache complained that the PHP module had the wrong architecture. One minute of Googling told me that Leopard's Apache comes pre-built for all four architectures, so everything I install that interfaces with it needs to be built the same way. A prospect that clearly sucked.
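For the record, the build itself boiled down to something like this (a sketch; the apxs and PostgreSQL paths are assumptions depending on where things landed on your system):

$ ./configure --with-apxs2=/usr/sbin/apxs --with-pgsql=/usr/local/pgsql
$ make
$ sudo make install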

Miraculously, someone came up with a much better and hassle-free choice: stripping the httpd binary of the surplus architectures, leaving only the 32-bit one. Here's the magic sauce:


$ cd /usr/sbin
$ sudo cp httpd httpd-fat
$ sudo lipo httpd -thin i386 -output httpd

Works like a charm.
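You can confirm the result with lipo itself; the output should be roughly:

$ lipo -info /usr/sbin/httpd
Non-fat file: /usr/sbin/httpd is architecture: i386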

Et voilà. It's up and running!

Now I'm spent; I'd better crawl to bed.

3 Jul 2008 (updated 3 Jul 2008 at 22:07 UTC) »

Adapting to development under CakePHP and the university's information system architecture has been slow but steady, and it really picked up today. Now I see how scarce my past development under MVC frameworks really was. It obviously helped me understand the foundations of what models, views and controllers are, but I guess I still hadn't grasped what they really are. That, alas (or not), only comes with extensive exposure to somewhat complex systems that use them.

In any case, it's been a really interesting trip so far, and the best side effect has been learning a lot of simple but neat Emacs tricks with Gonçalo, my supervisor on this particular project. Another important thing is that I'll probably be developing another information system, with entirely different subject matter, and the knowledge I've been acquiring will surely prove invaluable later on. Today has been somewhat of a breakthrough, as I've been implementing from scratch a lot of functionality which, despite being simple at its core, was nothing but a major headache less than a week ago.

And when it comes to database design, I may not come up with the best relational designs in the world, but I surely understand them much more clearly now. Proof is how different (and, may I add, worse) the schema for a side project of mine was before I got to work on this stuff, compared to now that I've learned a couple of things.

Oh, and I've been carrying the MacBook along to work again. I simply cannot live without this baby, and I guess using a shitty keyboard on the desktop also prevents me from really feeling comfortable. Other than that, I just miss the comfort I find in Mac OS, regardless of my everlasting love for Linux, which I've used for over a decade now.

It's also been two months since I started working, and the truth is that I've done little else on the side. The Football Manager 2008 Portuguese translation has kicked off and there's a web app I'd like to take a stab at, but both are on the backburner until I get back on my feet, so to speak. The translation, however, I need to start as soon as possible.

More to come. Interesting, albeit difficult, tiresome and sometimes nerve-wracking, times.

Database engineering has always been a hassle for me and now I have to deal with quite a bit of it. Now I kinda like it and have been learning a lot. It helps to work directly under someone who's proficient and oozes experience. In the process I've also been picking up a lot of Emacs tricks which are a huge help for productivity. This, in fact, is a direct result of leaving my MacBook at home now and using the desktop that's been assigned to me at work:

model name : Intel(R) Core(TM)2 Duo CPU E4500 @ 2.20GHz

So, what I'm developing at work is an information system to manage the performance evaluation of public administration workers and their superiors; in Portugal, this is called SIADAP. I need to deliver the first part of the system, up and running, by the end of next week, and not being entirely productive with CakePHP yet is a bit of a problem.

In the meantime, my back is killing me again. I always predicted I'd have back problems but not when I'm bloody 24 going on 25. I'm hoping I won't need to pay a visit to the osteopath this time around, but it all depends on how I feel later today.

Forgot the damn Pattern Recognition book back in my parents' home. Meaning I'll have to start reading something else for the next couple of weeks. Strongest candidates are "Quantico" by Greg Bear and "Life of Pi" by Yann Martel. I think Quantico will win by a nose, for now.

Hitting a wall at work: I can't seem to get the information system codebase checkout to run properly under my development machine's Apache. Something's up with either the Apache config or the CakePHP config itself. Either way, it's worrying me, because I need to get up to speed as soon as possible and here I am wasting time not even able to get things running, let alone write some code.

More on this later...

Hard to believe it's been this long.

Doing the lazy, lazy thing for the whole weekend, not giving a damn about any work. Finally finished "Neuromancer", which was both interesting and confusing in places. I guess reading about technology from 1985 with over 20 years of real-world hindsight on that same technology explains my confusion. SF authors are right a lot of the time, but not always. Nevertheless, I feel better having finally read it, and it amazes me how much "The Matrix" actually resembles it. Now I'm reading "Pattern Recognition" by the very same William Gibson and, about 100 pages in, enjoying it quite a lot more.

Work has picked up, and writing information systems for important things in CakePHP is a mystery that slowly unfolds. I'd better get proficient writing web apps with this framework, and that right soon.

Yesterday, I made a detour in Lisbon to get D. to the bus station so she could get home for the weekend, and decided to go to Colombo's FNAC while I was at it, in order to buy Porcupine Tree tickets for the October gig in Almada. While doing so, I couldn't resist the fresh money in my wallet, so to speak, and got myself a couple of treats: Portishead's "Third" and Black Mountain's "In the Future". Both are sublime and will surely feature in my Top 10 come the end of the year, unless the second half of the year is absolutely crazy in terms of sheer quality.

But, alas or not, the weekend is coming to an end and I need to pack, shower, dine and get my ass moving back to Evora. Work resumes tomorrow at 9am and I never thought I'd be happy to have a 9-5 job, but I do. I needed the stability for a while.

10 Nov 2006 (updated 10 Nov 2006 at 08:53 UTC) »
Adventures in LDAP land

Until recently, I honestly had no idea what LDAP was all about. My work has now led me to research it a bit and implement a small-sized solution for the research centre. I still have no idea what LDAP is all about, but here are some scribblings I've gathered on the matter while we're at it. Getting LDAP to work on Linux with the OpenLDAP tools is largely a matter of figuring out the right schemas, filling the database, and pointing things at it.

But why LDAP? When administering a network of more than trivial size, it soon becomes a pain to create and maintain user accounts. An LDAP server can be used to provide a central point of control for Unix and Samba accounts, as well as email and web server authentication. There's always more to it than meets the eye, but in this particular instance what we want is a set of workstation machines in a private subnet behind a router (which incidentally acts as the LDAP server as well) with central authentication. Basically, all user login information is stored on the server, leaving only local root (and service) accounts on each machine for administration purposes. Moreover, we want each user home directory to be remotely mounted from an external file server (the HP MSA1000 storage array I've been blabbering about) via NFS. That last part will be covered in a forthcoming post.

Onwards to the configuration… setting up LDAP involves configuring both the server and however many clients we want using LDAP authentication. In this case we're working off a Debian system; configuration filenames can and will vary across distributions. (The following is, again, in a personal-notes style; if you come across this and need any further explanation, feel free to email me and I'll try my best to help.)

SERVER SIDE

# apt-get install slapd ldap-utils

Configuration of these, depending on your setup and environment, should be something along these lines:

Omit OpenLDAP server configuration? no
DNS domain name: ldap.example.org
Name of your organization: example_organization
Admin password: <administrative LDAP password>
Database backend to use: BDB
Do you want your database to be removed when slapd is purged? no
Allow LDAPv2 protocol? no
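If those questions don't come up at install time, or you want to change the answers later, Debian can re-ask them:

# dpkg-reconfigure slapd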

Now is probably a good time to set up some basic organization/user/group information. This can be done either from scratch, perhaps using some app to manage LDAP, or using a basic set of LDIF (LDAP Data Interchange Format) files. See http://www.moduli.net/pages/sarge-ldap-auth-howto under "Set Up Base Information and Test User and Group" for more on this. One nitpick, also covered in the aforementioned guide, is allowing users to change their own details, including their password, as is usually possible when accounts are stored locally. This can be achieved by editing /etc/ldap/slapd.conf and adding:

access to attrs=loginShell,shadowLastChange,gecos
by dn="cn=admin,dc=ldap,dc=example,dc=org" write
by self write
by * read
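With slapd restarted, a quick anonymous search is a good sanity check that the directory is answering (base DN per the setup above; yours will differ):

# /etc/init.d/slapd restart
$ ldapsearch -x -b 'dc=ldap,dc=example,dc=org' '(objectClass=*)'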

CLIENT SIDE

# apt-get install ldap-utils libpam-ldap libnss-ldap nscd

LDAP Server host: 1.2.3.4
The distinguished name of the search base: dc=ldap,dc=example,dc=org
LDAP version to use: 3
Database requires login? no
Make configuration readable/writeable by owner only? yes

The distinguished name of the search base: dc=ldap,dc=example,dc=org
Make local root Database admin: yes
Database requires logging in: no
Root login account: cn=admin,dc=ldap,dc=example,dc=org
Root login password: <enter LDAP admin password here>
Local crypt to use when changing passwords: md5

In /etc/nsswitch.conf:

passwd: ldap files
group: ldap files
shadow: ldap files

In /etc/ldap/ldap.conf:

BASE dc=ldap,dc=example,dc=org
URI ldap://1.2.3.4 # your ldap server IP here

Followed by a /etc/init.d/nscd restart.

PAM

# apt-get install libpam-passwdqc

Debian has a series of files in /etc/pam.d whose names start with common-, which are included by the other files in that directory for specific services. We can tell PAM to use LDAP for all of these services by modifying these common files. In /etc/pam.d/common-password, comment out and replace:

password required pam_unix.so nullok obscure min=4 max=8 md5

or:

password required pam_cracklib.so retry=3 minlen=6 difok=3
password required pam_unix.so use_authtok nullok md5

with:

# try password files first, then ldap. enforce use of very strong passwords.
password required pam_passwdqc.so min=disabled,16,12,8,6 max=256
password sufficient pam_unix.so use_authtok md5
password sufficient pam_ldap.so use_first_pass use_authtok md5
password required pam_deny.so

Read the pam_passwdqc man page for more about the parameters you can give it. In /etc/pam.d/common-auth, comment out:

auth required pam_unix.so nullok_secure

replace with:

# try password file first, then ldap
auth sufficient pam_unix.so
auth sufficient pam_ldap.so use_first_pass
auth required pam_deny.so

In /etc/pam.d/common-account, comment out:

account required pam_unix.so

replace with:

# try password file first, then ldap
account sufficient pam_unix.so
account sufficient pam_ldap.so
account required pam_deny.so

And don’t forget to edit /etc/libnss-ldap.conf (which, by the way, on other systems is called /etc/ldap.conf) ! That would have saved me an entire afternoon… REFERENCES


16 Oct 2006 (updated 16 Oct 2006 at 15:28 UTC) »
HP MSA1000 Storage Under Linux

These are notes on some experiments setting up hardware RAID on the MSA1000 and accessing the storage space under Linux. This MSA1000 holds five 146.8GB hard drives. We'll attempt to configure a LUN with a RAID5 disk set comprising four drives plus a spare. Detailed information on RAID level 5 can be found at: http://en.wikipedia.org/wiki/Redundant_array_of_independent_disks#RAID_5

At first, no units are configured on the MSA1000. Accessing the CLI as outlined in a previous post, we can take a look at our disk set:

CLI> show disks
Disk List: (box,bay) (bus,ID)     Size     Units
 Disk101     (1,01)    (0,00)    146.8GB    none
 Disk102     (1,02)    (0,01)    146.8GB    none
 Disk103     (1,03)    (0,02)    146.8GB    none
 Disk104     (1,04)    (0,03)    146.8GB    none
 Disk105     (1,05)    (0,04)    146.8GB    none

Using the add unit command, we create the aforementioned unit using all four disks plus a spare:

CLI> ADD UNIT 0 DATA="Disk101-Disk104" SPARE="Disk105" RAID_LEVEL=5

Now we have a unit:

CLI> show units

Unit 0:
In PDLA mode, Unit 0 is Lun 1; In VSA mode, Unit 0 is Lun 0.
Unit Identifier   : 
Device Identifier : 600805F3-001828E0-00000000-68460002
Cache Status      : Enabled
Max Boot Partition: Enabled
Volume Status     : VOLUME OK
Parity Init Status: 10% complete
4 Data Disk(s) used by lun 0:
   Disk101: Box 1, Bay 01, (SCSI bus 0, SCSI id  0)
   Disk102: Box 1, Bay 02, (SCSI bus 0, SCSI id  1)
   Disk103: Box 1, Bay 03, (SCSI bus 0, SCSI id  2)
   Disk104: Box 1, Bay 04, (SCSI bus 0, SCSI id  3)
Spare Disk(s) used by lun 0:
   Disk105: Box 1, Bay 05, (SCSI bus 0, SCSI id  4)
Logical Volume Raid Level: DISTRIBUTED PARITY FAULT TOLERANCE (Raid 5)
                           stripe_size=16kB
Logical Volume Capacity : 420,035MB

When initially powered on, the MSA1000 will detect host connections and assign them the default profile of DEFAULT. This profile must be changed to Linux using the ADD CONNECTION command:

CLI> ADD CONNECTION RX1600-1 WWPN=210000E0-8B004E53 PROFILE=LINUX
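To double-check that the profile took, the CLI can list the known connections (hedged: I believe the command is available on this firmware, and the exact output varies):

CLI> SHOW CONNECTIONS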

If all works out well, upon reboot the Linux hosts connected to the MSA1000 will see the disk array as a single /dev/sda device, just like a regular SCSI disk. This device can then be partitioned or otherwise mangled at will. In our case, we'll be deploying a Linux LVM solution on top of it, probably using different filesystems for different logical volumes.
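A quick way to confirm the host actually sees the LUN (output will vary):

$ cat /proc/scsi/scsi		(the array's LUN should be listed here)
# fdisk -l /dev/sda		(and the reported capacity should match the unit)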


16 Oct 2006 (updated 8 Nov 2006 at 14:52 UTC) »
Exploring Linux LVM: Part 1

Part of the challenge I've outlined in the previous post is figuring out how to share the MSA1000 disk array between the two servers. Once that's figured out (part of it was solved by activating the fibre channel driver in the kernel), the idea is to use the Linux LVM (Logical Volume Manager) to manage the actual available storage space on top of the MSA1000 hardware RAID. Personal notes and scribblings on the matter follow.

The Linux Logical Volume Manager

Logical volume management provides benefits in the areas of disk management and scalability. It is not intended to provide fault-tolerance or extraordinary performance; for this reason, it is often run in conjunction with RAID, which can provide both. Logical volume management provides a higher-level view of the disk storage on a computer system than the traditional view of disks and partitions, giving the system administrator much more flexibility in allocating storage to applications and users.

User groups can be allocated to volume groups and logical volumes, and these can be grown as required. It is possible for the system administrator to "hold back" disk storage until it is required; it can then be added to the volume (user) group that has the most pressing need. When new drives are added to the system, it is no longer necessary to move users' files around to make the best use of the new storage: simply add the new disk into an existing volume group or groups and extend the logical volumes as necessary.

In this particular situation, the idea is to use the MSA1000 hardware RAID for fault-tolerance and reliability, with Linux LVM on top of it for creating flexible volumes.
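Tying the two layers together, a minimal sketch of where this is headed; the device and names are assumptions, with the MSA1000 LUN expected to appear as /dev/sda per the previous post:

# pvcreate /dev/sda1			(after creating one big partition on the array)
# vgcreate msa_vg /dev/sda1
# lvcreate -L10G -nhomes msa_vg		(a 10GB logical volume for home directories)
# mke2fs -j /dev/msa_vg/homes		(ext3 on the new volume)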

A sample LVM topology

Some usual LVM tasks for managing disk space:

Initializing a disk or disk partition:

# pvcreate /dev/hda 			(for a disk)
# pvcreate /dev/hda1			(for a partition)

Creating a volume group:

# vgcreate my_volume_group /dev/hda1 /dev/hdb1

This would create a volume group comprising both the hda1 and hdb1 partitions.

Activating a volume group:

# vgchange -a y my_volume_group

This is needed after rebooting the system or after running vgchange -a n.

Removing a volume group:

# vgchange -a n my_volume_group		(deactivate)
# vgremove my_volume_group			(remove)

Adding physical volumes to a volume group:

# vgextend my_volume_group /dev/hdc1
                                    ^^^^^^^^^ new physical volume

Removing physical volumes from a volume group:

# vgreduce my_volume_group /dev/hda1

The volume to remove shouldn't be in use by any logical volume; check this using the pvdisplay <device> command.

Creating a logical volume:

# lvcreate -L1500 -ntestlv testvg

This creates a new 1500MB linear LV and its block device special file, /dev/testvg/testlv.

# lvcreate -L1500 -ntestlv testvg /dev/sdg

The same, but in this case specifying which physical volume in the volume group to use.

# lvcreate -i2 -I4 -l100 -nanothertestlv testvg

This creates a 100 LE large logical volume with 2 stripes and a stripe size of 4 KB.

Removing a logical volume:

The logical volume must be closed before it can be removed:

# umount /dev/myvg/homevol
# lvremove /dev/myvg/homevol

Extending and reducing a logical volume:

Detailed instructions on how to accomplish this for different underlying filesystems can be found here:

http://tldp.org/HOWTO/LVM-HOWTO/extendlv.html
http://tldp.org/HOWTO/LVM-HOWTO/reducelv.html

A short sketch of the extend case follows at the end of this post. In a "normal" production system it is recommended that only one PV exist on a single real disk; the reasons for this are outlined at: http://tldp.org/HOWTO/LVM-HOWTO/multpartitions.html

Some useful external LVM resources:

http://tldp.org/HOWTO/LVM-HOWTO/
http://www.linuxdevcenter.com/pub/a/linux/2006/04/27/managing-disk-space-with-lvm.html
http://www.gweep.net/~sfoskett/linux/lvmlinux.html
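As promised, a hedged sketch of the extend case for an ext3 filesystem, done offline for safety; the volume names and mount point are the example ones from above:

# umount /dev/testvg/testlv
# lvextend -L+1G /dev/testvg/testlv	(grow the LV by 1GB)
# e2fsck -f /dev/testvg/testlv		(check the filesystem before resizing)
# resize2fs /dev/testvg/testlv		(grow ext3 to fill the new space)
# mount /dev/testvg/testlv /mnt		(remount; mount point assumed)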


12 Oct 2006 (updated 12 Oct 2006 at 10:33 UTC) »
Linux on the HP DL380 G4 and MSA1000

Lately, in what should be my part-time occupation for the next few months, I've been setting up a couple of HP ProLiant DL380 G4 servers in addition to an HP MSA1000 fibre channel disk array. The idea in this case is to have both servers (henceforth the DL380s) working independently while sharing the storage space provided by the disk array (henceforth the MSA1000), hopefully with some sort of load balancing going on between the two. Despite some limited experience using and configuring Linux systems in the past few years, this comes as a new and refreshing challenge for me, considering these are enterprise-class servers, something I've never had a chance to deploy from the ground up and maintain.

DL380 G4s and the MSA1000

The next few posts are intended to provide a first-hand account of the path I'll be walking during the setup of these systems, which will hopefully be useful both for me later on and for whoever comes stumbling across this page looking for information on how to set up these or similar systems.

Compiling a new kernel

In order to better understand and get acquainted with the servers, I've decided to go for a test run with a Debian-based Linux distribution called Alinex, which is developed here at the University of Evora. Later on, when most configuration stages are figured out, this will become a regular Debian installation instead of this slightly different flavour. Because the kernel that ships with Alinex is not SMP-enabled, a new kernel is needed to take advantage of the two Intel Xeon 3.8GHz processors inside each server. There's also the need to support the fibre channel adapter, as well as the Gigabit Ethernet adapters, etc. Fortunately, most distros try to set as many kernel options as possible to build as modules, so using the distro's .config file is a good idea. Later on, the goal will be a thin, statically compiled kernel.

The only exceptions, then, were support for the fibre channel driver and SMP. The former must have generic FC support enabled under Network Device Support, and the qla2xxx driver should be configured to compile as a module (it didn't seem to work built into the kernel, as it wouldn't recognize the firmware upon boot) under SCSI Device Support and SCSI low-level drivers. This driver needs to have its firmware image placed in /usr/lib/hotplug/firmware so it gets found and used by the adapter at boot time. This image (and others for similar QLogic adapters) can be found at: ftp://ftp.qlogic.com/outgoing/linux/firmware. In this case, the correct firmware image for the qla2312 adapter is ql2300_fw.bin. This information can be found in the help page of the driver in the kernel configuration:

21xx              ql2100_fw.bin 
22xx              ql2200_fw.bin
2300, 2312, 6312  ql2300_fw.bin 
2322, 6322        ql2322_fw.bin
24xx              ql2400_fw.bin 
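Dropping the image into place is then just a copy into the hotplug path mentioned above:

# cp ql2300_fw.bin /usr/lib/hotplug/firmware/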

Configuring the MSA1000 disk array

Once the DL380s are up and running, attention turns to the MSA1000 disk array, which needs to be set up. The easiest way to do that seems to be the old-fashioned serial port access method: connecting to the MSA1000 command line interface (CLI) facility. In this case, HP provides a serial-to-RJ45Z cable, which can seem weird at first because it won't fit in a regular ethernet port. This should be connected to the front of the MSA1000 controller, while the serial end should obviously be connected to the host; here, I'll be using the DL380 itself to configure the disk array. Communication can be achieved with any terminal emulator, for instance HyperTerminal under Windows or minicom under Linux. Both have worked for me, although minicom has a minor quirk in its default configuration which kept me from accessing the CLI at all. Also, instead of the usual 9600 baud rate, this one runs at 19200. So, minicom should be configured with the following parameters:

Serial Device: /dev/ttyS0 (or whatever the serial port used happens to be)
Bps/Par/Bits: 19200 8N1
Hardware Flow Control: No (important! default is Yes)
Software Flow Control: No
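For reference, these settings live in minicom's setup menu, reached with:

$ minicom -s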

Also, the kernel needs to support the serial port for this to work. Once minicom is set up this way, hitting Enter after it opens will drop you into the CLI shell:

CLI>

The CLI has extensive help facilities: every command comes with a verbose explanation, available by simply using the help command. There's also extensive documentation from HP on the MSA1000, in particular the HP StorageWorks 1000/1500 Modular Smart Array Command Line Interface User Guide.


Catch Up

It’s been a while (again) since my last post so I guess some catching up is in order.

First and foremost, these increasingly longer absences keep me thinking about closing down this place. I had a rationale posted on the first day for why I wanted to have such a blog and for a time it worked out. Right now, despite a lot of things going on in my life, I hardly have anything important to share with my meager audience. I’ve always believed that a non-existent blog is a better thing in the so-called “cyberspace” than one where its author has nothing important to say, so don’t be surprised (not that you would, right?) if this one ceases to exist shortly. Not much entropy getting lost, I guess.

Anyway, the web app I've been somewhat talking about for the past few months is still under development. Needless to say, not nearly enough time has been spent on it; not as much as I would have liked to give it, and definitely not as much as it needs. Still, it's usable, it's real, and it already has the potential to make people's lives (a bit) easier. That's more than many can claim.

However, we still need to take that final step, which is obviously pushing it out the door for the world at large to pick up at will. That's what we've been reluctantly focusing on lately. We started using 37signals' excellent project management web app, Basecamp. To quote Jim Coudal during his keynote talk at the latest SXSW, Basecamp (and the other 37signals apps) take the bullshit out of communication. We've been experiencing this first-hand, as Basecamp truly takes a uniquely simplified view of project management and developer collaboration. Everything revolves around three simple concepts: messages, to-dos and milestones. Everything else is just treading water, really, so forget about functional specifications, Gantt charts and all that mess. My chances of becoming a 37signals Getting Real evangelist have just increased tenfold.

We’ve also been using Campfire for real-time chat and that’s been working out too. It’s going to become important from this week on as we will be physically distant for a good three months and the project cannot stop now, of all times. Coupled with Writeboard, Campfire has everything we need to communicate effectively during project development.

So where do we stand right now? As I said before, we're pushing for a public release soon. We've set ourselves a July 21st deadline within Basecamp, and I wonder how realistic that is. We'll try, but considering how novice we both are, I seriously doubt it. We've already cut a lot of features we'd like to have up front, but there's a need to realise that "release early, release often" must leap from theory into practice. It was true for open source apps, but I believe it's truer than ever when it comes to web apps nowadays.

Personally, after a long and crappy semester at university, there’s no need or reason to deny that I’m very tired, kind of burnt out actually, and in need of something completely different from what I’ve been doing for the past few months/years. Hopefully next year will be my last academic year (well, it must, as I’ve been doing this for far too long) and then I can look forward to the rest of my life from a very different angle. I don’t want to make these last five or six years sound like crap, because they weren’t, but I guess it’s finally taking its toll. I’ve never been an easily likable person, I’ve always hated going with the flow and making peace both with God and Devil at the same time. But lately it’s been a downward spiral of small, mundane, average, day-to-day conflicts with people that probably don’t deserve it and, most of all, conflicts inside my head. I’m just dog tired of it all and for every place where my guilt is written, here’s an apology.

Well, I guess this is just the net result of these past few months of controlled insanity, of constant restarts. That’s it. Tired of constantly restarting every damned day.

There’s two exams to go and then I can forget about school for a while, hopefully (and that’s a long shot, for a thousand other reasons I could get into but won’t) recharging my batteries for the last decisive year.


