Monday, November 27, 2006

scans in a fog

I have a ScanJet 6300C at home, and for quite a while now the glass has had a thin film or fog on its inner surface. My wife was scanning pictures in and complained that our little 'all-in-one' scanner/printer/copier scans much better than the 6300. True enough, the scans on the 6300C have a fog or haze to them (like when you have the contrast too high and the colors get less defined).

Well, I'd been averse to actually doing anything about it, since the manual and HP's online documentation state you cannot clean the inside of the glass on the 6000 series. However, after a few web searches I found some third-party evidence that you can open it up safely and clean the innards.

Sure enough, it was _extremely_ easy. Just pop the scan lid off (the manual shows you how, so you can attach the page feeder), then pop off the two screw covers on the front end of the glass top portion of the scanner. After that, take a Torx T-10 driver and undo the 4 screws holding the lid on.

With the lid off, it's obvious you do not want to remove the glass since it's packed tightly with spacers that look like they would break easily if you tried. However, since it's just a hunk of plastic and glass, it's fairly easy to clean in place.

Cleaning it was a matter of common sense:

I discovered there was a thin sheen of oil residue on the bottom side of the glass. My guess is that the manufacturing oil/grease used on the head, plastic belt, and gears evaporates from the heat of the lamp and gets plastered on the glass. This happens slowly over time, but will always affect picture quality.

After several thorough cleanings with some photo lens cleaner and cloth, both my wife and I could not see any residue or streaks on either side of the glass when held up to the light from several angles.

I then used some mini-vac attachments to suck up a very tiny amount of dust that had collected on the bottom (the chassis on this model is probably not as air tight as it could be) and then I used compressed air to blow off the mirrors and light. I then popped the scan glass back on, re-tightened the screws and replaced the scan lid.

Some test scans showed a world of difference in quality. I'll probably keep it on a cleaning schedule now, every three months.

Friday, September 22, 2006

peril vs perl

[14:21] do you have your peril sensitive sunglasses?
[14:21] (and your towel?)
[14:26] damn. I only brought my perl sensitive sunglasses.

Tuesday, August 15, 2006

amazingly difficult to find an answer to 'getting a MAC address from an interface?'

I was trying to port a piece of network software to FreeBSD. Unfortunately, even though the author claims it
"should compile on any POSIX like OS," it doesn't (POSIX != LINUX). That should be obvious when they drop things like:

if (ioctl (sock, SIOCGIFADDR, &ifr) < 0)

throughout the code.

So the above is from a section of code trying to obtain the MAC address of an interface. On FreeBSD this can
be done through ioctl(), but that approach is discouraged by the developers. They encourage you to use getifaddrs().

This strange and wonderful function grabs all sorts of information from the interface list; however, a couple of things
are not obvious about its use. If you really want to play around with the AF_LINK type addresses on the interfaces returned,
you need to be using sockaddr_dl, not just sockaddr. Overloading sockaddr with magic casting is the new pink these days, and
that's what's required. It confused me at first since the manpage has:

struct ifaddrs *ifa_next; /* Pointer to next struct */
char *ifa_name; /* Interface name */
u_int ifa_flags; /* Interface flags */
struct sockaddr *ifa_addr; /* Interface address */
struct sockaddr *ifa_netmask; /* Interface netmask */
struct sockaddr *ifa_broadaddr; /* Interface broadcast address */
struct sockaddr *ifa_dstaddr; /* P2P interface destination */
void *ifa_data; /* Address specific data */

for the structure... same with /usr/include/ifaddrs.h. It turns out copying sa_len bytes of sa_data from your friendly
struct sockaddr does not give you the Ethernet address from ifa_addr. :( :( :( :(.

To make a long story short:

/* getmac.c -- retrieve the MAC address from an interface name */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <net/if_dl.h>
#include <ifaddrs.h>

int main(int argc, char **argv)
{
    struct ifaddrs *ifaphead;
    struct ifaddrs *ifap;
    struct sockaddr_dl *sdl = NULL;
    unsigned char *if_mac = NULL;
    int found = 0;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <interface name>\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    if (getifaddrs(&ifaphead) != 0) {
        perror("getifaddrs() failed");
        exit(EXIT_FAILURE);
    }

    /* walk the list looking for the AF_LINK entry matching our name */
    for (ifap = ifaphead; ifap && !found; ifap = ifap->ifa_next) {
        if (ifap->ifa_addr->sa_family == AF_LINK &&
            strcmp(ifap->ifa_name, argv[1]) == 0) {
            found = 1;
            sdl = (struct sockaddr_dl *)ifap->ifa_addr;
            if (sdl && sdl->sdl_alen > 0) {
                /* I was returning this from a function before converting
                 * this snippet, which is why I make a copy here on the heap */
                if_mac = malloc(sdl->sdl_alen);
                memcpy(if_mac, LLADDR(sdl), sdl->sdl_alen);
            }
        }
    }
    freeifaddrs(ifaphead);

    if (!found || if_mac == NULL) {
        fprintf(stderr, "Can't find interface %s.\n", argv[1]);
        exit(EXIT_FAILURE);
    }

    fprintf(stdout, "%02X%02X%02X%02X%02X%02X\n",
            if_mac[0], if_mac[1], if_mac[2],
            if_mac[3], if_mac[4], if_mac[5]);
    free(if_mac);
    return 0;
}



Wednesday, August 9, 2006

command line new tabs in konqueror

The following is a simple shell script using the KDE dcop client to open new tabs in Konqueror (or a new instance of Konqueror if it's not running). I use this from pine's url-viewers setting to launch new tabs with a URL contained in an email.


#!/bin/sh
KONQ_WIN=`dcop konq* | head -1`

if [ "$KONQ_WIN" = "" ]; then
    /usr/local/bin/konqueror $1 &
else
    NEW_TAB=`dcop $KONQ_WIN konqueror-mainwindow#1 action newtab`
    dcop $NEW_TAB activate
    dcop $KONQ_WIN konqueror-mainwindow#1 openURL $1
fi
exit 0
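To wire it into pine, the url-viewers variable in .pinerc points at wherever you saved the script (the path and script name below are just my hypothetical example):

```shell
# .pinerc -- hand URLs off to the Konqueror tab script
url-viewers=/usr/local/bin/konq-newtab.sh
```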

Monday, July 10, 2006

fun with MSI, ACPI, and freebsd.

For the last two weeks I had been working on and off on a problem related to ACPI and some 1U servers using MSI-9618 motherboards. After booting FreeBSD, I couldn't get the second NIC to work at all; it turns out, after looking at 'vmstat -i', that em0 and em1 were sharing an interrupt:

# vmstat -i
interrupt                          total       rate
irq3: sio1                      52807402         55
irq4: sio0                          1604          0
irq14: ata0                           36          0
irq16: em0 em1+                  2527378          2
irq19: uhci1+                      86991          0
cpu0: timer                   1886077214       2000
cpu1: timer                   1886076861       2000
Total                         3827577486       4058

From past experience I knew this was from poor resource assignment by the BIOS when ACPI isn't enabled, so I built the acpi.ko module and installed it. These are Webware 1185s from Pogo Linux; the chassis manual that comes with them labels them as P1-103 series with a part number of MS-9218 1U rackmount.
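For reference, enabling the module at boot is just the stock FreeBSD loader tunable, nothing specific to these boards:

```shell
# /boot/loader.conf -- load the ACPI module at boot
acpi_load="YES"
```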

Then when I booted, right after it should have been switching to multi-user, it appeared to hang. I enabled ALT_BREAK_TO_DEBUGGER in the kernel and tried again, except the key sequence to break to the debugger did nothing. I posted about the hang on the freebsd-acpi list with no response. Then, out of hand, I tried sshing to the box... it worked! So it wasn't really hung; just the serial console was unresponsive.

Now, looking closer at the dmesg, I realized the resources for sio0 and sio1 were wrong! sio0 was getting 0x2F8 and sio1 was getting 0x3F8 when it should be the other way around. I rechecked the BIOS settings; everything was OK. I downloaded a new BIOS from the MSI Taiwan site that stated the fix was for 'redhat 4' installs. That didn't work.

I posted to freebsd-acpi again with what I knew. This time I got a reply and after some back and forth, I generated a new .asl file that would probe sio0 and sio1 in the correct order! Now all my interrupts are assigned correctly:
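For anyone with similar hardware, the override uses FreeBSD's standard DSDT replacement mechanism; roughly, the workflow looks like this (a sketch, not a recipe; the .asl file name is my own):

```shell
# Dump and disassemble the BIOS ACPI tables to ASL
acpidump -dt > my.asl
# ...edit my.asl (in my case, the sio resource ordering), then recompile:
iasl my.asl                     # writes DSDT.aml
cp DSDT.aml /boot/DSDT.aml
# Point the loader at the override table:
echo 'acpi_dsdt_load="YES"' >> /boot/loader.conf
echo 'acpi_dsdt_name="/boot/DSDT.aml"' >> /boot/loader.conf
```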

[root@pogo-1 ~]# vmstat -i
interrupt                          total       rate
irq1: atkbd0                           6          0
irq3: sio1                      13587145         55
irq4: sio0                          1243          0
irq14: ata0                           36          0
irq16: em0 uhci3                    3237          0
irq17: em1                            21          0
irq19: uhci1+                      66550          0
cpu0: timer                    485520843       2000
cpu1: timer                    485520575       2000
Total                          984699656       4056

And everything seems to be working great, although judging from the resulting discussion on freebsd-acpi, the problem seems to be more common than imagined!

Thursday, April 20, 2006

http compression

One of the tools I use regularly at work is wget. For my compression work I needed a tool that would recursively spider a web site with all the usual features, but would also support compressed transfers. At the time nothing out there supported this, so a year or so ago I finished a patch to wget adding it.

You can check out the latest version of wget using the subversion repository outlined here. Apply this patch from the top directory, and as long as your OS has zlib support you should be able to use the '-z' switch this patch adds to request compressed files from a webserver.


Here's an example of how a 39K download turns into a 10K download with compression, and downloads in half the time.

$ ./wget
Connecting to ... connected.
HTTP request sent, awaiting response... 200 OK
Length: 39599 (39K) [text/html]
Saving to: `index.html'

100%[=======================================>] 39,599 --.-K/s in 0.005s

13:22:32 (8.39 MB/s) - `index.html' saved [39599/39599]

$ ./wget -z
Connecting to ... connected.
HTTP request sent, awaiting response... 200 OK
Length: 10546 (10K) [text/html]
Saving to: `index.html.1'

100%[=======================================>] 10,546 --.-K/s in 0.002s

13:22:39 (5.62 MB/s) - `index.html.1' saved [10546/10546]

$ diff index.html index.html.1
$ echo $?
0
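The bandwidth win is just gzip doing its job on markup; a quick local sketch (file names arbitrary) shows a similar ratio without any network involved:

```shell
# Build a repetitive HTML file, then gzip a copy and compare sizes --
# HTML full of repeated tags compresses extremely well.
i=0
while [ "$i" -lt 1000 ]; do
    echo "<p>Lorem ipsum dolor sit amet, consectetur adipiscing elit.</p>"
    i=$((i + 1))
done > sample.html
gzip -c sample.html > sample.html.gz
ls -l sample.html sample.html.gz
```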

Wednesday, April 19, 2006


In my job, I deal with a lot of TCP traffic, especially HTTP, and IPv6.

For awhile I've been using both bozohttpd and Apache.
I used bozohttpd because it's simple, lightweight, command-line driven, and easy to hack on. The big win for bozohttpd was that you could drop it into inetd and let inetd take care of the IPv6 compliance side. However, bozohttpd is lacking several useful features and in many cases is missing some standards compliance -- so in those cases I used Apache. Everyone here tests with Apache, but I absolutely despise Apache's convoluted "do everything" configuration and setup. It can take me hours to remember, research, and set up even simple changes (especially if it requires a missing module!). Compiling Apache can be a royal PITA... Basically, it's too flexible.
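For reference, the inetd trick amounts to one line in /etc/inetd.conf; the install path, user, and web root below are assumptions from a typical setup, so check bozohttpd(8) for your system:

```shell
# /etc/inetd.conf -- inetd owns the (v6) listening socket, bozohttpd speaks HTTP
http  stream  tcp6  nowait:600  _httpd  /usr/local/libexec/bozohttpd  bozohttpd /var/www
```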

Recently I've taken a liking to lighttpd. It's very fast, easily configurable, and restricted enough in its feature set to allow easy module configuration. It has only one problem: you could use it for IPv6 or for IPv4, but not both at once. A common mistake, really; people never take the time and effort to use the sockaddr_storage structure and properly write a dual-stack server. They try to 'hack' their IPv4 server into v6 with #ifdefs, etc. Bad bad bad. I digress.

Adjusting lighttpd to work on v4 and v6 in the same process was easy. Easy, that is, if you're using FreeBSD:

sysctl net.inet6.ip6.v6only=0

Then set up lighttpd to serve v6 addressing, and you're set. This basically enables IPv4-mapped IPv6 addresses (::ffff:a.b.c.d), so all their #ifdef'd IPv6-only code still chomps on the numbers just fine, and listening on :: still gets you v4 traffic.
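To make it stick, the sysctl goes in /etc/sysctl.conf, and the lighttpd side is one switch (option name per lighttpd 1.4; treat both lines as a sketch for your own config):

```shell
# /etc/sysctl.conf -- reapply the dual-stack setting at boot
net.inet6.ip6.v6only=0

# lighttpd.conf -- bind to :: so one process serves both stacks
server.use-ipv6 = "enable"
```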