The Smell of Molten Projects in the Morning

Ed Nisley's Blog: Shop notes, electronics, firmware, machinery, 3D printing, laser cuttery, and curiosities. Contents: 100% human thinking, 0% AI slop.

Category: PC Tweakage

Remembering which tweaks worked

  • Adobe Flash in Kubuntu Hardy x86_64

    Flash is one of those things that comes heartbreakingly close to actually working, particularly in a 64-bit Linux OS. There’s an intricate scaffolding that plugs the 32-bit Flash code into the 64-bit Firefox superstructure, but …

    Worse, there’s an error in the Hardy Flash 10 repository entry that points to a nonexistent Macromedia download website, so all the usual hints & tips don’t work. That seems to have broken in December, and the automatic installation fails quietly. Running sudo apt-get install flashplugin-nonfree reveals the problem.

    Unfortunately, the sum total of all my fiddling was a Flash installation that sorta-kinda worked, with a significantly flaky crust.

    What works slightly better: force the version to Flash 9 by pinning the flashplugin-nonfree package version in apt-get. Start by removing everything related to Flash. Then, with a clean slate, this post on the Ubuntu forums shows how to get a clean installation:

    1.) Create /etc/apt/preferences with this entry:

    Explanation: Flash plugin from hardy-backports was broken on 12/24/2008;
    Explanation: pinning to hardy-updates for now until it is fixed
    Package: flashplugin-nonfree
    Pin: release a=hardy-updates
    Pin-Priority: 980

    2.) sudo apt-get update

    3.) sudo apt-get install flashplugin-nonfree

    4.) Restart Firefox

    It still gives me a blank gray background where the Flash should appear, which is sometimes cured by reloading the page, but it seems more stable.

    Adobe has a genuine 64-bit Flash alpha out, but (as of the 16 December 2008 version) it reliably crashes on sites I really care about. Like, for example, nytimes.com.

    Memo to self: un-pin this in a few months and see what happens.

    Update: It still jams up with gray screens where Flash should be, no matter what I do.

  • Network Hard Drives: Why Not

    The question came up as to whether an external hard drive with a network interface was a Good Thing for backups and suchlike.

    For humongous drives, a 100 Mbit/s network tap is painfully slow. In round numbers, on a good day you’ll get 80 Mbit/s throughput; call it 10 MBytes/s.

    Transferring 1 TB at 10 MB/s requires a bit over a day: 28-ish hours. Streaming media will work fine, but filling up the drive in the first place will be tedious.
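    The arithmetic sketches out in shell, decimal units throughout; the 10 MB/s figure is the round number above:

```shell
# Filling a 1 TB drive over 100 Mbit/s Ethernet, in round numbers
MB_TOTAL=1000000              # 1 TB = 1,000,000 MB (decimal)
RATE=10                       # ~80 Mbit/s of real throughput = 10 MB/s
SECS=$(( MB_TOTAL / RATE ))   # 100,000 seconds
HOURS=$(( SECS / 3600 ))      # integer math floors to 27: "28-ish" hours
echo "about ${HOURS}-28 hours"
```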

    I was reminded of this the hard way when I had to do a full-drive backup to the file server in the basement. Seemed to take forever, but when I ran the numbers it was ticking along just about as fast as it could possibly go…

    A USB local drive is better: 40 MB/s, more or less, if the software stack can keep up with it. I eventually pulled the drive, popped it on a USB-IDE adapter, jacked it into the server, and got it done that way.

    Now, if you have gigabit Ethernet everywhere, things might be faster, but the limiting factor then becomes the drive’s sustained rate, which is probably a tad over 100 MB/s if you’re transferring large files.

    Fancy eSATA drives have a higher burst rate, but the bits just don’t come off the platters all that much faster.

    I’d be astounded if a consumer-grade network drive came anywhere close to those numbers. I have an IOmega 500 GB drive that’s an absolute piece of crap…

    Feed the obvious keywords into Wikipedia and get all the numbers.

  • Kubuntu Remote Desktop via SSH Tunnel

    In the process of setting up a new PC for my mother, I finally figured out how to get remote desktop sharing in Kubuntu Hardy working. You’d think the bog-standard (and default) krfb would work, but it crashed every time. Come to find out, after much searching, the solution boils down to this…

    Shoot krfb in the head, use vnc4server and x11vnc.

    Use synaptic or apt-get to install those and all their dependencies on the remote machine (i.e., the one that will become my mother’s PC). While you’re at it, uninstall krfb: good riddance.

    Run vncpasswd and feed in an appropriate password that you’ll use to authorize yourself to the vnc session.

    Log out, restart X (with Ctrl-Alt-Backspace, perhaps), log back in again to get all the X11 infrastructure up to speed.

    On your local machine (i.e., mine), use SSH to sign in to the remote box:

    ssh -p unusual-port -L 5900:localhost:5900 remote.PC.ip.addr

    The -L creates a tunnel from your local machine’s port 5900 to the remote machine’s port 5900, through the authorized SSH session.

    I use an unusual port because running SSH on port 22 on an internet-facing machine (even behind a firewall router) is just plain dumb. I doubt the unusual port provides much protection, but it should shake off a few script kiddies.
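    To save retyping that incantation every time, the tunnel can live in ~/.ssh/config on the local machine; the Host alias and port number here are made up:

```text
# ~/.ssh/config -- alias and port number are made up
Host moms-pc
    HostName remote.PC.ip.addr
    Port 22222                        # the "unusual port"
    LocalForward 5900 localhost:5900  # same tunnel as -L above
```

    After that, a plain ssh moms-pc brings up the tunnel.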

    [Update: Just in case you regard shared-key authorization and a nonstandard port as evidence of clinical paranoia, read that. One of the comments notes that using a nonstandard port gets rid of all the low-speed zombies…]

    Incidentally, the firewall router must forward the unusual port directly to the PC’s local IP address, which requires a bit of tweakage all by itself; that depends on which router you have. Word to the wise: do not use DHCP to get the PC’s IP address. Think about it.

    That PC is also set up with my RSA keys, so that the kiddies can’t brute-force a username / password login attack. And, yes, I regenerated the keys after the Debian goof.

    This is still on my LAN, so I use a dotted quad IP address (being too lazy to tweak /etc/hosts for a temporary machine), but you can use the host name maintained by DynDNS or their ilk for a truly remote box. See this post for the straight dope on making that work.

    Then fire up a remote-desktop client like, for example, krdc on your local PC, with the “remote desktop” address aimed at:

    localhost:5900

    That’s the local end of the SSH tunnel to the remote PC. It won’t work if you aim it at the remote machine’s IP address, because it’s not watching for incoming connections (nor is the router forwarding them).

    Type in the password and shazam you should see whatever’s appearing on the remote desktop. Mouse & keyboard control should work just fine, too. Word to the wise: make sure your local monitor is bigger than the remote monitor; while you can scroll around or scale what you see, that’s icky.

    It should be obvious that you cannot “switch users” to a different X console on the remote box and expect it to work. I tried it, just for grins, and it doesn’t. You could probably tunnel another session in through port 5901 (5900 plus the X display number), but I haven’t tried that.

    Last year I set my mother up with Verizon’s cheapest DSL service: 768 kb/s down and 16 kb/s up, all for a whopping 15 bucks plus tax a month. Yes, that’s 16 kb/s: slower than old-school dial-up modems. Sheesh & similar remarks. So all this fancy remote-desktop GUI stuff won’t work for diddly with the PC in her apartment.

    SSH and the command line rule!

  • Syncing Zire 71 in Kubuntu Hardy

    I have a somewhat antique Palm Zire 71 that has, periodically, synced perfectly with various flavors of GNU/Linux. On the other hand, sometimes a new release / kernel / version prevents it from syncing at all.

    My life is simple enough that I really don’t need to actively sync it with an online calendar, which is a damn good thing. Back when I needed to do hotsyncing, it always came heartbreakingly close to working; apparently that’s still the case. Having to comb out a complete set of duplicate addressbook entries pretty much soured me on further experimentation.

    Currently, the Zire is on the outs with Ubuntu / Kubuntu Hardy. The hack that makes it work goes a little something like this:

    The file /etc/modprobe.d/libpisock9 blacklists the visor module, which allegedly lets all the pilot-* programs connect using libusb, but that flat-out doesn’t work for me.

    Replace this stanza inside /etc/udev/rules.d/60-symlinks.rules:

    #KERNEL=="ttyUSB*", ATTRS{product}=="Palm Handheld*|Handspring *|palmOne Handheld", \
    #                                       SYMLINK+="pilot"
    
    With this one:
    BUS=="usb", SYSFS{product}=="Palm Handheld*|Handspring *|palmOne Handheld", \
    KERNEL=="ttyUSB*", NAME="ttyUSB%n", SYMLINK+="pilot", GROUP="dialout", MODE="0666"

    Make sure you’re in the dialout group. If you’re not, add yourself, log out, then log back in again.

    I back the Zire up once a month, which is rarely enough that I just load the visor module by hand:

    sudo modprobe visor

    Create a directory for backing up into:

    cd ~/Zire71
    mkdir 2009-01-03

    And then backing up the Zire is easy enough. Pop the thing in the cradle, poke the hotsync button, and quick like a bunny whack Enter on this:

    pilot-xfer -p /dev/ttyUSB1 -b 2009-01-03/

    The ttyUSB1 device will, of course, vary depending on whether you have any other USB-serial gizmos plugged in at the time.
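    A small shell habit sidesteps mistyped directory names entirely: let date(1) generate them. The ~/Zire71 path matches the directory above; the pilot-xfer line stays commented out because it needs real hardware on the cradle:

```shell
# Name the month's backup directory from today's date
DIR=~/Zire71/$(date +%Y-%m-%d)
mkdir -p "$DIR"
echo "backing up into $DIR"
# pilot-xfer -p /dev/ttyUSB1 -b "$DIR"
```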

    Frankly, the utter unreliability and instability of this whole USB PDA mess is one of the reasons why, IMHO, GNU/Linux really isn’t “ready for the desktop” despite the fact that all our boxen here run it. I don’t particularly want a phone / camera / PDA / ebook reader / pocketwarmer, but I can see I’ll wind up with one some day just to get a USB interface that actually works.

    Memo to self: remember to modprobe visor

    Update: Xubuntu 8.10 fixed all that, so USB hotplugging seems to work right out of the box. Install pilot-link, then just:

    pilot-xfer -p usb: -b /path/to/backups

    Now, whether syncing to contacts & calendars works correctly, I cannot say.

  • Daily Yard Picture

    5 November 2008

    Being that sort of bear, I took a picture of the back yard from our patio every day at 7 am wall-clock time. DST/EST changeovers threw their usual monkey wrenches into the mix, not to mention my lack of attention to the camera’s internal clock settings, but I eventually got 321 pictures of the same scene at more or less the same time of day.

    That’s all well and good, but this is the movie age…

    The plan: use ffmpeg or maybe mencoder to convert the still images into a movie.

    • Zero: copy the files to a unique subdirectory to protect the originals!
    • One: sort & rename by date
    • Two: resize images
    • Three: convert to a movie
    • Four: . . . profit!
    10 November 2008

    I’d uploaded the files whenever I used the camera for something else, so the actual file dates were fairly well scrambled and didn’t correspond to the EXIF data inside the image file. Digikam’s batch file rename operation can sort out the files in ascending order of EXIF date and rename them into something a bit more uniform & boring like 0001.jpg, which is vital for ffmpeg.
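    If digikam isn’t handy, the renumbering can be faked in plain shell once the names already sort into date order; here’s a scratch-directory sketch with made-up file names:

```shell
# Renumber arbitrary names into the strictly sequential
# 0001.jpg form that ffmpeg wants (demo in a scratch directory)
mkdir -p /tmp/yardmovie && cd /tmp/yardmovie
rm -f ./*.jpg
touch img_0042.jpg img_0007.jpg img_0100.jpg   # the "originals"
i=1
for f in *.jpg; do            # the glob expands in sorted order
    new=$(printf '%04d.jpg' "$i")
    mv -- "$f" "$new"
    i=$(( i + 1 ))
done
ls *.jpg                      # 0001.jpg 0002.jpg 0003.jpg
```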

    I used the camera’s full resolution, which is much too large for video, so I created Yet Another Subdirectory called Smaller to hold the reduced-size images. Imagemagick’s convert program then squishes them down:

    for f in *.jpg ; do convert -verbose -resize 640x480 "$f" "Smaller/$f"; done

    You can smash them even further to get a teeny postage-stamp movie for your media player.

    Make the movie:

    ffmpeg -r 3 -i %04d.jpg daily-3.mp4

    The file specifier %04d must exactly match the filename sequence, and a missing file stops ffmpeg dead in its tracks. The names straight off your camera are highly unlikely to be exactly sequential over the course of a year, so renumber them first.
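    Scanning for holes before the long render saves grief; another scratch-directory sketch showing the failure mode:

```shell
# A gap in the numbering is exactly what stops ffmpeg, so scan
# for holes first (demo: 0003.jpg is deliberately missing)
mkdir -p /tmp/seqcheck && cd /tmp/seqcheck
rm -f ./*.jpg
touch 0001.jpg 0002.jpg 0004.jpg
last=4
i=1
while [ "$i" -le "$last" ]; do
    f=$(printf '%04d.jpg' "$i")
    [ -e "$f" ] || echo "missing: $f"
    i=$(( i + 1 ))
done
```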

    You can use mencoder:

    mencoder "mf://*.jpg" -mf fps=10 -o daily800.avi -ovc lavc -lavcopts vcodec=msmpeg4v2:vbitrate=800

    Then it’s showtime! I’d upload it, but you don’t have a need to know for our backyard activities.

    There, now, wasn’t that easy?

    I didn’t actually figure all this out from first principles, of course. The basics are out there if you rummage around for a while with the obvious keywords.

    Memo to self: affix a stable camera platform to the side of the house!

  • Udev rule to create /dev/scanner

    For some unknown reason, Kubuntu 8.04 doesn’t create a /dev/scanner link while it’s figuring out all the SCSI devices. I wanted to make the link sort of generic for any scanner that I might plug in, but I had to settle for a unique udev match.

    The scanner popped out of udev as /dev/sg5 this time and

    udevinfo --query=all --attribute-walk --name=/dev/sg5
    

    emits this useful chunk:

    looking at parent device '/devices/pci0000:00/0000:00:1e.0/0000:05:05.0/host4/target4:0:2/4:0:2:0':
        KERNELS=="4:0:2:0"
        SUBSYSTEMS=="scsi"
        DRIVERS==""
        ATTRS{device_blocked}=="0"
        ATTRS{type}=="3"
        ATTRS{scsi_level}=="3"
        ATTRS{vendor}=="HP      "
        ATTRS{model}=="C7670A          "
        ATTRS{rev}=="3945"
        ATTRS{state}=="running"
        ATTRS{timeout}=="0"
        ATTRS{iocounterbits}=="32"
        ATTRS{iorequest_cnt}=="0x656"
        ATTRS{iodone_cnt}=="0x656"
        ATTRS{ioerr_cnt}=="0x2"
        ATTRS{modalias}=="scsi:t-0x03"
        ATTRS{evt_media_change}=="0"
        ATTRS{queue_depth}=="2"
        ATTRS{queue_type}=="none"
    

    Plucking the readable bits out produces this stanza for /etc/udev/rules.d/60-symlinks.rules:

    #-- hack to create /dev/scanner
    SUBSYSTEMS=="scsi", ATTRS{vendor}=="HP", ATTRS{model}=="C7670A", SYMLINK+="scanner"
    

    Then you can use that to fire up xsane thusly:

    xsane hp:/dev/scanner

    With that in hand, edit GIMP’s ~/.gimp-whatever/menurc and ~/.gimp-whatever/pluginrc to replace sg5 (or whatever) with scanner.

    Works like a champ…

    The straight dope on writing udev rules is at http://www.reactivated.net/writing_udev_rules.html

    Memo to self: there’s got to be a way to make this generic, perhaps by piggybacking on whatever udev stanza assigns the scanner group to that /dev/sg? device.
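    One untested possibility for that memo: match on the SCSI peripheral type instead of the vendor/model strings. The udevinfo dump above shows ATTRS{type}=="3" (the SCSI “processor” type these scanners report), so a generic-ish stanza might read:

```text
#-- hack to create /dev/scanner, generic-ish version (untested)
SUBSYSTEMS=="scsi", ATTRS{type}=="3", SYMLINK+="scanner", GROUP="scanner"
```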

    Update: make sure you’re in the scanner group

    sudo usermod -a -G scanner username

  • Updating Dynamic DNS in Kubuntu

    Of course I do remote admin for my mother’s PC, which means I must know its IP address, which means it’s running ddclient and updating an entry at dyndns.

    The setup seems straightforward. In /etc/ddclient.conf you find:

    # Configuration file for ddclient generated by debconf
    #
    # /etc/ddclient.conf
    
    pid=/var/run/ddclient.pid
    daemon=3600
    protocol=dyndns2
    ssl=yes
    # use=web web=checkip.dyndns.com  -- old line, broken when checkip changed
    use=web web=checkip.dyndns.com/, web-skip="IP Address"
    syslog=yes
    server=members.dyndns.org
    login=your-own-id-here
    password='make-up-your-own'
    hostname-at-dyndns

    [Update: Something about checkip changed enough that the old line didn’t work. The web-skip made it work again. ]

    But, as the comment in that file shows, that’s not where you configure ddclient in (K)Ubuntu. You actually tweak the entries in /etc/default/ddclient so the right answer pops out when ddclient gets reconfigured by something or another.
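    For reference, the knobs in /etc/default/ddclient look something like this; the values shown are illustrative, not gospel:

```text
# /etc/default/ddclient -- illustrative values
run_dhclient="false"    # don't update when dhclient gets a new lease
run_ipup="false"        # don't update when a PPP link comes up
run_daemon="true"       # run ddclient as a daemon
daemon_interval="3600"  # wake hourly, matching daemon=3600 in ddclient.conf
```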

    It’s not clear to me how ddclient figures out when to update the DNS entry (when an update is “necessary”), so I also force an update by putting this in /etc/rc.local:

    ddclient -force

    Actually, I added that tweak when I was setting up another, slightly newer, PC for her and managed to fire off ddclient from my network. That aimed the DNS entry at my IP address and, had I not already been signed into her system, would have locked me out until her ddclient grabbed it back.

    I hate it when that happens.

    Memo to self: make sure the defaults match the current configuration.