The Smell of Molten Projects in the Morning

Ed Nisley's Blog: Shop notes, electronics, firmware, machinery, 3D printing, laser cuttery, and curiosities. Contents: 100% human thinking, 0% AI slop.

Category: Photography & Images

Taking & making images.

  • Monthly Image: Bootleg Bell Ringers

    MHVLUG meetings end around 8 pm and, depending on this-and-that, the bell atop Old Main on the Vassar College campus will be tolling the hour as we emerge. Here’s a scene-setting photo from Wikimedia, taken from about where I parked the car:

    Vassar College Old Main Building

    Although the bell didn’t have its usual steady rhythm after the most recent meeting, I didn’t expect this:

    Bell Ringers atop Vassar Old Main

    The tree grows in the near foreground, not over Old Main.

    Two of them realized the risk of permanent hearing damage, but do you see the real hazard?

    Take a closer look:

    Bell Ringers atop Vassar Old Main – detail

    No, it’s not the guy leaning against the historic-but-flimsy railing. That folded-dipole antenna over on the right side most likely connects to Vassar’s 45 W UHF EMS repeater; at that range, RF can burn deeply.

    Obviously, the student body needs more amateur radio operators…

    Taken with the Canon SX230HS braced on the side of the Forester and zoomed all the way.

  • HP 7475A Plotter: Pen Holder Height Map Cap

    The “pen holder” in an HP 7475A plotter carries the pen across the width of the paper:

    HP 7475A – Pen Holder – overview

    Given that it was designed to carry pens, not knives, I wasn’t surprised that the spring-loaded finger clamping the knife adapter didn’t apply enough force to hold the adapter in place against the cutting forces. I figured a quick test of a gizmo to stabilize the adapter would be in order, even though I knew:

    • The pen holder doesn’t apply enough downward force
    • The knife adapter doesn’t have a depth-of-cut shroud around the blade

    In order to build the gizmo, I need the carrier’s dimensions…

    An overhead photo of the pen holder shows the layout in the XY plane:

    HP7475A – pen holder – top view

    I shouldn’t have used graph paper as a background, because the next step was to remove the background and isolate the carrier:

    HP7475A – pen holder – top view – isolated

    The carrier measures 26.8 mm front-to-back, so scaling a grid to match that dimension provides a coordinate system overlay:

    HP7475A – pen holder – top view – 1 mm grid

    The (0,0) origin sits at the lower left, so you can read off all the relevant coordinates as needed.

    However, rather than go full-frontal digital, I resized the isolated image to 20 pixel/mm, turned it into a height map, and treated it like a chocolate mold or cookie cutter with gray values scaled to the desired height:

    • Black = background to be removed
    • Dark gray = 2.5 mm thick
    • Medium gray = 3.5 mm
    • Light gray = 7 mm
    • White = 10 mm
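
    If I read OpenSCAD’s surface() docs right, a PNG’s gray levels map onto heights from 0 to 100, and the scale() factor of 10/100 in the code below squeezes that into 0 – 10 mm, so each target height implies a gray value: gray = (height / 10 mm) × 255. A sketch of that arithmetic — my reading of the mapping, not a measured calibration:

```shell
# Gray level for a target height, assuming surface() maps gray 0-255
# onto z 0-100 and the model's z scale of 10/100 makes white = 10 mm.
gray_for_height() {
    awk -v h="$1" 'BEGIN { printf "%d", h / 10 * 255 + 0.5 }'
}

gray_for_height 2.5 ; echo   # dark gray
gray_for_height 3.5 ; echo   # medium gray
gray_for_height 7   ; echo   # light gray
gray_for_height 10  ; echo   # white
```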

    Drawing the walls with a 40 pixel diameter pen makes them 2 mm wide at 20 pixel/mm:

    HP7475A – knife stabilizer

    It’s painfully obvious why I don’t do much freehand drawing, although the knife adapter hole is supposed to be oval.

    As with cookie cutters and chocolate molds, there’s no need for that much resolution, so I rescaled it to 4 pixel/mm, saved that tiny image as a PNG file, and handed it to OpenSCAD’s surface() function to get a solid model. This being a one-off, I typed this OpenSCAD source code directly into the OpenSCAD editor on the fly, then remembered to save it (!) before shutting down:

    rotate([0,180,0])
    mirror([0,0,1])
    scale([0.25,0.25,10/100])
    difference() {
      translate([0,0,-2.0]) render(convexity=10)
        surface("/long-and-tedious-path/HP7475A - knife stabilizer - scaled.png",center=true);
      translate([0,0,-200])
        cube(400,center=true);
    }
    

    The mirror() transformation inverts the model top-to-bottom along the Z axis, compensating for drawing the height map as though the walls rise upward from the pen carrier, after which the rotate([0,180,0]) transformation puts the flat side down to make it buildable.
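
    The 20-to-4 pixel/mm rescale is a single ImageMagick resize, because 4/20 = 20 %. A sketch; the source filename is a stand-in, and the convert call runs only if that file exists:

```shell
# Downsample the 20 pixel/mm height map drawing to the 4 pixel/mm
# version surface() reads: the resize factor is 4/20 = 20 %.
# The source filename is a hypothetical stand-in.
src="HP7475A - knife stabilizer.png"
dst="HP7475A - knife stabilizer - scaled.png"
if command -v convert >/dev/null && [ -f "$src" ]; then
    convert "$src" -resize 20% "$dst"
fi
ppmm=$(( 20 * 20 / 100 ))   # 20 pixel/mm at 20 % leaves 4 pixel/mm
```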

    The height map image conversion produces a bazillion irrelevant faces, but it’s quick and easy:

    HP7475A – Roland knife stabilizer – height map model

    I’ve been using Slic3r’s Hilbert Curve pattern for top & bottom infill to get a nice textured result:

    Roland knife stabilizer – height map – Slic3r preview

    Which printed just about like you’d expect:

    HP 7475A – Roland knife adapter and stabilizer – height map – bottom view

    I reamed out the hole with a step drill (the HP pens are close enough to 7/16 as to make no difference here) to get the knife adapter to fit, but the walls and suchlike came out close enough.

    Then it just snapped into place:

    HP 7475A – Roland knife adapter and stabilizer – height map

    Actually, no, it didn’t just snap into place: some (dis)assembly was required.

    First, remove the brass knife bearing from the adapter, push the knife adapter shell into the pen holder, slide the stabilizer cap down over the adapter, press it firmly around the pen holder, reinstall the brass knife bearing, then it’s ready.

    The cuts in the green vinyl just to the left of the knife blade (in a window decoration sheet I spotted in a trash can) show that the blade can cut, albeit with some finger pressure, but the fancy red stabilizer didn’t stay stuck on the pen carrier nearly as well as I expected. A screw attachment will help with that, which calls for going all digital on those coordinates.

    But it was quick & easy…

  • Monthly Image: Left Cross

    It’s the start of a new riding season and we’re returning from a concert at Vassar. I’m cranking 20+ mph, pushed by a gusty tailwind.

    T minus 7 seconds:

    Cedar Valley Rd – Left Cross – T-7

    The white car approaches the intersection a bit faster than usual, which leads me to expect a New York State Rolling Stop-and-Go right turn directly in front of me.

    T minus 5 seconds:

    Cedar Valley Rd – Left Cross – T-5

    The white car slows enough that I now expect a stop with the front end well onto the shoulder. A quick check in the mirror shows no traffic behind me: I can take the lane if needed. This intersection always has a large gravel patch spanning the shoulder, so I must move closer to the fog line anyway.

    T minus 2 seconds:

    Cedar Valley Rd – Left Cross – T-2

    The white car comes to a full stop, not too far onto the shoulder, and my fingers come off the brakes. I gotta work on that fingers-up position, though.

    Whoops, a classic left cross from the black SUV!

    T minus 1 second:

    Cedar Valley Rd – Left Cross – T-1

    I’m now braking hard, barely to the left of the gravel patch.

    T zero:

    Cedar Valley Rd – Left Cross – T-0

    Well, that was close.

    Somewhat to my surprise, the white car hasn’t crept any further onto the shoulder.

    The SUV driver gives me a cheery wave, as if to thank me for not scratching the doors. I never make hand gestures, but I did tell him he does nice work.

    It’s hard to not see a faired long-wheelbase recumbent, head-on in bright sunlight, not to mention that I’m wearing my new Sugoi Zap Bike Jacket in Super Nova retroreflective lime green with retroreflective lime green utility gloves.

    I. Am. Visible. In. Any. Light. Dammit.

    It is, apparently, easy to misjudge a bike’s speed, although driver-ed courses used to recommend erring on the side of not trying to beat an oncoming vehicle. Perhaps that recommendation has become inoperative?

    The corresponding maneuver by a car passing you is known as a right hook.

    Memo to Self: Always look at the license plate to give the camera a straight-on picture.

  • Kenmore 158 UI: Button Rework

    Simplifying the Kenmore 158 UI’s buttons definitely improved the user experience:

    Kenmore 158 Controller – Simplified Buttons

    The trick depends on specifying the colors with HSB, rather than RGB, so that the buttons in each row share the same hue and differ only in saturation and brightness. The ImageMagick incantations look like this:

    • Disabled: hsb\(${HUE}%,50%,40%\)
    • Unselected: hsb\(${HUE}%,100%,70%\)
    • Selected: hsb\(${HUE}%,100%,100%\)

    For whatever reason, the hue must be a percentage if the other parameters are also percentages. At least, I couldn’t figure out how to make a plain integer without a percent sign suffix work as a degree value for hue.
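
    The percentage form is easy enough to reach from a familiar hue angle, because 100 % of hue covers the same circle as 360°. A sketch of the conversion; reassuringly, it reproduces the HUE=33 (120°) and HUE=83 (300°) values used in the scripts below:

```shell
# Convert a hue angle in degrees to the percentage form ImageMagick
# wants here: 100 % of hue covers the same circle as 360 degrees.
hue_pct() {
    echo $(( ($1 * 100 + 180) / 360 ))   # nearest integer percent
}

hue_pct 120   # matches the HUE=33 used below
hue_pct 300   # matches the HUE=83 used below
```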

    Anyhow, in real life they look pretty good and make the selected buttons much more obvious:

    Kenmore 158 UI – Simplified buttons – contrast stretch

    The LCD screen looks just like that; I blew out the contrast on the surroundings to provide some context. The green square on the left is the Arduino Mega’s power LED, the purple dot on the right is the heartbeat spot.

    The new “needle stop anywhere” symbol (left middle) is the White Draughts Man Unicode character: ⛀ = U+26C0. We call them checkers here in the US, but it’s supposed to look like a bobbin: winding a bobbin requires disengaging the handwheel clutch to stop the main shaft, and because the needle positioning code depends on the shaft position sensor, the needle can then come to rest anywhere.

    Weirdly, Unicode has no glyphs for sewing, not even a spool of thread, although “Fish Cake With Swirl” (🍥 = U+1F365) came close. Your browser must have access to a font with deep Unicode support in order to see that one…

    You can’t say I didn’t try:

    Unicode characters – bobbin-like shapes

    The script that generates all the buttons:

    ./mkBFam.sh NdDn  9 ⤓
    ./mkBFam.sh NdUp  9 ⤒
    ./mkBFam.sh NdAny 9 ⛀ 80 80 40
    ./mkBFam.sh PdOne 33 One 120 80
    ./mkBFam.sh PdFol 33 Follow 120 80
    ./mkBFam.sh PdRun 33 Run 120 80
    ./mkBFam.sh SpMax 83  🏃 80 80 40
    ./mkBFam.sh SpMed 83  🐇 80 80 40
    ./mkBFam.sh SpLow 83  🐌
    montage *bmp -tile 3x -geometry +2+2 Buttons.png
    display Buttons.png
    

    The script that generates all the versions of a single button:

    # create family of button images
    # Ed Nisley - KE4ZNU
    # March 2015
    
    [ -z "$1" ] && FN=Test || FN=$1
    [ -z "$2" ] && HUE=30  || HUE=$2
    [ -z "$3" ] && TXT=x   || TXT=$3
    [ -z "$4" ] && SX=80   || SX=$4
    [ -z "$5" ] && SY=80   || SY=$5
    [ -z "$6" ] && PT=25   || PT=$6
    [ -z "$7" ] && BDR=10  || BDR=$7
    
    echo fn=$FN hue=$HUE txt=$TXT sx=$SX sy=$SY pt=$PT bdr=$BDR
    
    echo Working ...
    
    echo Shape
    
    echo Buttons
    echo  .. Disabled
    convert -size ${SX}x${SY} xc:none \
      -fill hsb\(${HUE}%,50%,40%\) -draw "roundrectangle $BDR,$BDR $((SX-BDR)),$((SY-BDR)) $((BDR-2)),$((BDR-2))" \
      ${FN}_s.png
    convert ${FN}_s.png \
      -font /usr/share/fonts/custom/Symbola.ttf  -pointsize ${PT}  -fill gray20  -stroke gray20 \
      -gravity Center  -annotate 0 "${TXT}"  -trim -repage 0x0+7+7 \
      \( +clone -background navy -shadow 80x4+4+4 \) +swap \
      -background snow4  -flatten \
      ${FN}0.png
    
    echo  .. Enabled
    convert -size ${SX}x${SY} xc:none \
      -fill hsb\(${HUE}%,100%,70%\) -draw "roundrectangle $BDR,$BDR $((SX-BDR)),$((SY-BDR)) $((BDR-2)),$((BDR-2))" \
      ${FN}_s.png
    convert ${FN}_s.png \
      -font /usr/share/fonts/custom/Symbola.ttf  -pointsize $PT  -fill black  -stroke black \
      -gravity Center  -annotate 0 "${TXT}"  -trim -repage 0x0+7+7 \
      \( +clone -background navy -shadow 80x4+4+4 \) +swap \
      -background snow4  -flatten \
      ${FN}1.png
    
    echo  .. Pressed
    convert -size ${SX}x${SY} xc:none \
      -fill hsb\(${HUE}%,100%,100%\) -draw "roundrectangle $BDR,$BDR $((SX-BDR)),$((SY-BDR)) $((BDR-2)),$((BDR-2))" \
      ${FN}_s.png
    convert ${FN}_s.png \
      -font /usr/share/fonts/custom/Symbola.ttf  -pointsize ${PT}  -fill black  -stroke black \
      -gravity Center  -annotate 0 "${TXT}"  -trim -repage 0x0+7+7 \
      \( +clone -background navy -shadow 80x4+4+4 -flip -flop \) +swap \
      -background snow4  -flatten \
      ${FN}2.png
    
    echo BMPs
    for ((i=0 ; i <= 2 ; i++))
    do
     convert ${FN}${i}.png -type truecolor ${FN}${i}.bmp
    # display -resize 300% ${FN}${i}.bmp
    done
    
    rm ${FN}_s.png ${FN}?.png
    
    echo Done!
    
  • RPi: Time-lapse Photos

    The Raspberry Pi doc provides a recipe for the simplest possible time-lapse webcam: fire fswebcam once a minute from a cron job.

    The crontab entry looks much like their example:

    * * * * * /home/ed/bin/grabimg.sh 2>&1
    

    I put all the camera details in the ~/.config/fswebcam.conf config file:

    # Logitech C130 / C510 camera
    device v4l2:/dev/video0
    input 0
    resolution 1280x720
    set sharpness=128
    jpeg 95
    set "power line frequency"="60 hz"
    #no-banner
    

    That simplifies the ~/bin/grabimg.sh script:

    #!/bin/bash
    DATE=$(date +"%Y-%m-%d_%H.%M.%S")
    fswebcam -c /home/ed/.config/fswebcam.conf /mnt/samba/webcam/$DATE.jpg
    

    The output directory lives on a Samba-shared USB stick jammed in the back of the Asus router, so I need not putz with a Samba server on the RPi.
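
    One frame a minute is 1,440 images a day, so the stick will fill up eventually. A housekeeping sketch that could ride along in cron; the seven-day cutoff is my arbitrary choice:

```shell
# Prune time-lapse frames older than a week from the grabimg.sh
# output directory; the 7 day retention is an arbitrary choice.
WEBCAM_DIR=/mnt/samba/webcam
KEEP_DAYS=7
if [ -d "$WEBCAM_DIR" ]; then
    find "$WEBCAM_DIR" -name '*.jpg' -mtime +"$KEEP_DAYS" -delete
fi
```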

    Manually mounting the share, which for the moment is the /testfolder/webcam directory on the USB stick:

    sudo mount -t cifs -o user=ed //gateway/testfolder/webcam /mnt/samba
    

    I’m pretty sure automagically mounting the share will require the same workarounds as on my desktop box, but this fstab entry is a start:

    #-- ASUS router Samba share
    //gateway/testfolder/webcam	/mnt/samba	cifs	auto,uid=ed,credentials=/root/.gateway-id 0 0
    

    That requires a corresponding credentials file with all the secret info:

    domain=WHATSMYNET
    username=ed
    password=pick-your-own
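
    Since that file holds a cleartext password, it’s worth making sure only root can read it:

```shell
# Lock down the credentials file named in the fstab entry; guarded
# so the sketch is harmless on a box where the file doesn't exist.
CRED=/root/.gateway-id
if [ -f "$CRED" ]; then
    chown root:root "$CRED"
    chmod 600 "$CRED"
fi
```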
    

    This is mostly a test to see how long it takes before something on the RPi goes toes-up enough to require a manual reboot. Disabling the WiFi link’s power saving mode seems to keep the RPi on the air all the time, which is a start.

  • Lurid Filament Colors vs. Monochrome Images

    An experiment with images of an object made with translucent magenta PETG…

    The Slic3r preview of the object looks like this, just so you know what you should be seeing:

    Necklace Heart – Slic3r Preview

    It’s pretty much a saturated red blob with the Canon SX230HS in full color mode:

    Necklace Heart – Slic3r Preview

    Unleashing The GIMP and desaturating the image based on luminosity helps a lot:

    Necklace Heart – magenta PETG – desaturate luminosity

    Desaturating based on either lightness or average, whatever that is, produced similar results.
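
    For the record, the three desaturation modes boil down to simple per-pixel formulas; a sketch for one saturated sample pixel, assuming The GIMP’s luminosity mode uses Rec. 709-style weights (“average” is just the arithmetic mean of the channels):

```shell
# Gray values from the three desaturate modes for a saturated
# red-magenta sample pixel (R,G,B) = (220,30,120).
R=220 G=30 B=120
luminosity=$(awk -v r=$R -v g=$G -v b=$B \
    'BEGIN { printf "%d", 0.2126*r + 0.7152*g + 0.0722*b }')
max=$R ; min=$R
for v in $G $B ; do
    if [ "$v" -gt "$max" ]; then max=$v ; fi
    if [ "$v" -lt "$min" ]; then min=$v ; fi
done
lightness=$(( (max + min) / 2 ))   # midpoint of brightest & darkest
average=$(( (R + G + B) / 3 ))     # plain channel mean
echo "luminosity=$luminosity lightness=$lightness average=$average"
```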

    Auto level adjustment plus manual value tweaking brings out more detail from that image:

    Necklace Heart – magenta PETG – desaturated – adjusted

    I also tried using the camera in its B&W mode to discard the color information up front:

    Necklace Heart – circle detail

    It’s taken through the macro adapter with the LEDs turned off and obviously benefits from better lighting, with an LED flashlight at grazing incidence. You can even see the Hilbert Curve top infill.

    The object of the exercise was to see if those tiny dots would print properly, which they did:

    Necklace Heart – dots detail

    Now, admittedly, PETG still produces fine hairs, but those dots consist of two layers and two thread widths, so it’s a harsh retraction test.

    A look at the other side:

    Necklace Heart – detail

    All in all, both the object and the pix worked out much better than I expected.

    Leaving the camera in full color mode and processing the images in The GIMP means less fiddling with the camera settings, which seems like a net win.

  • RPi: Logitech QuickCam for Notebook vs. fswebcam

    Combining the camera data I collected a while ago with a few hours of screwing around with this old Logitech camera:

    Logitech QuickCam for Notebook Plus – front

    I’m convinced it’s the worst camera I’d be willing to use in any practical application.

    The camera offers these controls:

    fswebcam --list-controls
    --- Opening /dev/video0...
    Trying source module v4l2...
    /dev/video0 opened.
    No input was specified, using the first.
    Available Controls    Current Value  Range
    ------------------    -------------  -----
    Brightness            128 (50%)      0 - 255
    Contrast              128 (50%)      0 - 255
    Gamma                 4              1 - 6
    Exposure              2343 (8%)      781 - 18750
    Gain, Automatic       True           True | False
    Power Line Frequency  Disabled       Disabled | 50 Hz | 60 Hz
    Sharpness             2              0 - 3
    Adjusting resolution from 384x288 to 320x240.
    

    Putting the non-changing setup data into a fswebcam configuration file:

    cat ~/.config/fswebcam.conf
    # Logitech QuickCam for Notebook Plus -- 046d:08d8
    device v4l2:/dev/video0
    input gspca_zc3xx
    resolution 320x240
    scale 640x480
    set sharpness=1
    #jpeg 95
    set "power line frequency"="60 hz"
    

    Trying to use 640×480 generally produces a “Corrupt JPEG data: premature end of data segment” error; the result looks no better than this and is generally much worse:

    Logitech 08d8 – 640×480

    The top of the picture looks pretty good, with great detail on those dust particles, but at some point the data transfer coughs and wrecks the rest of the image. I could crop the top half to the hipster 16:9 format of 640×360, but the transfer doesn’t always fail that far down the image.
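
    Cropping away the flaky bottom half would be a one-liner; a sketch with a hypothetical filename:

```shell
# Keep only the top 640x360 of a 640x480 capture; 360 is the 16:9
# height for a 640 pixel width.  Filenames are stand-ins, and the
# convert call runs only where ImageMagick and the file exist.
src=frame.jpg
if command -v convert >/dev/null && [ -f "$src" ]; then
    convert "$src" -crop 640x360+0+0 +repage frame-169.jpg
fi
aspect_h=$(( 640 * 9 / 16 ))
```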

    The -R flag that specifies using direct reads instead of mmap, whatever that means, doesn’t help. In fact, the camera generally crashes hard enough to require a power cycle.

    Delaying a second with -D1 and / or skipping a frame with -S1 don’t help, either.

    The camera works perfectly at 640×480 using fswebcam under Xubuntu 14.04 on a Dell Latitude E6410 laptop, so I’m pretty sure this is a case of the Raspberry Pi being a bit underpowered for the job / the ARM driver taking too long / something totally obscure. A random comment somewhere observed that switching from Raspbian to Arch Linux (the ARM version) solved a similar video camera problem, so there’s surely room for improvement.

    Dragorn of Kismet reports that the Raspberry Pi USB hardware doesn’t actually support USB 2.0 data rates, which also produces problems with Ethernet throughput. The comments in that Slashdot thread provide enough details: the boat has many holes and it’s not a software problem.

    For lack of anything more adventurous, the config file takes a 320×240 image and scales it up to 640×480, which looks about as crappy as you’d expect:

    Logitech 08d8 – 320×240 scaled

    Even that low resolution will occasionally drop a few bytes along the way, but much less often.
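
    Because the failures leave truncated files behind, a grab can be sanity-checked by looking for the JPEG end-of-image marker (FF D9) in the file’s last two bytes and re-shot when it’s missing. A retry sketch; the helper names are mine:

```shell
# An intact JPEG ends with the EOI marker FF D9; retry the capture
# a few times when the marker is missing.  Helper names are mine.
jpeg_ok() {
    [ "$(tail -c 2 "$1" | od -An -tx1 | tr -d ' \n')" = "ffd9" ]
}

grab_with_retry() {
    local out=$1 try
    for try in 1 2 3 ; do
        fswebcam -c ~/.config/fswebcam.conf "$out"
        if jpeg_ok "$out" ; then return 0 ; fi
    done
    return 1
}
```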

    The picture seems a bit blown out, so I set the exposure to the absolute minimum:

    fswebcam -c ~/.config/fswebcam.conf --set exposure=781 "Logitech 08d8 - expose 781.jpg"
    

    Which looks like this:

    Logitech 08d8 – expose 781

    Given that’s happening a foot under a desk lamp aimed well away from the scene, the other end of the exposure scale around 18000 produces a uselessly burned out image. I think a husky neutral-density filter would be in order for use with my M2’s under-gantry LED panels. The camera seems to be an early design targeting the poorly illuminated Youtube / video chat market segment (I love it when I can talk like that).
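
    Finding a tolerable setting between 781 and 18750 goes faster with a sweep; a dry-run sketch that only prints the commands (drop the echo to actually capture):

```shell
# Sweep the exposure range reported by --list-controls (781 - 18750);
# echo makes this a dry run, remove it to really capture frames.
exposures="781 1500 3000 6000 12000 18750"
for exp in $exposures ; do
    echo fswebcam -c ~/.config/fswebcam.conf --set exposure=$exp "expose-$exp.jpg"
done
```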

    There’s probably a quick-and-dirty ImageMagick color correction technique, although Fred’s full-blown autocorrection scripts seem much too heavy-handed for a Raspberry Pi…