Archive for category Software

Subaru Forester: Speed Demon!

I finally figured out why the Forester feels so slow:

Subaru Forester – speedometer

Here in the Northeast US, the maximum legal speed anywhere is 65 mph, less than half-scale, and typical around-town speeds hit 40 mph, barely 1/4 of full scale.

For all practical purposes, that needle barely moves during our usual trips.
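The scale arithmetic is easy to check; a quick sketch in plain Python (the 240° needle sweep is my assumption for illustration, because I never measured it):

```python
FULL_SCALE_MPH = 150     # Forester speedometer full scale
SWEEP_DEG = 240          # assumed needle sweep at full scale (not measured)

def needle_deg(mph):
    """Needle deflection in degrees at a given speed."""
    return SWEEP_DEG * mph / FULL_SCALE_MPH

print(65 / FULL_SCALE_MPH)   # ~0.43: the legal maximum sits below half scale
print(40 / FULL_SCALE_MPH)   # ~0.27: around-town speeds barely reach a quarter
```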

I like analog gauges to represent smoothly varying quantities that you must read at a glance, but a big digital display would actually be more useful than that thing.

A 150 mph speedometer scale makes no sense in what’s basically a shrunken all-wheel-drive SUV, albeit one with minimal off-road capabilities. Yes, perhaps the Forester could hit 150 mph, but why not have the scale top out around, say, 100 mph? Above that, you shouldn’t be paying much attention to the speedo, anyway.

The Sienna’s speedo went to 110 and, to the best of my knowledge, that needle never passed 85 mph, tops. However, ordinary (and legal) driving speeds filled the lower half of the scale, with the highest useful speeds in the next quadrant beyond vertical.

Yes, I know why the speedos sport such absurd numbers. I don’t have to like it.

There’s a servo motor (or some such) driving the needle; calibration has been a simple matter of software for a long, long time.

For whatever it’s worth, the Forester and the Sienna have both tachometers and automatic transmissions, a combination that converts shifting into a spectator sport. The Forester’s continuously variable transmission moves the tach needle in smooth glides, rather than abrupt jumps.



Fit Test Blocks for 3D Printers: OpenSCAD Version

During one of my recent presentations, somebody asked about the accuracy of 3D printed parts, which reminded me of another member of Coasterman’s Essential Calibration Set: the perimeter width/thickness test block. Back in the day, calibrating the extruder meant getting the actual ratio of the thread width to its thickness to match the ideal value you told Skeinforge to use; being a bit off meant that the final dimensions weren’t quite right.

But when I got it right, the Thing-O-Matic printed a test block with considerable success, despite the horrible retraction zittage:

Perimeter Calibration Block – yellow 1.10 rpm 0.33 0.66 mm

Alas, feeding the STL to Slic3r showed that it was grossly non-manifold, and none of the automated repair programs produced good results. Turns out it’s an STL created from a Sketchup model, no surprise there, and the newer slicers seem less tolerant of crappy models.

Sooo, here’s a new version built with OpenSCAD:

Fit Test Blocks – build view

You get three blocks-and-plugs at once, arranged in all the useful orientations, so you can test all the fits at the same time. They come off the platform about like you’d expect:

Fit test blocks

I tweaked the code to make the plugs longer than you see there; the short ones were mighty tough to pry out of those slots.

I ran the plugs across a fine file to clean the sides, without removing any base material, and the plugs fit into the slots with a firm push. I’d do exactly the same thing for a CNC milled part from the Sherline, plus breaking the edges & corners.

The plugs don’t fit exactly flush in the recesses for the two models on the right side of that first image, because the edges and corners aren’t beveled to match each other. It’s pretty close and, if they had to fit exactly, you could make it work with a few more licks of the file. The left one, printed with the slot on the top surface, fits exactly as flush as the one from the Thing-O-Matic.

Of course, there’s a cheat: the model allows 0.1 mm of internal clearance on all sides of the plug:

Fit Test Block – show view

The outside dimensions of all the blocks and plugs are dead on, within ±0.1 mm of nominal. You’d want to knock off the slight flange at the base and bevel the corners a bit, but unless it must fit inside something else, each object comes off the platform ready to use.

Feel free to dial that clearance up or down to suit your printer’s tolerances.

The OpenSCAD source code:

// Fit test block based on Coasterman's perimeter-wt.stl
// Ed Nisley - KE4ZNU - May 2014

Layout = "Show";			// Block Plug Show Build

//- Extrusion parameters must match reality!
//  Print with 2 shells and 3 solid layers

ThreadThick = 0.20;
ThreadWidth = 0.40;

Protrusion = 0.1;			// make holes end cleanly

function IntegerMultiple(Size,Unit) = Unit * ceil(Size / Unit);

// Dimensions

Clearance = 0.1;

PlugSize = [10.0,10.0,25.0];
BlockSize = [25.0,13.0,20.0];

PlugOffset = 10.0;

// Useful routines

module ShowPegGrid(Space = 10.0,Size = 1.0) {

	RangeX = floor(100 / Space);
	RangeY = floor(125 / Space);

	for (x=[-RangeX:RangeX])
		for (y=[-RangeY:RangeY])
			translate([x*Space,y*Space,Size/2])
				%cube(Size,center=true);

}

//-- NB: the module bodies and layouts below are reconstructed from a
//   truncated listing; the published source may differ in detail.

module Block() {
	difference() {
		translate([0,PlugSize[1] - PlugSize[1]/2 - BlockSize[1]/2,-PlugOffset])
			cube(BlockSize,center=true);
		Plug(Clearance);						// slot = plug grown by Clearance
	}
}

module Plug(Clear = 0.0) {
	minkowski() {
		if (Clear > 0.0)
			cube(2*Clear,center=true);			// grows each face outward by Clear
		cube(PlugSize,center=true);
	}
}

// Build it

ShowPegGrid();

if (Layout == "Block")
	Block();

if (Layout == "Plug")
	Plug();

if (Layout == "Show") {
	Block();
	Plug();
}

if (Layout == "Build") {

	translate([-30,0,0]) {						// slot opening upward
		translate([0,0,BlockSize[2]/2 + PlugOffset])
			Block();
		translate([0,-20,PlugSize[0]/2])
			rotate([0,90,0])
				Plug();
	}

	translate([0,0,0]) {						// slot opening sideways, block flat
		translate([0,0,BlockSize[1] - PlugSize[1]/2])
			rotate([90,0,0])
				Block();
		translate([0,-20,PlugSize[0]/2])
			rotate([0,90,0])
				Plug();
	}

	translate([30,0,0]) {						// slot opening sideways, block on end
		translate([0,0,BlockSize[0]/2])
			rotate([0,90,0])
				Block();
		translate([0,-20,PlugSize[0]/2])
			rotate([0,90,0])
				Plug();
	}
}


Boneheads Raven Skull: Extruder Contamination, Continued

The Boneheads Raven Skull demo came out reasonably well, albeit in a reduced size, on the Squidwrench Frank-o-Squid:

TOM286 – Raven Skull on platform

So I ran off a full-size version on the M2 for comparison:

Raven Skull – on M2 platform

The extruder apparently contained a gobbet of black PLA, left over from the Pink Panther Woman, that managed to hang on inside until the very tip of the beak:

Raven Skull – beak contamination

Close inspection found two black strands closer to the base of the printed parts:

Raven Skull – black contamination

The rear of the skull joins the front just behind the eye sockets, where the solid bottom layers make a visible contrast with the air behind the perimeter threads elsewhere. Refraction darkens some of the threads, but the two black patches stand out clearly.

If it weren’t natural PLA, those flaws wouldn’t be nearly so noticeable.

Were I doing this stuff for a living, I might dedicate a hot end (or an entire extruder) to each color and be done with it.

All in all, the printed quality is about as good as I could expect from a glorified glue gun.

The extreme slowdown while printing the tip of the beak pushed Pronterface’s remaining time estimate over the edge:

Boneheads – Raven – Pronterface time estimate

I’m not sure what the correct value should be …



Poughkeepsie to Rochester Road Trip: The Movie

With the Sony HDR-AS30V camera Gorilla Taped to the Sienna’s dashboard, we drove it to Rochester with a bank shot off Saratoga:

Saratoga Rt 50

I then converted nearly 5000 images into Yet Another Crappy Youtube Movie that is, mercifully, only 00:02:43 long.

The key steps:

mkdir /tmp/Video
cd /tmp/Video
sn=1 ; for f in /mnt/backup/Video/2014-05-29/* ; do printf -v dn 'dsc%05d.jpg' "$(( sn++ ))" ; cp -a "$f" "$dn" ; done
avconv -r 30 -i dsc%05d.jpg -q 5 Pok-Saratoga-Rochester.mp4

I tossed out a few images you didn’t need to see, then renumbered the remainder:

sn=1 ; for f in * ; do printf -v dn 'dsc%05d.jpg' "$(( sn++ ))" ; mv "$f" "$dn" ; done

The point of this exercise was to find out how Youtube treats “HD” movies. The original 1920×1080 MP4 file weighed in at nearly 500 MB with very good quality (due to the -q 5), but the Youtube “HD” result exhibits terrible compression artifacts; the black cloth crawls with huge checkerboard squares. Because the relatively slow-moving sequences at traffic signals and rest stops have excellent quality, I’d say Youtube’s video bit rate just doesn’t support images that change completely from frame to frame. Makes sense; nobody could watch such a thing, so why allocate that many bits?
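Running the numbers backs that up; a back-of-the-envelope sketch in plain Python, treating the round figures above ("nearly 500 MB", 30 frame/s) as exact:

```python
fps = 30                    # avconv -r 30
duration_s = 2 * 60 + 43    # 00:02:43
frames = fps * duration_s   # 4890, matching the "nearly 5000 images"

size_bytes = 500e6          # "nearly 500 MB" source file, taken as exact
mbit_per_s = size_bytes * 8 / duration_s / 1e6

print(frames)                # 4890
print(round(mbit_per_s, 1))  # 24.5 Mbit/s average -- far beyond streaming rates
```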

Now I have another Youtube movie-making data point.


Twiddling Linux Swap Performance

Depending on a solid model’s complexity, OpenSCAD will sometimes chew through system memory, consume the entire swap file, and then fall over dead. In an attempt to work around that situation, I recently jammed a 32 GB USB drive into the back of the box, turned it into a swap device, and then told the kernel to back off its enthusiasm for swapping.

Format the USB drive as a swap device:

sudo mkswap /dev/sd??   #--- unmount it before you do this!
Setting up swapspace version 1, size = 31265292 KiB
no label, UUID=0f559a8c-67b7-4fa3-a709-17aeec3104c4

Add it to /etc/fstab and set swap priorities:

# swap was on /dev/sdb3 during installation
UUID=e8532714-ad80-4aae-bee7-a9b37af63c8c none  swap sw,pri=1	0 0
UUID=0f559a8c-67b7-4fa3-a709-17aeec3104c4 none	swap sw,pri=5	0 0

Turn it on:

sudo swapon -a

Following those directions, dial back the kernel’s swappiness and limit the file cache growth:

sudo sysctl -w vm.swappiness=1
sudo sysctl -w vm.vfs_cache_pressure=50

Those commands now live in /etc/sysctl.d/99-swappiness.conf:

cat /etc/sysctl.d/99-swappiness.conf
# Improve responsiveness by reducing cache swapping
vm.swappiness=1
vm.vfs_cache_pressure=50

For whatever reason, WordPress turns underscores into blanks, so those obvious typos aren’t, really.

And then it should Just Work.

The box has 4 GB of RAM and, under normal circumstances, doesn’t swap at all, so I expect the USB drive will kick in only for extreme OpenSCAD models. The swappiness tuning should help keep the system responsive during large file operations.

I have no results to report, but if something blows up, I know what changed…


Hall Effect LED Current Control: Crisp Gate Drive Shaping

Because the current control loop closes through the Arduino loop(), the code’s path length limits the bandwidth. Worse, the PWM filter imposes a delay while the DC value catches up with the new duty cycle. Here’s what that looks like:

LoopStatus ILED 50 mA div – 200 50 150 25 mA

The setpoint current for this pulse is 200 mA, ramping upward from 50 mA. It should have started from 25 mA, but the loop really wasn’t under control here.

The top trace goes low during the drain current measurement, which occurs just before the code nudges the gate drive by 1 PWM count to reduce the error between the setpoint and the measurement. A delay(1) after each PWM change, plus the inherent delay due to all the program statements, produces an update every 1.7 ms, more or less.

Even at that low rate, the current overshoots by 50 mA before the loop can tamp it down again. The current varies by 200 mA for 7 PWM counts, call it 30 mA per count at the high end, so overshooting by 50 mA comes with the territory. There’s just not a lot of resolution available.
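That resolution estimate is just arithmetic; a quick check in plain Python using the numbers above:

```python
current_span_ma = 200    # the pulse current swings over a 200 mA range...
pwm_counts = 7           # ...across only 7 PWM counts of gate drive

ma_per_count = current_span_ma / pwm_counts
print(round(ma_per_count, 1))    # 28.6 mA per count -- "call it 30"

update_period_ms = 1.7           # one correction per loop() pass
print(round(1000 / update_period_ms))    # 588 Hz update rate, more or less
```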

The program reads each pulse duration and amplitude from an array-of-structs, so it’s a simple matter of software to save the gate drive voltage at the end of each pulse and restore it when that pulse comes around on the guitar again:

	if (millis() >= (EventStart + (unsigned long)Events[EventIndex].duration)) {
		Events[EventIndex].drive_a = VGateDriveA;						// save drive voltages
		Events[EventIndex].drive_b = VGateDriveB;

		if (++EventIndex > MAX_EVENT_INDEX)								// step to next event
			EventIndex = 0;

		VGateDriveA = Events[EventIndex].drive_a;						// restore previous drives
		VGateDriveB = Events[EventIndex].drive_b;

		digitalWrite(PIN_ENABLE_A,Events[EventIndex].en_a);				// enable gates for new state
		digitalWrite(PIN_ENABLE_B,Events[EventIndex].en_b);

		NeedHallNull = !(Events[EventIndex].en_a || Events[EventIndex].en_b);	// null sensor if all off

		EventStart = millis();											// record start time
	}
… which produces this happy result, with a different time scale to show all four pulses in the array:

I Sense Amp ILED 50 mA div – 200 100 150 50 mA

The top trace shows the current amp output that goes into the Arduino analog input and the bottom trace shows the MOSFET drain current. Notice those nice, crisp edges with a nearly complete lack of current adjustment.

The small bumps in the amp output just after the LED turns off happen while the code nulls the Hall effect sensor offset. Whenever the LEDs turn off, the code nulls the sensor, which is probably excessive; it really doesn’t have much else to do, so why not?

This trickery doesn’t improve the loop bandwidth at all, because the code must still drag the current to meet each setpoint, but now that happens only when the pulse first appears. After a few blinks, the current stabilizes at the setpoint and the loop need handle only slight variations due to temperature or battery voltage changes.

Speaking of voltages:

VDS ILED 50 mA div – 200 100 150 50 mA

The top trace now shows the MOSFET drain voltage and the bottom still has the LED current. There’s only 650 mV of difference at the drain for currents of 50 mA and 200 mA through the LEDs, with about 1 V of headroom remaining at 200 mA.

The power supply delivers 7.4 V to the anode end of the LEDs, so they drop 6.3 V @ 200 mA and 5.7 V @ 50 mA. Some informal knob twiddling suggests that the MOSFET loses control authority at about 6.5 V. Given that there’s not much energy left in the battery below 7.0 V anyway, the program could limit the maximum current to 50 mA when the battery hits 7 V, regain 650 mV of headroom, and run at reduced brightness (and perhaps a different blink pattern) until the battery drops to 6.5 V, at which point the lights go out.
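A quick consistency check on those voltages, in plain Python; the computed difference comes out 600 mV rather than the scoped 650 mV because the LED drops are rounded to 0.1 V:

```python
supply_v = 7.4                      # battery voltage at the LED anodes
led_drop_v = {200: 6.3, 50: 5.7}    # LED string drop (V) at each current (mA)

# drain voltage = supply minus LED drop
drain_v = {ma: round(supply_v - v, 2) for ma, v in led_drop_v.items()}
print(drain_v)    # {200: 1.1, 50: 1.7} -- about 1 V of headroom at full current

# backing off to 50 mA recovers the difference in LED drop as headroom
recovered_mv = round((led_drop_v[200] - led_drop_v[50]) * 1000)
print(recovered_mv)    # 600 mV
```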

There’s more improvement to be had in the code, but those pulses look much better.

(If you’re keeping track, as I generally don’t, this is Post Number 2048: love those round numbers!)



Firefox Accounts: Total FAIL

So I’m in the process of installing Xubuntu 14.04 LTS on a box and have reached the point where I’m ready to install various daemons and utilities, then tweak their settings. Time to have the new Firefox inhale all my settings from the Firefox on my 13.10 desktop, which will let me find all my blog posts with that information. This used to be a simple matter of going into the new Firefox’s Preferences, getting a one-time pairing code, typing it into the other desktop, and away it went, synchronizing the two installations.

But, no.

While I wasn’t watching, Firefox crept up to Version 29 and, at some point, Mozilla introduced Firefox Accounts. Why would they do that? Here’s a hint:

Firefox Accounts is a consumer account system which provides access to services run by Mozilla, such as Firefox Marketplace and the next version of Firefox Sync.

Firefox Marketplace? Say no more: money changes everything!

Oh, and the “next version of Firefox Sync” is totally incompatible with the “old version” used by all existing Firefox installations.

But it gets worse (emphasis mine):

What if I don’t want to update to the new Sync?

  • While the old version of Sync will continue to work, the latest version of Firefox doesn’t support adding new devices to the old version of Sync. This means that you won’t be able to sync with a new device.
  • Mozilla will continue to host the old version of Sync for a limited time to allow for migration to Firefox Accounts.

In order to sync the 14.04 Firefox, I must upgrade the 13.10 Firefox, but after I do that, none of the other boxes will be able to sync with either of them. I haven’t checked whether Firefox Version 29 is offered for the 10.04 LTS installation that’s running on the LinuxCNC boxes.

My 13.10 desktop has endured many, many, many automatic Firefox upgrades during their recent version incrementing mania and, for whatever reason, it doesn’t offer “New Sync” as an option, despite being at the same Version 29 as the 14.04 installation. This is likely a problem with some Firefox extension or another, but I disabled them to no avail.

When all else fails, you can always create a new profile by starting the Firefox Profile Manager:

firefox -profilemanager

That works as expected; the new and completely bare profile let me create a new Firefox Account, which entails the usual to-ing and fro-ing with emailed one-time authorizations and suchlike. OK, now I can use the shiny new Firefox Marketplace, should I so desire. Be still, my heart!

So, we progress.

But my original intent was to get all the setup data into the 14.04 Firefox, so (on the 13.10 Firefox) I followed the directions about transferring the old settings into the new profile, which involves tediously hand-copying a bunch of files from one cryptic directory to another. This is a brutally user-hostile operation that only geeks should endure; there is absolutely no automation to be found.

Having a new profile, albeit without any of the old extensions, I attempt to sync my settings, only to discover that the new Firefox Sync will not synchronize my stored passwords, which was pretty much the whole point of this exercise.

Turns out that’s deliberate:

Firefox Sync will not synchronize your passwords if a master password is set. If you would like to continue synchronizing your passwords, try removing your master password before synchronizing.

Now, why would I have a master password? Because, long ago, the good folks at Mozilla highly recommended it (emphasis mine):

It takes only fifteen seconds for a prying user sitting at your computer to see the list of all the passwords you have told Firefox or Thunderbird to save. The list is shown plain as day. It can include webmail and forum passwords or email server passwords. Using a Master Password is highly recommended, to prevent such prying users from seeing the list. By setting a Master Password, anyone using your profile will be prompted to enter the master password when access to your stored passwords is needed.

So, the new Firefox Sync requires a Firefox Account that doesn’t do anything I need done and, in order to sync my 13.10 settings into the 14.04 box, I must have a new Firefox Account and make both Firefox installations less secure.

I think it’s possible to remove the master password, sync the stored passwords, then restore the master password. When you remove the password, you get a confirmation message:

You have deleted your Master Password. Your stored web and email passwords, form data, and private keys will not be protected.

Firefox allegedly uses the Gnome keyring to get a master password protecting the whole Firefox session, but displaying all the stored passwords is just a few clicks away after that; needless to say, Firefox on 13.10 doesn’t use the keyring. Given that Chromium on Xubuntu 13.10 does not use the Gnome keyring, it’s entirely unprotected. Maybe the 14.04 box will use the keyring for both browsers?

What the hell do those people smoke? I want some of that, right here, right now!

Verily, money changes everything…