After a heavy but hardly extraordinary snowstorm the day after a Christmas spent in northern New Jersey with family, it was time to return home to San Francisco. It had been over a day since the snow had subsided, plenty of time for things to get back up to speed. And although air travel was on the rise this year and passengers whose flights had been canceled were desperate to find a way back home, I had a reserved seat and wasn't too worried. Nevertheless, I left my parents' house with substantial margin to get to the airport in plenty of time.

My plan was to leave the station in suburban New Jersey on the 11:06 southbound to Secaucus, connect on one of a series of LIRR trains to Jamaica, then get to the airport 2 hours ahead of my departure. No problem.

But things didn't quite go according to plan.

First, my sister and I waited at the platform, where we'd arrived with 5 minutes to spare: first 5 minutes, then 10, then 15 minutes past the 11:06 scheduled departure time. Finally we heard the whistle, followed soon after by the announcement that the 11:23 to Hoboken (stopping at Secaucus) was arriving.

I was puzzled by this until I realized the train must have been on a weekend schedule. I'd seen nothing about this on the New Jersey Transit website when I'd set up my schedule the previous night. It could have been far worse than the 17 minute delay -- weekend trains run only every hour. So 25 minutes would have been normal, with a worst-case wait of 55 minutes past the normal 11:06 departure. I got lucky.

The train got to Secaucus without incident. I then assertively moved to Platform 2, where trains bound for New York stop, and got there within 30 seconds of the doors closing on a waiting train. Lucky again, although the following train was only 5 minutes later.

Next was New York Penn Station. Another problem: I'd just missed the group of trains for Jamaica, the last leaving at 12:35, with the next train at 13:10. Actually, if the 12:35 had been at all delayed and I'd committed no missteps, I might have had a chance had I sprinted. But instead of rushing like a crazy guy I decided to buy some of the spectacular bagels sold there, listen to a guitarist after tossing a dollar into his case, and generally soak up the atmosphere of the exceptional station. When the train finally arrived, at 13:00, I was able to board it without rush, and pulled out Bangkok Haunts, the book I was reading. I had 2:20 until my departure time -- no longer as much margin as I'd wanted, but still it should have been no problem.

After a slow approach the train reached Jamaica, and I again assertively moved to avoid getting caught in congestion for the AirTrain. Except when I reached the AirTrain ticket kiosks, they were taped over. This didn't look good.

"There's a free bus to JFK -- go downstairs for a free bus to JFK" a heavyset guy in a uniform was announcing. I moved quickly down the escalator to the ground floor.

Here the scene was surreal. A group of us was speed-walking to reach the back of the line waiting for the bus. It extended a solid hundred meters before I reached the end. I'd made good time, however, and even in a few short seconds there was a substantial and growing line behind me.

I saw some guys who had the lackadaisical air of employees on duty. "Do you have any idea how long this will take?" I asked, of neither of them in particular.

"Maybe two hours. There's only one bus making round trips."

This was beyond absurd. How many of these people had two hours to spare? I pondered finding a taxi, afraid to search because I'd lose my space in line, and it was possible they were incorrect, and there were more buses. But then I saw a taxi pull to the nearby curb, and I instantly left my spot in line to run over.

The driver took four of us, charging $15 each, all highly illegal, but I was hardly in a position to argue, and was only grateful he was there. And so we left. Traffic around the airport was appalling. He said the AirTrain was running between the terminals, just not to and from the train station. I paid and tipped the driver and ran into the airport. But I still had time; when I glanced at my watch it was around 14:15.

Next I took an elevator to the ticket level. There was a guy there wearing a fur coat, which I thought fairly unusual. When I commented on the travel situation, he said he'd taken the bus from Jamaica Station. He seemed surprised at my description of the huge lines: he said it had only been 45 minutes from the station to the gate. So he was probably with the earlier wave of trains, and perhaps the bus was able to stay on top of the passenger flow at first. In any case, it seems just missing the 12:35 train from Penn Station may have been even more costly than I'd imagined. Another example of the non-linear cascade of travel delays.

Lines at the ticket counters of the terminal were appalling. However, I found a group of eight self-serve kiosks with only one person ahead of me. Unfortunately the kiosks were clogged with international travelers who were required to find assistance to verify their documents. I waited approximately 7 minutes before a woman was able to take care of two of the passengers, liberating a kiosk for me. At that point I was able to check in within 60 seconds.

From the kiosk to security it was like a refugee camp. Military cots were laid out along the walkways, each occupied by someone with a look of futility and defeat. I tried to avoid eye contact and pressed on to security.

Security went fairly quickly, actually, although I'm convinced my "Priority" line was actually slower. Then a long walk to Gate 35, and here I was at the gate with an hour in hand. Things looked good as I sat on the ground next to an electrical power stand, until I realized it was 15:05 with no boarding announcement yet. With the huge crowd that wasn't good.

Five minutes later a voice announced that the plane was ready to go except for the first officer, who had just arrived at LaGuardia. Result: a 40 minute delay. The air travel system is such a fragile, delicate house of cards. It promises to get people to their destination quickly if not comfortably, and so often fails on both counts. On the other hand, the rail links of my trip were almost a non-issue; I'm perfectly happy to sit on the train reading, reviewing the papers I've been assigned, playing around with computer code, or whatever. Although given the fiasco at the Jamaica station, I'm sure at least a few people were extremely grateful to Mr. First Officer.

I boarded in the first group, being in first class, where a few attempted intruders were deterred. Maybe if there were a "death penalty" for this sort of flagrant selfishness -- ticket cancellation, call up the next stand-by -- it would be less common. There was a short delay (multiplied by a LOT of passengers) while these folks were weeded from the line, and then I was on.

Boarding took a while, then there was a long taxi delay (get delayed off the gate and you pay with heavy interest as you lose your reservation on the runway). We finally took off in darkness at 17:20.

The flight was slow. We descended into less favorable air to reduce turbulence in the middle of the flight, then had to overshoot San Francisco and land from the west due to the winds. Still, other than wanting to get home, it wasn't so bad. It's actually fairly pleasant in first class, even not adjusting for the fact I was in a plane. Plenty of room, all the water you can drink, well-prepared food, frequent attention. Really nice. I even managed to avoid being suffocated by any noxious odor of self-importance from fellow passengers. People behaved civilly to the stewardesses, and were generally courteous to each other. I decided it was a good flight in which to have invested those miles.

My neighbor, in fact, had reason to be grateful -- to the late first officer. He'd flown from Puerto Rico with time to spare only to sit on the runway for two full hours waiting for a gate to open. He boarded well after others, in fact I wonder if the plane had been held just a bit specifically for him. I've been in the same position except that the plane didn't wait and I missed a connection to Europe.

It was raining heavily in San Francisco when we landed, but I called Cara and got BART with an "expectation value" 10 minute delay (trains run every 20 minutes). BART went smoothly, except the squeaky voice over the intercom announcing stops was barely audible, and I only just got out of the train in time at my stop, 24th Street in the Mission, thinking I was still at the preceding Glen Park station. But I did escape in time, and Cara was waiting to pick me up (the air seemed obscenely warm after New Jersey's sub-freezing temperatures), sparing me a two mile run through heavy rain or an indeterminate wait for a bus.

## Wednesday, December 29, 2010

## Saturday, December 25, 2010

### Garmin FIT decoder

I hope anyone reading this is having a fruitful Christmas. I'm personally in New Jersey, where it's cold by San Francisco standards, but excellent weather for running, which I'm finding I can still do despite months of neglect. I know: patience! It's hard, but last year I learned the hard way that going too hard too quickly is a loan which needs to be paid back at a very high rate of interest...

It is a good day for me -- what a great Christmas present this find was, from Japan: a GARMIN FIT decoder library for Perl. In addition to the library, some sample applications are provided: fitdump and fitsed.

So why am I so pleased with these? Well, many reasons, but one is that when uploading files to Strava, I tend to have two problems: either I forget to reset the unit from one day to the next and I end up with "rides" (or "runs") which span multiple days, or I neglect to shut it off when I get on the train or in a car, and end up with motorized segments (this is really good at scoring KOMs, but is, to put it lightly, cheating).

In the first case, it should now be relatively easy to write a short script to fragment FIT files at time gaps exceeding a certain threshold, for example one hour. And I should be able to deal with the second case as well: how hard can it be to determine, with decent accuracy, whether someone is in a car or train?
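The gap-splitting logic is simple enough to sketch even before touching Garmin::FIT. Here's a rough Python sketch of both fixes, assuming the FIT records have already been decoded into (timestamp, speed) pairs; the function names, the simplified record format, and the 22 m/s motorized cutoff are all my own placeholders, not anything from the library:

```python
# Sketch only: assumes each FIT "record" message has been decoded to a
# (timestamp_seconds, speed_m_per_s) pair. The real decoding would be
# done with Garmin::FIT; this just shows the post-processing logic.

def split_at_gaps(records, max_gap_s=3600):
    """Fragment a ride into pieces wherever successive records are
    separated by more than max_gap_s (e.g. forgot to reset the unit)."""
    pieces = []
    current = []
    prev_t = None
    for t, v in records:
        if prev_t is not None and t - prev_t > max_gap_s:
            pieces.append(current)
            current = []
        current.append((t, v))
        prev_t = t
    if current:
        pieces.append(current)
    return pieces

def looks_motorized(piece, speed_limit=22.0, min_fraction=0.5):
    """Crude motorized-segment test: flag a piece if most samples exceed
    a speed no cyclist sustains (22 m/s is roughly 50 mph)."""
    if not piece:
        return False
    fast = sum(1 for _, v in piece if v > speed_limit)
    return fast / len(piece) >= min_fraction

# Two "rides" recorded without resetting the unit, the second on a train:
records = [(0, 8.0), (60, 7.5), (120, 8.2),              # day 1, riding
           (90000, 30.0), (90060, 31.0), (90120, 29.5)]  # next day, train
pieces = split_at_gaps(records)
print(len(pieces))                           # 2
print([looks_motorized(p) for p in pieces])  # [False, True]
```

Splitting first and then testing each piece matters: a ride followed by a train trip home should yield one clean ride plus one flagged fragment. The speed test here is deliberately crude, just a placeholder until something better is tried on real data.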

I have an algorithm idea for this, one I've already proposed to Strava, but I think it's best to hold onto it until I've had a chance to code it up with Garmin::FIT and see how well it works.


## Friday, December 24, 2010

### 4-dimensional mazes

As I noted last time, the maze algorithm doesn't know or care about topography or dimensionality. All it cares about is a list of nodes with each node's initially unconnected neighbors. How the nodes are laid out is post-processing.


One-dimensional mazes aren't so challenging: each node has 2 neighbors. Two directions, for example east and west.

With a 2-dimensional maze with square nodes, each node has 4 neighbors (no cutting corners). Obviously 2-dimensional is more challenging than one-dimensional: there's now the opportunity for dead-ends and multiple path options. Sample directions are north, east, south, and west: two more than the 1-dimensional case.

In three dimensions, each node (assuming cubic nodes) has 6 neighbors. In addition to north, east, south, and west, you can add up and down to the list of directions.

So there's a pattern here. Each time you add a dimension you add two more neighbors and two more directions, the two directions being the opposite of each other. West is the opposite of east, south the opposite of north, down the opposite of up.

It happens we live in a 3-dimensional world, but there's nothing particularly special mathematically about three dimensions. The maze algorithm doesn't care. It's just as happy in 4 dimensions as in 3.

So following the pattern, if we add a dimension, we need to add two neighbors and add two directions. I like "in" and "out" for these directions.
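The pattern reduces to a short loop over axes. Here's a quick Python sketch (my maze code itself is in Scheme, so this is just an illustration, with names of my own choosing) that enumerates a grid node's neighbors in any number of dimensions:

```python
# Neighbors of a node in a d-dimensional grid maze: one step along each
# axis in each direction, so 2*d candidates, minus any that fall outside
# the grid. This is the "two more neighbors per dimension" pattern.

def neighbors(coord, shape):
    """coord: tuple of d indices; shape: tuple of d grid sizes."""
    result = []
    for axis in range(len(coord)):
        for step in (-1, +1):
            c = list(coord)
            c[axis] += step
            if 0 <= c[axis] < shape[axis]:
                result.append(tuple(c))
    return result

# Interior node of a 3 x 3 x 3 x 3 grid: all 8 neighbors (2 per dimension).
print(len(neighbors((1, 1, 1, 1), (3, 3, 3, 3))))  # 8
# Corner node: only 4 of them survive the bounds check.
print(len(neighbors((0, 0, 0, 0), (3, 3, 3, 3))))  # 4
```

The same function handles 1, 2, 3, or 4 dimensions without modification, which is exactly why the maze algorithm doesn't care about dimensionality.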

I wrote my maze program to generate semiconductor structures using the structure generator for semiconductor modeling. But for debugging, it also generates a simple text rendering of the maze.

Here's a one-dimensional maze. It's fairly dull:

+--+--+--+--+--+--+--+--+ +--+--+--+--+--+--+--+--+ 0 1 2 3 4 5 6 7

In on the left, out on the right, no choices to be made in between.

Two-dimensional mazes offer choices. Here you enter on the left, exit to the right:

+--+--+--+--+--+--+--+--+ 7 | | | | + + + +--+--+--+ + + 6 | | | | | + + +--+ +--+--+--+ + 5 | | | | | | + +--+ +--+ + + + + 4 | | | | | | +--+ + + +--+--+--+ + 3 | | | | | + +--+ + + +--+--+ + 2 | | | | | | + + + + +--+--+ +--+ 1 | | | | | | | + + + +--+ + +--+ + 0 | | | +--+--+--+--+--+--+--+--+ 0 1 2 3 4 5 6 7

For 3-d, the structure editor will have no problems (I'll show that in a later post), but for text-art, I'll represent multiple layers like multiple "floors" in a building floor plan. Access from one floor to another is represented with arrows: an upward arrow for access to the floor above, a downward arrow for access to the floor below, or an up-down arrow for access to either.

Here's a 3 × 3 × 3 example, where again you enter on the left and exit on the right:

z = 2 +--+--+--+ 2 |↓ |↓ | + + +--+ 1 | | +--+ + + 0 |↓ |↓ | +--+--+--+ 0 1 2 z = 1 +--+--+--+ 2 |↑ |↑ |↓ | + +--+ + 1 |↓ |↓ | +--+--+--+ 0 |↕ |↓ |↕ | +--+--+--+ 0 1 2 z = 0 +--+--+--+ 2 | |↑ + + + + 1 |↑ |↑ | | +--+--+ + 0 |↑ ↑ |↑ | +--+--+--+ 0 1 2

For four dimensions, I simply extend the floor plan model. In addition to stacking floors one above the other, I stack them in multiple columns. In a node, a right arrow signifies that there's access to the corresponding node in the column to the right, a left arrow signifies access to the corresponding node in the column to the left, while a left-right arrow signifies access to either. With so many choices, up to eight from each node, things get complex quickly. Even for a simple 3 × 3 × 3 × 3 case, which has 81 nodes:

z = 2 +--+--+--+ +--+--+--+ +--+--+--+ 2 | → →| |↓←| → ←| |↓ | ←|↓ | +--+--+--+ +--+--+--+ +--+ +--+ 1 | ↓ | | → ↓ | → ←| | ←| + +--+--+ +--+--+ + +--+ + + 0 |↓ |↓ →| |↓→ | ←| | ←| | +--+--+--+ +--+--+--+ +--+--+--+ 0 1 2 0 1 2 0 1 2 z = 1 +--+--+--+ +--+--+--+ +--+--+--+ 2 |↓ | →|↓ | |↑ |↓↔| →| |↕ |↓←|↕←| +--+ + + + +--+ + + +--+--+ 1 |↓→| ↑ | | ←|↑ |↓ | | | +--+--+ + +--+ +--+ +--+--+--+ 0 |↑→|↑→| →| |↕←| ←| ←| |↓ ↓ | +--+--+--+ +--+--+--+ +--+--+--+ 0 1 2 0 1 2 0 1 2 z = 0 +--+--+--+ +--+--+--+ +--+--+--+ 2 |↑ ↑→| | ↑ | ←| |↑ ↑ |↑ | +--+--+--+ + + +--+ +--+--+ + 1 |↑ | →| | →| |↑←| | ←| | +--+ + + +--+--+--+ + + +--+ 0 | →| →| →| |↑←| ← ←| |↑ | ↑ | +--+--+--+ +--+--+--+ +--+--+--+ 0 1 2 0 1 2 0 1 2

In this example, the entrance is in the upper right "floor" on the left wall. The exit is in the upper middle "floor" on the right wall.

This was done in Scheme. A long time ago, I wrote a 4-dimensional maze generator in Microsoft BASIC. In that case, I made a video game of it. You can fairly clearly project three dimensions onto a 2-dimensional screen, but four dimensions get challenging. So I just represented three: the fourth dimension was hidden.

The trick is: which dimensions should be visible? For coordinates *x*, *y*, *z*, and *u*, I'd render *x*, *y*, and *z*, with *u* hidden. Whether access is available or not in the *u* dimension isn't shown. There'd be a number of options here; for example you could put a symbol on the screen for whether *u*-direction "in" and/or "out" access is open. But instead I allowed the user to "rotate" so the *u* direction is visible.

So to do this, in addition to the standard pivots in three dimensions, I added two additional actions: permute right and permute left.

Suppose *x* is the direction ahead, *y* to the right, and *z* up. Then if I permute right, I shift each coordinate: *y* is ahead, *z* is to the right, and *u* is up. Do it again and *z* is ahead, *u* is right, and *x* is up. A third permute and *u* is ahead, *x* is right, and *y* is up. A fourth permute and I'm back where I started. Initially *u* is hidden, then *x* is hidden, then *y* is hidden, then *z* is hidden. A fourth permute and *u* is hidden again.

Anyway, enough of that for now. Maybe I'll do something about bicycling again soon.

## Monday, December 20, 2010

### semiconducting mazes

Okay, so this is mostly a bike blog, but since I got a new job in October, my time for riding has dropped off substantially. I'm trying to ride into work more often: once or twice per week would be good. But it's 42 miles one-way, so I've got to "make time" for that to happen.

Anyway, in my job, one of the products I use is a structure generator for semiconductor device modeling. So, for example, I can build models for field-effect transistors, bipolar junction transistors, p-i-n diodes, photo-detectors, light-emitting diodes, even static random access memory cells and other small circuits. Fun stuff, really. The models do a remarkably good job of matching reality, even though the reality of semiconductors can be fairly complex.

But when I found myself home sick with a bad cold, between naps I needed an activity to keep myself occupied. It was a good exercise to reacquaint myself with Scheme, the scripting language used by the structure editor. My Scheme was rusty, to say the least. Memories of 6.001 in my distant past.



And the best way to learn a computer language is to do an exercise, and since my job is modeling semiconductor devices, I should model a semiconductor device. But I was feeling poorly, and needed something to cheer myself up, and yet another FinFET simulation, for example, wasn't going to do that.

If Scheme has any strengths, it's in linked lists and recursion. And when I think linked lists and recursion, I think of mazes. From an early age I was fascinated with mazes. The labyrinth scene in The Shining, for example, still haunts me when I think about it.

*close-up of model of the maze, from The Shining by Stanley Kubrick. The algorithm described here would need to be slightly modified to create this one, since it has loops (pairs of nodes connected by multiple paths).*

So I set for myself the task of simulating a semiconductor maze. The thing here is that if you attach an electrode to the starting square and another to the goal square then apply, in simulation, a potential difference, current should almost instantly flow between the two. In other words, the maze will be solved. Would the simulator be able to solve the maze?

Well, the answer is that it does, and remarkably well. Here's a plot of the current density through a simulated maze of doped silicon, with an impurity concentration of 1 million arsenic atoms per cubic micrometer, enough to turn the semiconducting silicon into a decent conductor:

And here's another with more squares:

In each case a positive potential was applied to the left, causing current to flow to the right. Current density is indicated by color: red is the most current, blue is no current. White in the plot is "nothing": electrons can't go there (this isn't quite the same as "vacuum", which I also could have modeled, but it's computationally quicker and mathematically similar to treat the walls of the maze as impermeable nothingness). The current flows only through the maze solution. Since the current was carried by electrons, this means the electrons, which are negatively charged, flow from right to left. They solve the maze backwards, you might say, although from a simulation perspective the direction doesn't matter.

You can even see the current cutting corners. When going straight, it spreads out to use the full width of the Si region. But when it turns a corner, it crowds to the inside to shorten the path.

So how did I create the maze? The algorithm, which is well-established, is simple. I start with a list of nodes plus a "stack" in which I can access only the top element. Basic actions with stacks include "pushing" an element onto the stack and "popping" the top element off the stack.

The nodes are each defined in terms of their neighbors. Neighbors can be either connected or disconnected, but initially each node has all of its neighbors disconnected.

I start with one random node on the left side of the maze. I put the number associated with that node onto the stack. Then I begin my process:

- If the stack is empty, I'm finished. The maze is done.
- If the top node on the stack has any neighbors which as yet have no connections, then pick one at random, connect the top node of the stack with that randomly chosen neighbor, push that neighbor onto the top of the stack, and repeat.
- On the other hand, if the top node of the stack has no unconnected neighbors (neighbors which have no connections, to that node or to any other), then pop the stack and repeat.

That's it! Very simple. The algorithm doesn't care where nodes are located or how they're laid out. The maze could be 2-dimensional, 3-dimensional, or even 4-dimensional. If 2-dimensional, it could be laid out in a square, as hexagons, as triangles, or as irregularly shaped elements. The maze is really a very general concept. It's really just a way of connecting a constrained network such that each element is connected to each other element by exactly one path.
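Those three rules translate almost directly into code. Here's a Python sketch (my implementation is in Scheme, and the helper names here are my own) for the square-grid case:

```python
import random

# Random depth-first maze carving, driven by a stack, exactly as in the
# three rules above. Nodes are coordinate tuples in a rectangular grid.

def generate_maze(shape, start):
    def neighbors(coord):
        out = []
        for axis in range(len(coord)):
            for step in (-1, 1):
                c = list(coord)
                c[axis] += step
                if 0 <= c[axis] < shape[axis]:
                    out.append(tuple(c))
        return out

    connections = {start: set()}  # node -> set of connected neighbors
    stack = [start]
    while stack:                  # rule 1: empty stack means we're done
        top = stack[-1]
        fresh = [n for n in neighbors(top) if n not in connections]
        if fresh:                 # rule 2: connect a random fresh neighbor
            n = random.choice(fresh)
            connections[top].add(n)
            connections.setdefault(n, set()).add(top)
            stack.append(n)
        else:                     # rule 3: dead end, back up
            stack.pop()
    return connections

maze = generate_maze((8, 8), (0, 0))
print(len(maze))                                # 64: every node reached
print(sum(len(v) for v in maze.values()) // 2)  # 63: exactly nodes - 1 passages
```

The spanning-tree property falls right out: 64 nodes always end up joined by exactly 63 passages, so every pair of nodes is connected by exactly one path.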

A side-effect of this algorithm is that, if you begin the process with the start node at the bottom of the stack, then once the goal node is at the top of the stack (eventually it has to be, if the maze has a solution), the stack contains the solution, from bottom to top. My code marks the elements of the stack with a number sequence indicating which nodes are part of the solution, and in what order. However, this isn't relevant to this work: the solutions shown here were generated by the electron transport modeling in the semiconductor device modeler.
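This stack property is easy to demonstrate. Here's a Python sketch (again, my actual code is Scheme, and it marks nodes rather than snapshotting; all names here are my own) of the same carving loop, recording the stack the first time the goal surfaces at the top:

```python
import random

# Depth-first maze carving on a rectangular grid, snapshotting the stack
# the first time the goal is on top: bottom-to-top, that is the solution.

def generate_with_solution(shape, start, goal):
    def neighbors(coord):
        out = []
        for axis in range(len(coord)):
            for step in (-1, 1):
                c = list(coord)
                c[axis] += step
                if 0 <= c[axis] < shape[axis]:
                    out.append(tuple(c))
        return out

    connections = {start: set()}
    stack = [start]
    solution = None
    while stack:
        if stack[-1] == goal and solution is None:
            solution = list(stack)  # bottom-to-top: start ... goal
        top = stack[-1]
        fresh = [n for n in neighbors(top) if n not in connections]
        if fresh:
            n = random.choice(fresh)
            connections[top].add(n)
            connections.setdefault(n, set()).add(top)
            stack.append(n)
        else:
            stack.pop()
    return connections, solution

maze, path = generate_with_solution((8, 8), (0, 0), (7, 7))
print(path[0], path[-1])  # (0, 0) (7, 7)
```

Because each push connects the new node to the old top, consecutive stack entries are always carved passages, so the snapshot is a valid walk from start to goal through the finished maze.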

But once the abstract maze is connected, it needs to be mapped to "real space" for the simulation. So using the constructor functions in the structure editor, I defined polygons for each node such that the node was connected to the appropriate neighbors. Then I defined the semiconductor properties for the interconnected region, specified which models I wanted, and ran the simulation. Actually, constructing the maze took the most computer power. Solving it was quick: well under a minute for the larger of the two mazes shown here.

So all good fun. And it was effective at improving my Scheme skills. After this, designing my normal semiconductor device structures seemed simple in comparison.

## Saturday, December 18, 2010

### Low-Key party

Thanks to all for the great time at the Low-Key Hillclimbs party last night! And thanks to Sports Basement for giving us space to hold it.

It's a lot of work doing the slides for the awards, but I think it's really important to try to recognize all of the great contributions that both riders and volunteers make to the success of the series. Once again, despite not really trying, we raised money for the Lance Armstrong Foundation and the Open Space Trust. Each of these charities does great work, and their missions are more similar than they might appear: I view the Open Space Trust as fighting cancer of the land, while LAF fights cancer of the body. The two are really intertwined.

And a huge thanks to Strava for providing some wonderful prizes. We awarded these with a card game. I was a bit inspired by Charles Ardai's excellent book "Fifty-to-One", in which a very simple card game was played for very high stakes. We played a series of games which were closer to "one-to-one", until only two players were left standing. Ron Brunner and Judy Colwell were the most skillful, and took the prizes.

Low-Key and Strava are very much aligned spiritually. Both allow the "average" rider to compete, with a focus on hills, against others of similar speed. One of the big motivations for starting Low-Keys was to provide an on-line archive of climbing times over a range of local climbs, measured under controlled conditions, as previously climbing times were generally passed on by oral tradition. Strava of course takes this to a new level.

We announced the schedule for next year, and two highlight climbs are Mix Canyon Road in Vacaville and Palomares Road near Sunol. I've not ridden the former, but by all accounts it's one of the most difficult climbs in the Bay Area (although Bohlman-On Orbit-Bohlman actually rates higher with my scoring scheme). Nobody would describe Palomares as particularly difficult, but the route profile provides plenty of opportunity for climbers to test each other. Shorter climbs are sometimes tougher than longer ones, as they provide no opportunity for recovery: you've simply got to push the pain envelope every second of the way.

Another highlight: the vote for the 2011 series slogan. The winner, in a very close vote (24-19), was "Veni, Vedi, Ascendi". Second place was another excellent slogan: "Rise and Climb". I love both of these, but I wasn't disappointed in the winning slogan (on which I didn't vote), as I'd submitted it. It was the first time ever one of my slogans came out on top, and it was the only slogan I'd submitted which made it past the first round of on-line voting. (*You don't need to see our permit* crashed and burned... too bad.)

But shorter term, as we've done a few times before, we'll be organizing the *Megamonster Enduro Ride* this winter: on 12 February 2011, weather permitting. This ride is the work of Kevin Winterfield, who organizes it as part of an annual trip out here from his present home in Connecticut. How great is that?

In case you missed the fun, or had difficulty seeing the slides in the bright light, I've put copies here:

Now I just need to set up the web pages for next year... Expect the design to be similar to this year: I'm finally getting the bugs out of the 2010 Perl code!

## Tuesday, December 14, 2010

### re-cycling the Marin Headlands

Back in April I lamented the closing of the Marin Headlands for construction. Well, the construction was finally completed last month, and Sunday for the first time since then I rode the loop.

I've been sick for two weeks now: sniffling, hacking, and coughing has been epidemic at work, and after Thanksgiving it was my turn in line. So I've been feeling sort of crappy, riding as I'm able. Yet yesterday I motivated myself to try a loop of the challenging and beautiful Headlands. The last time I'd ridden to the Hawk Hill summit was September, when it had been unpaved, serene, and illegal. The chance to have ridden it without car traffic had been too much to resist. Now the pavement is done and yesterday, fearful of what I would observe, I set off.

The result: the pavement is pristine, as expected, which is nice, of course. And my fears of "improvement" were unfounded: there's been little "improvement" to degrade what had been, with the exception of tourist vehicles, a spectacular gem of a road so close to the city of San Francisco. There's a prominent traffic circle at the intersection of McCullough and Conzelman, a few pull-outs presumably so slower moving (???) cars can allow others to pass, obviously improved guard rails, and sand bags on McCullough which seem to serve as a buffer for vehicles which drive off the road. All of this seems like the sort of infrastructure you'd expect to see on Alpine roads taken by long-distance travelers. The traffic on Conzelman, on the other hand, is virtually all tourists driving their rental SUVs up the hill so they can admire the view and snap digital photos. And on McCullough most of the traffic is bicycles: there's not much reason for cars to drive there.

So my suspicion that this was all a pork-fest for the former Madame Speaker remains. As a result we all get to use those spectacular roads one less summer of our lives, and the economy is driven incrementally further into its chasm of debt. Oh, I forget, it's "stimulus". The new guard rails will generate all sorts of revenue for future generations, spurring a rebirth of American productivity.

Anyway, the loop is much nicer in this season of fall/winter than it is in the summer. The weather isn't that much different, and the tourists are much reduced. And nothing spoils a good road like cars.

Which is why the proper approach is to close it to cars. Create a pedestrian lane on the "view" side, a bidirectional bike lane on the inland side, and let people enjoy the beautiful, short hike to the summit and back. Oh, people would probably whine and complain about disabled access, and I'm sympathetic, but if disabled access requires maintaining vehicular access, then we should just convert the entire National Park trail system into paved vehicular roads. Such an investment would truly be "stimulus": stimulating a little health and exercise, and substantially improving the tranquility of what should be a very wonderful place.

*Hawk Hill in September, during repaving.*

## Saturday, December 4, 2010

### Caltrain weekend baby bullet schedule

*Another day on the Caltrain bike car (StreetsBlog)*

Back in February I proposed a weekend train schedule for Caltrain, one which would make the train an attractive option for those traveling along the Peninsula on weekends. That schedule wasn't based on any estimation of actual resource constraints. Rather it was what I expected it would take to start to make a dent in the car traffic on 101 every Saturday and Sunday. That proposal was for express trains traveling north and south each hour, with two limited trains taking care of the secondary stations, also each hour north and south. Without service at least comparable to this, the kind of service typical of rail systems around the world (excluding ours, where we've sold our souls to the auto industry), those willing and able to drive will be hard pressed to claim the train is the preferred alternative for travel subject to external time constraints.

*I'm sitting next to Ammon Skidmore at a JPB meeting in 2009, where I argued for better weekend service. It was John's petition which really got things rolling, however. (Richard Masoner)*

Well, John Murphy's petition and advocacy seem to have gotten the Joint Powers Board (JPB) over its considerable activation barrier, and they've finally agreed to do an "experiment" of weekend "baby bullet" (express train) service.

Here's the new schedule.

Basically a train leaves San Jose @ 10:35 am and arrives in San Francisco @ 11:39, 32 minutes faster than it would have had it been a local (there's a cost of 2 minutes per stop; the baby bullet skips 15 stops, which should save only 30 minutes, but they apply some extra padding at the end of the local train schedule to improve on-time statistics, probably figuring anyone taking a local doesn't care about a few extra minutes here or there). The train then leaves again at 11:59, returning to San Jose where it arrives at 1:03 pm, the same 64 minute travel time it spent traveling northward. The exercise is repeated with a train leaving San Jose at 5:35 pm, spending 6:39 pm to 6:59 pm in San Francisco, then returning to San Jose at 8:03 pm.
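The stop-time arithmetic above is easy to check; here's a quick Python sketch (the helper function is mine, and the 2-minutes-per-stop cost is the rough estimate from above):

```python
from datetime import datetime

def travel_minutes(dep, arr, fmt="%H:%M"):
    """Minutes between two same-day clock times given as 24-hour strings."""
    delta = datetime.strptime(arr, fmt) - datetime.strptime(dep, fmt)
    return int(delta.total_seconds() // 60)

bullet = travel_minutes("10:35", "11:39")   # express trip: 64 minutes
naive_local = bullet + 15 * 2               # 15 skipped stops at ~2 min each
# naive_local comes to 94 minutes, yet the local is scheduled 32 minutes
# slower (96 total): the extra padding at the end of the local schedule
# flatters the on-time statistics.
print(bullet, naive_local)
```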

Now I'm sure this sort of schedule is convenient for train crews. This whole exercise fits nicely into a single 10-hour shift with an unhurried 4-hour lunch break. But does it do anything to help passengers? Well, I suspect somewhat, but if I'm going to the peninsula I want to get there much earlier than the post-noon arrival times of the "morning" southbound, and if I lived on the Peninsula and want to spend a day in San Francisco I'd probably want to return home later than 6:59 pm: just one extra hour would give me time to get an early dinner, for example.

So really this will do nothing to substantially increase weekend demand for Caltrain. However, at least I hope it will demonstrate a latent demand for faster service, as passengers who would have taken the local take the baby bullets instead, resulting in even emptier local trains during these brief time periods. I'd love to be able to take an express south (doesn't much matter where, but Palo Alto works), do a ride in the Peninsula hills, then take another back home. And maybe that's enabled even with the late start. But none of the weekend group rides start much after 10 am, so I would have been oh-so-much happier had that early express train pulled out at 8 am than at a minute before noon.

Another option is to ride south from the City then get the afternoon northbound back in time for dinner. That seems like something worth trying once or twice.

So I'm glad things are, glacially, moving in the correct direction, but it's frustrating knowing that at the rate Caltrain changes, I'll be dead before we get anywhere close to the sort of rail service any resident of Europe takes for granted.

## Wednesday, December 1, 2010

### cat feeder FAIL

I live with three cats. Of the three, the older of the two males has food issues. He was found, nearly starved, as a kitten, and apparently learned from that harsh lesson to never take the future availability of food for granted. He has a special weakness for crunchies, which he loves. These have the further disadvantage of being calorie dense: he can consume a lot more of them before reaching stomach capacity.

So to keep him at a healthy weight it's important to control how much he gets. With this in mind, before leaving for a three-day mini-vacation back in October (which included some fantastic riding; okay, mandatory cycling content), it was time to break out the automated feeder.

The feeder has a reservoir of food connected to the outside world with a tube containing a rotating screw. The screw turns on when programmed to do so, channeling food down the tube, where it pushes open a one-way door and then falls into a feeding tray, where it is quickly consumed by enthusiastic kitties.

Normally programming this wonder of mechanical engineering is absolutely something I want to do at least a day early. It's important to make sure everything is working before trusting it with the cats' well-being. For example, sometimes it gets into a mode where it ignores a programmed dispense cycle. This is solved with a hard-reset followed by reprogramming, but obviously the cats don't know how to do this. Not yet, anyway (the female is quite clever, however: I wouldn't put it past her).

So after doing the hard-reset, I programmed it with two feeding cycles plus a quick "test cycle" to make sure it was working. The test cycle rotated the dispensing screw for the 10 seconds I requested, so I figured the unit was in working order. Here's what I wanted to program:

- on: 6:00 am
- off: 6:03 am
- on: 5:00 pm
- off: 5:04 pm

So a three-minute "morning snack" followed by a four-minute "evening meal". However, I made a mistake, something I didn't catch in any of the numerous times I reviewed my program:

- on: 6:00 am
- off: 6:03 am
- on: 6:00 pm
- off: 5:04 pm

The result? It all ended in tears:

Needless to say, the older male was a happy furry. And the several piles of vomited crunchies attested to what happens when a stomach jammed with dry crunchies is supplemented with water from the drinking fountain: absorb, expand, eject.

So note to self: always beta-test code, even if it's something as simple as a cat feeder.
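The moral lends itself to code. Here's a minimal Python sketch of the sanity check that would have caught my mistake; the minutes-since-midnight encoding and function names are hypothetical, nothing like the feeder's actual interface:

```python
def hm(hours, minutes):
    """Clock time as minutes since midnight (24-hour convention)."""
    return 60 * hours + minutes

def validate_schedule(cycles, max_minutes=10):
    """Check (on, off) dispense cycles; return a list of complaints."""
    errors = []
    for i, (on, off) in enumerate(cycles):
        if off <= on:
            errors.append(f"cycle {i}: off time doesn't follow on time")
        elif off - on > max_minutes:
            errors.append(f"cycle {i}: dispensing for {off - on} minutes")
    return errors

intended = [(hm(6, 0), hm(6, 3)), (hm(17, 0), hm(17, 4))]   # what I wanted
actual   = [(hm(6, 0), hm(6, 3)), (hm(18, 0), hm(17, 4))]   # what I entered
assert validate_schedule(intended) == []
assert validate_schedule(actual) != []   # the second cycle is flagged
```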

## Sunday, November 28, 2010

### Low-Key Hillclimbs step response and CTL time constant

This year I was able to ride weeks 3, 5, 6, 7, 8, and 9 of the Low-Key Hillclimbs (I coordinated weeks 1-2, and week 4 was canceled due to concerns over rain).

I also started a new job the Monday following the week 3 climb, which reduced my training to essentially the Low-Key Hillclimbs only (I also got in 2 long Sunday rides and one 45-mile commute to work during this period). Before this, I'd had a week's vacation in Italy with lots of riding, embedded within a longer period when I had weekend rides and weekday training sessions.

My scores clearly suffered from the neglect as the series progressed. I did a regression of the scores, which are normalized so that the median rider time (sex-adjusted) scores 100. Here's the result:

Curiously the time constant came out within 1% of the 42-day CTL time constant.

Anyway, not much significance here, and more sources of variability than I can count, but I thought it was amusing. It appears the 42 day time constant does a fairly good job of matching the rate at which I lose fitness. Note this is my fitness trajectory under my current level of "training": a hard hillclimb effort one day a week, four weeks out of five, with the occasional long ride.

Also on the plot I show an asymptotic projection to my week 3 result with the same time constant. I'd need to start training at the level I was at pre-new-job to follow this trajectory. Not enough time before the 1 Jan San Bruno Hillclimb, even were I to get in those mid-week training rides again. And that would be hard to pull off.
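For reference, the regression model is presumably simple exponential relaxation toward an asymptote. Here's a Python sketch with made-up numbers, just to illustrate the 42-day time constant (the actual fitted values aren't reproduced here):

```python
import math

def fitness(t, f0, f_inf, tau=42.0):
    """Score relaxing exponentially from f0 toward asymptote f_inf,
    with time constant tau in days (the assumed regression form)."""
    return f_inf + (f0 - f_inf) * math.exp(-t / tau)

# Hypothetical scores: starting at 110, decaying toward 100.
# After one time constant (42 days), ~63% of the gap is gone.
f0, f_inf = 110.0, 100.0
gap_closed = (f0 - fitness(42.0, f0, f_inf)) / (f0 - f_inf)
assert abs(gap_closed - (1.0 - math.exp(-1.0))) < 1e-12
```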


## Saturday, November 27, 2010

### Comparing the Terrible Two, Climb to Kaiser, and Death Ride: Peak Climbing Segments

Today I'll take another look at the route data from the Death Ride, Climb to Kaiser, and the Terrible Two.

Here for each climb I first map the data onto a 50 meter grid, then slightly smooth it using an estimated 15 second time constant (the same as when I calculate a climb rating), then map it back to a 50 meter point separation (since smoothing disturbs this at the beginning and end of the ride), then calculate the total climbing (with zero threshold) between each pair of points on the route. For each segment length (the number of contiguous points minus one, multiplied by 50 meters) I find the segment which maximizes this total climbing.

The only efficiency trick is that for a given segment length I don't need to calculate each sum fresh: if I have the climbing for the segment from points 1000 to 2000 (length 50 km) and I want the climbing from points 1001 to 2001, I take the previous sum, subtract the climbing from 1000 to 1001, and add the climbing from 2000 to 2001. Similarly, if I have the climbing from points 1 to 2000 and I want the climbing from points 1 to 2001, I need only add the climbing from 2000 to 2001, saving a lot of time.
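In code the trick is just a sliding window over per-interval climbing. A Python sketch (not my actual analysis code; the names are illustrative):

```python
def max_climb_per_length(alt, spacing=50.0):
    """For each segment length (in points), find the maximum total climbing
    over all contiguous windows of that length, reusing each previous
    window's sum (the trick described above).

    alt: list of (smoothed) altitudes at uniform point spacing in meters.
    Returns a dict mapping segment length in meters -> max climbing.
    """
    # climbing between consecutive points, with zero threshold
    climb = [max(0.0, b - a) for a, b in zip(alt, alt[1:])]
    result = {}
    for n in range(1, len(climb) + 1):          # window of n intervals
        total = sum(climb[:n])                  # first window, computed fresh
        best = total
        for i in range(n, len(climb)):
            total += climb[i] - climb[i - n]    # slide: add one end, drop the other
            best = max(best, total)
        result[n * spacing] = best
    return result
```

Each slide of the window is O(1), so a full sweep over one segment length is linear in the number of points rather than quadratic, which matters when routes run to thousands of points.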

Anyway, here's the result for the three climbs:

So up to around 30 km, the Terrible Two has the steepest segments, then from 30 km to 160 km the Climb to Kaiser has the greatest climbing density, then the Death Ride catches Climb to Kaiser. They're fairly well matched up to the point where the Death Ride ends at 200 km, then Climb to Kaiser adds a bit more until it comes to a merciful halt at 250 km. The Terrible Two matches Climb to Kaiser over the same distance, then overtakes it with a sheer volume which lasts out to 314 km.

Also of substantial interest is where these peak climbing segments fall. So for each amount of climbing, I plotted a band showing where the shortest segment containing that amount of climbing is. First, the Death Ride:

So the steepest short stuff is on Ebbetts East. For a bit longer, it moves to Monitor East and then to Monitor West. Beyond 940 meters or so you need combinations: first the Ebbetts pair, then the Monitor pair, then finally (on the range of the plot) to Monitor-Ebbetts-Ebbetts.

Here's the same analysis applied to Climb to Kaiser:

Obviously the nastiest grades are at Big Creek. For more climbing you go to Tollhouse. And for more still, start at Big Creek and go all the way to Kaiser summit. If you run out of climbing here (Big Creek is preceded by a descent) you go from Tollhouse and continue past Shaver Lake.

Finally, Terrible Two:

For shorter segments, Gualala and Fort Ross compete for the steepest. For longer, you go to the Geysers. If you run out of room there, the repeated climb-descent segments of Skaggs Springs are the densest climbing. After this you get into more extended combinations.

Anyway, I like this analysis, and will likely add it to the pages for future Low-Key events.


## Tuesday, November 23, 2010

### Comparing the Terrible Two, Climb to Kaiser, and Death Ride: Profiles

Last time I used grade histograms and total climbing versus climbing threshold to compare three really solid one-day riding challenges: the Death Ride, Climb to Kaiser, and the Terrible Two.

What I left out is sort of the obvious: the route profiles. Every big ride shows its route profile, and they all tend to sort of look the same: up and down. So to compare the three, I plotted them on the same axes. I use km for distance and meters for altitude:

That one's worth more than the measly 100 kpixels shown here, so if you click on the plot you should see a higher resolution version.

Anyway, the Death Ride and Climb to Kaiser look sort of imposing here. The Terrible Two is almost lost under the giant altitudes of those other two rides. Climb to Kaiser is perhaps most striking of the three, starting down in the lowlands before rising to and beyond the lofty altitudes of the Death Ride. Indeed, Climb to Kaiser is a very special ride, one I often recommend to those who think the Death Ride is the best thing since the invention of the cable-actuated derailleur.

Next time I'll introduce yet another way to plot the altitude data: total climbing versus distance, where for each distance the ride segment was chosen to maximize the climbing.


## Monday, November 22, 2010

### Comparing the Terrible Two, Climb to Kaiser, and Death Ride

First, a bit about the climbing algorithm. Just two paragraphs, I promise. Then on to good stuff.

I just described an algorithm by which you can calculate total climbing from a route profile with a given "climbing threshold" designed to both eliminate small bumps which can be coasted over, but more importantly to get rid of the small altitude fluctuations which occur with "noise" in both barometric and GPS-based altimeters. I proposed that a 5 meter threshold worked well, but that there was no unique answer for what is the "true" climbing of a route.

Well, that algorithm does work well for a relatively small climbing threshold, but when you crank the threshold up to the size of the largest hills on a ride, it can be fooled. For this post, I thus added an extra step to the process, in which peaks closer than the climbing threshold from the minimum altitude on the ride, and valleys closer than the threshold from the maximum altitude on the ride are both pruned. This may not be perfect, either, but suits the purpose of this post.
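The core threshold idea can be sketched as a simple hysteresis loop. This minimal Python version ignores altitude reversals smaller than the threshold, but omits the extra peak/valley pruning step just described:

```python
def total_climbing(alt, threshold=5.0):
    """Total climbing, ignoring altitude reversals smaller than `threshold`
    meters. A simple hysteresis sketch of the idea, not the full algorithm."""
    total = 0.0
    ref = alt[0]          # last "committed" altitude (end of previous leg)
    peak = alt[0]         # running extreme since the last commit
    climbing = None       # direction state: True = climbing, False = descending
    for a in alt[1:]:
        if climbing is None:
            if abs(a - ref) >= threshold:     # first move beyond the threshold
                climbing = a > ref
                peak = a
        elif climbing:
            if a > peak:
                peak = a                      # still climbing: extend the peak
            elif peak - a >= threshold:       # genuine reversal: commit the climb
                total += peak - ref
                ref, peak, climbing = peak, a, False
        else:
            if a < peak:
                peak = a                      # still descending: extend the valley
            elif a - peak >= threshold:       # reversal upward: commit the descent
                ref, peak, climbing = peak, a, True
    if climbing:
        total += peak - ref                   # close out a final climb
    return total
```

For example, a profile that goes 0, 10, 8, 20, 0 with a 5 meter threshold counts as a single 20 meter climb: the 2 meter dip in the middle is below the threshold and gets absorbed.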

That purpose is to compare three rides, each of which is considered a major climbing challenge for Northern California cyclists: the Death Ride, Climb to Kaiser, and the Terrible Two.

I've done each of these rides at least twice, and so have a special affection for each of them. They each have their own unique character, each their own brand of challenge.

Total climbing plotted versus climbing threshold provides a certain signature to the nature of a given route. So I'll do that here for these three climbs, where in each case I downloaded data from recent versions of the rides from Garmin Connect:

Impressive, all three. Each claims to have the most climbing for some range of climbing thresholds. With a low threshold, including my recommended value of 5 meters (indicated with the dotted line on the plot), the Terrible Two comes out ahead. Indeed, the Terrible Two is well named, presenting the rider with a series of extremely steep, challenging climbs, most notably Skaggs Springs Road out to Highway 1, then, to seal the deal, Fort Ross Road back inland. Yet none of these climbs is particularly long. The net climbing drops to zero at around 832 meters, the Geysers.

Now consider the Climb to Kaiser. That scores second with a 5 meter threshold. But at around 25 meters it fades behind the Death Ride: Climb to Kaiser has its share of bumps. Then it fades more until it hits a wonderful plateau near 400 meters, just enough to filter out the descent-climb at Big Creek Road. Beyond that it's clear sailing all the way out to 2695 meters, or 8840 feet, the altitude gained between the start and Kaiser Pass. This is what Kaiser is famous for: the sheer enormity of the altitude change between Clovis and the turn-around.

The Death Ride is the last of the three climbs at the 5 meter threshold. But where it excels is between approximately 70 meters and 900 meters. That's what the Death Ride is: a series of long, steady climbs followed by long, steady (and fast!) descents.

This plot compares the magnitude of climbing on the routes, but it doesn't directly say anything about grade. For that, I go to my climbing histogram, where I plot the net climbing at or above each given grade for each of the three climbs.

This plot is less ambiguous. It's clear that at dishing out the steep, the Terrible Two is the terrible winner. It's a solid 5% steeper at each given total climbing point than the Death Ride over much of the plot. Those are five painful percent. Climb to Kaiser delivers its dose of nasty at Big Creek. That's about as tough as it gets in this sort of ride, but it's just one climb, while the Terrible Two dishes out the pain multiple times throughout its 200 miles.

Of course, Death Ride supporters will always say the altitude is the big factor there. And it is a big factor. Even though Climb to Kaiser gets higher than the Death Ride, the Death Ride stays at over 4000 foot altitude throughout, while the Climb to Kaiser just visits for part of the day. But Climb to Kaiser has terrible heat in the valley, ironically providing some of the greatest difficulty on the flattest portion of the ride. And while not up to Fresno's standards, Terrible Two is famous for its heat as well, subjecting riders to the heat for more of the course than Kaiser where it's really only the final 25 miles after riders drop from the sky.

So I stand behind the numbers: Terrible Two first, Climb to Kaiser second, and Death Ride third. All three are tough, though: it would be a terrible mistake to take the Death Ride for granted.

I just described an algorithm by which you can calculate total climbing from a route profile with a given "climbing threshold", designed to eliminate small bumps which can be coasted over and, more importantly, to get rid of the small altitude fluctuations which come from "noise" in both barometric and GPS-based altimeters. I proposed that a 5 meter threshold worked well, but that there was no unique answer for what the "true" climbing of a route is.

Well, that algorithm does work well for a relatively small climbing threshold, but when you crank the threshold up to the size of the largest hills on a ride, it can be fooled. For this post, I thus added an extra step to the process, in which peaks closer than the climbing threshold from the minimum altitude on the ride, and valleys closer than the threshold from the maximum altitude on the ride are both pruned. This may not be perfect, either, but suits the purpose of this post.
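That extra step can be sketched in Python. This is my own code under my own naming, not the post's implementation, and it makes a single pass over the profile:

```python
def prune_extremes(alt, threshold):
    """Drop peaks within `threshold` of the ride's minimum altitude,
    and valleys within `threshold` of its maximum; endpoints are kept.
    A sketch of the extra pruning pass described above."""
    lo, hi = min(alt), max(alt)
    out = [alt[0]]
    for prev, cur, nxt in zip(alt, alt[1:], alt[2:]):
        is_peak = cur > prev and cur > nxt
        is_valley = cur < prev and cur < nxt
        if is_peak and cur - lo < threshold:
            continue  # a "peak" barely above the ride's low point: prune
        if is_valley and hi - cur < threshold:
            continue  # a "valley" barely below the ride's high point: prune
        out.append(cur)
    out.append(alt[-1])
    return out
```

For example, with a threshold of 5, a small bump near the route's lowest altitude is discarded while genuine peaks survive: `prune_extremes([0, 100, 2, 3, 1, 99, 98, 100, 0], 5)` drops the peak at 3 and the valley at 98.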

That purpose is to compare three rides, each of which is considered a major climbing challenge for Northern California cyclists: the Death Ride, Climb to Kaiser, and the Terrible Two.

I've done each of these rides at least twice, and so have a special affection for each of them. They each have their own unique character, each their own brand of challenge.

Total climbing plotted versus climbing threshold provides a certain signature to the nature of a given route. So I'll do that here for these three climbs, where in each case I downloaded data from recent versions of the rides from Garmin Connect:

Impressive, all three. Each claims to have the most climbing for some range of climbing thresholds. With a low threshold, including my recommended value of 5 meters (indicated with the dotted line on the plot), the Terrible Two comes out ahead. Indeed, the Terrible Two is well named, presenting the rider with a series of extremely steep, challenging climbs, most notably Skaggs Springs Road out to Highway 1, then, to seal the deal, Fort Ross Road back inland. Yet none of these climbs is particularly long. The net climbing drops to zero at around 832 meters, the Geysers.

Now consider the Climb to Kaiser. That scores second with a 5 meter threshold. But at around 25 meters it fades behind the Death Ride: Climb to Kaiser has its share of bumps. Then it fades more until it hits a wonderful plateau near 400 meters, just enough to filter out the descent-climb at Big Creek Road. Beyond that it's clear sailing all the way out to 2695 meters, or 8840 feet, the altitude gained between the start and Kaiser Pass. This is what Kaiser is famous for: the sheer enormity of the altitude change between Clovis and the turn-around.

The Death Ride is the last of the three climbs at the 5 meter threshold. But where it excels is between approximately 70 meters and 900 meters. That's what the Death Ride is: a series of long, steady climbs followed by long, steady (and fast!) descents.

This plot compares the magnitude of climbing on the routes, but it doesn't directly say anything about grade. For that, I go to my climbing histogram, where I plot the net climbing at or above each given grade for each of the three climbs.

This plot is less ambiguous. It's clear that at dishing out the steep, the Terrible Two is the terrible winner. Over much of the plot, it's a solid 5% steeper at each given total climbing point than the Death Ride. Those are five painful percent. Climb to Kaiser delivers its dose of nasty at Big Creek. That's about as tough as it gets in this sort of ride, but it's just one climb, while the Terrible Two dishes out the pain multiple times throughout its 200 miles.

Of course, Death Ride supporters will always say the altitude is the big factor there. And it is a big factor. Even though Climb to Kaiser gets higher than the Death Ride, the Death Ride stays above 4000 feet altitude throughout, while the Climb to Kaiser just visits for part of the day. But Climb to Kaiser has terrible heat in the valley, ironically providing some of the greatest difficulty on the flattest portion of the ride. And while not up to Fresno's standards, the Terrible Two is famous for its heat as well, subjecting riders to it for more of the course than Kaiser, where it's really only the final 25 miles after riders drop from the sky.

So I stand behind the numbers: Terrible Two first, Climb to Kaiser second, and Death Ride third. All three are tough, though: it would be a terrible mistake to take the Death Ride for granted.

Labels:
analysis,
climb ratings,
Climb to Kaiser,
Death Ride,
Terrible Two

## Friday, November 19, 2010

### testing the corrected total climbing algorithm

Okay, so déjà-vu time. Once again I exercise my total climbing algorithm, except this time doing it right.

Here's the ride from last Sunday again, where I plot total climbing versus the climbing threshold. I've "greyed out" the result from the flawed version of the algorithm. The difference is most evident for the larger values of the climbing threshold, where there's more pruning to be done. As long as pruning was on isolated segments, the old algorithm did okay. It's when multiple sequential climbs and descents were clipped that things went sour.

Now I look once again at the data from my ride. I plot here only the data from after I first crossed the west peak of Mt Tam. Here's the result using a 5 meter threshold, compared to the full measured profile:

Next, the 15 meter threshold. This did okay last time, and with the improved clipping, it does even better. 15 meters is close to the 50 foot threshold I believe was used with the old Avocet 50. However, it does merge the two climbs approaching the west peak, and when riding, those really do feel like two climbs separated by a descent, so I'd prefer they not be merged. Points to 5 meters. However, unlike the last time, the 15 meter algorithm is doing much better on the climbs in Sausalito and San Francisco.

So finally the 50 meter threshold. This one is clearly merging quite a lot. It has made substantial progress with the corrected algorithm, but it still simply fails to give adequate recognition to the repeated onslaught of small steep hills San Francisco likes to dish out if you're not careful in route selection (and I have a habit of "neglecting" to avoid the steep stuff).

Obviously this time I checked all of these using the reverse direction as well. Descending in the reverse direction should equal climbing in the forward. And indeed, while the two directions don't necessarily result in the identical selection of points, they do always result in the same total climbing. In some cases there are multiple points at the same altitude, and the direction may affect how the ties are broken.

So there it is. I wish I hadn't made that mistake earlier, but I'm prone to mistakes. Sigh.


## Thursday, November 18, 2010

### total climbing and descending algorithm: correct version

Last time I demonstrated how important it is to take some care in pruning points to simplify the profile while conserving climbs which are in excess of the minimum threshold. Before that, I'd proposed an algorithm to calculate total climbing and descending, but I was sloppy. Nevertheless, even my flawed algorithm demonstrated that a climbing threshold of 5 meters was desirable.


There's one key thing I was missing in my previous algorithm, and that's to make sure pruned segments are bounded on both the high and low side by adjacent points. So here's my revised approach.

First, merge monotonic segments, as follows:

1. Start with the first point.
2. Until the following two segments are either climb-descend or descend-climb, or until the following point is the final point in the ride, delete the following point.
3. Move on to the next point, and repeat the previous step if there are at least three points past the current point.

So now the profile consists of uphill segments alternating with downhill segments (or perhaps just a single flat segment if there's no climbing or descending in the route). Next we prune it down:

1. Start with the first point.
2. Check to make sure there are at least four points starting with the current point; if not, we're done.
3. Of the four points starting with the current point, if the difference between the middle two is less than the climbing threshold, and neither of the middle two points is higher than both the first and last points nor lower than both, delete the middle two points and back up two positions (or to the start if fewer than two points precede the current position). If there was no deletion, move to the next point.
4. Repeat from step 2.

There we go: that's it. An example, where the threshold is 5, follows. The current point will be marked in **green**; a point about to be deleted will be marked in **red**. We start at the first point:

**1**, **2**, 4, 3, 6, 2, 7, 6, 6, 4, 6, 1

The next two segments are both climbing, so we delete the next point:

**1**, 4, 3, 6, 2, 7, 6, 6, 4, 6, 1

Now we have a climb-descend combo, so we move to the next point:

1, **4**, 3, 6, 2, 7, 6, 6, 4, 6, 1

Now a descend-climb, so next point:

1, 4, **3**, 6, 2, 7, 6, 6, 4, 6, 1

Climb-descend, so next point:

1, 4, 3, **6**, 2, 7, 6, 6, 4, 6, 1

Descend-climb, so next point:

1, 4, 3, 6, **2**, 7, 6, 6, 4, 6, 1

Again, to the next point:

1, 4, 3, 6, 2, **7**, **6**, 6, 4, 6, 1

7-6-6 doesn't have both a climb and a descent, so the middle point goes:

1, 4, 3, 6, 2, **7**, **6**, 4, 6, 1

7-6-4 lacks a climb, so the middle point gets pruned:

1, 4, 3, 6, 2, **7**, 4, 6, 1

Now we have a 7-4-6 descend-climb, so next point:

1, 4, 3, 6, 2, 7, **4**, 6, 1

4-6-1 is good, and these are the final three points, so we're done with the first part. Now we go to the second part... back to the first point:

**1**, **4, 3**, 6, 2, 7, 4, 6, 1

The first four points are 1-4-3-6. The middle two points are encapsulated by the outer two, and the difference is less than 5, so they go. Normally we'd back-step twice at this point, but we're at the first point already, so there's nowhere to go:

**1**, **6, 2**, 7, 4, 6, 1

Again the points are within 5, and neither is either the largest or smallest of the four points starting with the current point, so again the middle two get snipped:

**1**, 7, 4, 6, 1

Here the 7 and 4 are less than 5 apart, but the 7 is the largest of the four, so we don't want to delete it. So we move up:

1, **7**, **4, 6**, 1

Here the 4 and 6 are less than 5 apart, and neither is larger than the 7 or less than the 1, so they go, and we back up:

**1**, 7, 1

There are fewer than four points left starting with the current point, so we're done.

So from those original points:

1, **2, 4, 3, 6, 2,** 7, **6, 6, 4, 6,** 1

All we ended up with in the end was the climb from 1 to 7 and the descent back. We could run the points in the opposite order and get the same result.
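The two-part procedure above can be sketched in Python. This is my own code and naming, not the post's implementation; it follows the merge-then-prune steps as described:

```python
def total_climbing(alt, threshold):
    """Corrected total climbing/descending via two-part pruning
    (a sketch of the procedure described above; names are mine)."""
    pts = list(alt)

    # Part 1: merge monotonic segments so the profile strictly
    # alternates between climbs and descents.
    i = 0
    while i + 2 < len(pts):
        a, b, c = pts[i], pts[i + 1], pts[i + 2]
        if (b > a and b > c) or (b < a and b < c):
            i += 1          # climb-descend or descend-climb: move on
        else:
            del pts[i + 1]  # not a strict peak or valley: prune it

    # Part 2: in each four-point window, delete the middle pair when the
    # two middle points differ by less than the threshold and neither is
    # higher (or lower) than both of the window's end points; after a
    # deletion, back up two positions.
    i = 0
    while i + 3 < len(pts):
        a, b, c, d = pts[i:i + 4]
        b_ext = (b > a and b > d) or (b < a and b < d)
        c_ext = (c > a and c > d) or (c < a and c < d)
        if abs(b - c) < threshold and not b_ext and not c_ext:
            del pts[i + 1:i + 3]
            i = max(i - 2, 0)  # back up two positions
        else:
            i += 1

    climbing = sum(max(y - x, 0) for x, y in zip(pts, pts[1:]))
    descending = sum(max(x - y, 0) for x, y in zip(pts, pts[1:]))
    return pts, climbing, descending
```

Run on the worked example, `total_climbing([1, 2, 4, 3, 6, 2, 7, 6, 6, 4, 6, 1], 5)` reduces the profile to 1, 7, 1: a climb of 6 and a descent of 6.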

## Wednesday, November 17, 2010

### total climbing and descending algorithm: FAIL

In my last few blog posts, I proposed a total climbing algorithm. Now to tell the truth, I didn't just invent this for the first time now -- years ago I wrote a program to decode files from the Specialized pBrain, a rather well-designed altitude-recording cycling computer which actually cost more over 10 years ago than an Edge 500 does today. But having downloadable altitude profiles was so novel, I considered it worth every penny, at least until the bike I had it mounted to was stolen. I lost track of that old source code, so I basically started from scratch to facilitate generating climbing statistics for the Low-Key Hillclimbs.

Anyway, I was reviewing the plots I posted last time and I realized the algorithm I just described was flawed. You can see it most clearly in the 50 meter threshold plot:

Ouch -- there's a bunch of low-lying fruit which remains unpicked in the 90-95 km range, all passed up for a higher point at km 103. This clearly isn't calculating total climbing properly.

Since the algorithm calculates climbing and descending, it had better be the case that climbing calculated in one direction equals descending calculated in the opposite direction, and for that to be the case the points chosen for the tops and bottoms of each sub-climb (or descent) should have the same altitudes. This is clearly not the case here:

What a total mess: the algorithm is choosing different peak and valley points if it starts from the right than if it starts from the left. Neither of these directions yields the optimal choice. For example there's a big juicy peak at km 9 which neither direction chooses.

The problem came when I had 4 consecutive points and eliminated the middle two (the second of three segments) if that segment was less than the climbing threshold. I recklessly pruned sub-standard segments without due attention to maximizing the altitude gain of the surviving segment. Consider the following bad example, which, if the points are processed left-to-right, the algorithm I described would yield:

Bad, bad, bad. The top of that hill has been snipped off. The following is the correct choice:

Once you've got a good picture of what you want, the algorithm almost produces itself. And in this case I got a big deja-vu from having worked on this 10 years ago.

Anyway, the "correct" approach next time...


## Tuesday, November 16, 2010

### testing the total climbing algorithm

On Sunday I did the following ride. I'd like to say this sort of thing is typical, but sadly it was the longest ride I'd done in at least 5 weeks. A new job and a tendency toward Sunday rain around here have kept me from getting in any long ones lately:

I imported my Edge 500 data into GoldenCheetah, exported CSV, and ran it through my algorithm. As a test, I varied the climbing threshold from 1 meter to 100 meters. Here's the result:

Clearly, there's no "correct" answer, no unique value of total climbing. We generally agree that if I make 100 different cycling computers and each measures distance, they should agree within, for example, 0.1% if perfectly calibrated. Sure, there are issues of whether they measure the distance traversed by the handlebars or by the tire-road contact patch when the bike banks into corners. But these are minor differences. With "total climbing", however, assuming you don't want to measure every vibration as a "climb", the result is somewhat arbitrary. In the end, the "feel" test applies. If something feels like a climb, it should probably be counted as a climb. It should be memorable.

So I pulled out selected climbing thresholds for the "feel" test. I already noted that if momentum alone can deliver the potential energy to get over an altitude difference, it shouldn't count as a full climb. So 5 meters is the lower limit. Here's the result for 5 meters, where for clarity I've restricted the plot to the portion of the ride from the Mt Tamalpais parking area back home:

Already I can tell you I remember every one of those "climbs", and they are real. So I'm fairly sure I have my answer already. From a noise standpoint, I think we can agree the Edge 500 does rather well. I'd hope there have been improvements since the Avocet 50, close to twenty years older.

Here's the result for 15 meters:

That's not too bad, really. But starting from the left is the short climb to the Mt Tam parking lot near the East Peak. There I stopped to check on updates on Miles, who works at the snack bar but is on leave for cancer treatment, and of course to admire the view. From there the road descends, climbs, descends again, then climbs to the West Peak, the so-called "Golf Ball" due to the old military spherical antenna mounted there. Unfortunately these little climb-descend sequences get merged with a 15 meter threshold. So 15 meters is too high. 5 meters did better by the "feel test".

Now 50 meters: this is what that yields:

That's just plain silly. Okay, so it got Pacific Ave climbing out of the Presidio up to Divisadero. But it discarded Fillmore Street, the famed climb from the old San Francisco Grand Prix. And I assure you that Fillmore Street very much qualified for the "feel test". You'd have to be a real hard-case to claim the threshold should be 50 meters.

So there it is: 5 meters wins. 1 meter and you're counting stuff which can be coasted, and are subject to the whim of minor barometric variations (assuming pressure is used to help with altimetry). Much more than 5 meters, though, and you're losing stuff which the legs will tell you is a real and worthy climbing segment.


## Sunday, November 14, 2010

### total climbing and descending algorithm

Last time I described how the goal was to eliminate climbs less than some threshold (which I'll call *h*<sub>min</sub>) while preserving the starting and ending altitude for a route. Here's the algorithm I implemented.

A climb is represented as a series of points, each with a distance and altitude. I'll ignore distance. Distance might be used to apply some numerical smoothing to the data before implementing this algorithm, but that's optional. Here I care only about altitude.

One approach might be to consider consecutive points and count only climbing or descending which exceeds *h*<sub>min</sub>. This obviously fails because a climb might consist of 1000 consecutive one-meter altitude gains. So then I might combine all adjacent non-descending segments together, then all adjacent non-climbing segments, etc., to yield alternating climb-descend-climb-descend. I could then eliminate the segments which gain or lose less than *h*<sub>min</sub>. But this isn't so easy either. Consider a climb which gains 4 meters, descends 1, then gains 4, descends 1, one hundred times total. That's a 300 meter climb, and certainly should be counted. So I've got to be careful in how I do this.

I'll look at four consecutive points at a time. I'll call these points *a*, *b*, *c*, and *d*. If I move on to the next point, the old *b* becomes the new *a*, the old *c* becomes the new *b*, the old *d* becomes the new *c*, and the new *d* is the point which follows the old *d*. Additionally, if I delete a point, the following points shift to fill the missing position. For example, if I delete *c*, the old *d* becomes the new *c*, and the point following the old *d* becomes the new *d*.

1. First, if I'm at the first position in the data (otherwise this isn't needed): until *b* is either higher than both *a* and *c*, or lower than both *a* and *c*, or until there is no remaining point *c*, delete *b*.
2. Next, until *c* is either higher than both *b* and *d*, or lower than both *b* and *d*, or until there is no remaining point *d*, delete *c*.
3. If there's still a point *d*, then if the difference between the altitudes of *b* and *c* is less than *h*<sub>min</sub>, delete both *b* and *c*.
4. If *b* and *c* were deleted, move back a point. Otherwise, move ahead a point.
5. If there's still a point *c*, repeat from step 1; otherwise we're done.

Here's an example derived from randomly-generated numbers. I'll set *h*<sub>min</sub> = 5. I list altitudes, which could be meters, but that's irrelevant:

38, 26, 23, 20, 46, 42, 27, 29, 8, 41

In each case the point I've been labeling "

*a*" will be**marked**:**38**, 26, 23, 20, 46, 31, 44, 29, 8, 41First, step 1:

**38**, 23, 20, 46, 42, 27, 29, 8, 41**38**, 20, 46, 42, 27, 29, 8, 41Step 1 now is done, because 20 < 38, and 20 < 46. Then step 2 does nothing (since 46 > 20, and 46 > 42). So step 3: 46 ‒ 20 > 5, so I don't delete any points, and move on to the next point: 38,

**20**, 46, 42, 27, 29, 8, 41Step 1: I'm not at the first position, so I skip this.

Step 2: 46 > 42 > 27, so 42 goes

38,

**20**, 46, 27, 29, 8, 41Step 3: 46 ‒ 20 > 5, so I don't delete anything and move ahead (step 4):

38, 20,

**46**, 27, 29, 8, 41Step 1: Not at the first position, so not needed.

Step 2: 29 is the largest of 27, 29, and 8, so nothing to delete here.

Step 3: 29 ‒ 27 < 2, so I need to delete the pair

**27, 29**:38, 20,

**46**, 8, 41Step 4: we move back a point:

38,

**20**, 46, 8, 41Steps 1, 2, and 3 do nothing, so we move the point back up:

38, 20,

**46**, 8, 41Step 1: Not needed, since I'm not at the first position.

Step 2: there's no point

*d*, so nothing to do.Step 3: there's no point

*d*, so nothing to do, and I step forward:38, 20, 46,

**8**, 41I've got only two points left, so I'm done:

38, 20, 46, 8, 41

So I see I have a descent of 18, followed by a climb of 26, then a descent of 38, then finally a climb of 33. Total climbing = 26 + 33 = 59, while total descending = 18 + 38 = 56.
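With the profile reduced to alternating extrema, those totals are just sums of adjacent differences. A minimal helper sketch (my own, for illustration):

```python
def climb_descend(points):
    """Sum the positive (climbing) and negative (descending)
    altitude differences over consecutive points."""
    climb = sum(max(b - a, 0) for a, b in zip(points, points[1:]))
    descend = sum(max(a - b, 0) for a, b in zip(points, points[1:]))
    return climb, descend
```

For the reduced profile above, `climb_descend([38, 20, 46, 8, 41])` returns `(59, 56)`, matching the totals worked out by hand.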

So that's it: really simple.

This example wasn't the best, because it failed to demonstrate why you need to step back after deleting point pairs. The reason is that if you delete *b* and *c*, the old *d* becomes the new *b*, and it's possible the difference between *a* and the new *b* is now less than *h_min*. If I step backward, I'll catch that in step 3.

Here's the result for Mt Hamilton Road, using data I got from Garmin Connect:

| km | altitude (m) | climbing (m) | descending (m) |
|--------:|--------:|--------:|--------:|
| 0 | 0 | 0 | 0 |
| 9.56243 | 461.522 | 461.522 | 0 |
| 10.8524 | 407.685 | 461.522 | 53.837 |
| 10.9968 | 413.452 | 467.289 | 53.837 |
| 12.418 | 362.502 | 467.289 | 104.787 |
| 17.5911 | 597.073 | 701.86 | 104.787 |
| 18.6179 | 541.794 | 701.86 | 160.066 |
| 18.7925 | 552.852 | 712.918 | 160.066 |
| 19.0447 | 533.623 | 712.918 | 179.295 |
| 29.6209 | 1161.39 | 1340.68 | 179.295 |

Here are these key points plotted on top of the original profile:

One comment on this algorithm: it's not totally rational. Consider the following points:

0, 1.

Total climbing = 1, since I don't touch either the starting or the finishing point. But now I add an additional point:

0, 1, 0,

and now total climbing is 0. So adding a point reduced the total climbing. But what can you do? The alternative is that total climbing minus total descending might not equal the net altitude change, so I'm willing to live with the anomaly.

## Saturday, November 13, 2010

### Calculating Net Climbing and Descending

*The Alta Alpine 8-pass challenge is listed with 20300 feet of climbing in 198 miles*

You'd think it would be simple: for each adjacent pair of points *a* and *b*, if the route travels from *a* to *b* and *b* is higher, the difference in altitude is added to total climbing. If *a* is higher, the difference is added to total descending. Now *b* becomes the new *a*, the next point after the old *b* becomes the new *b*, and repeat.

Except this doesn't work. The reason is measuring altitude typically involves errors which differ from one point to the next. Suppose I have points every 10 meters on a perfectly flat 100 km route, and suppose 25% of these points tend to be measured 1 meter too high, 25% 1 meter too low, and 50% at the correct altitude rounded to the nearest meter. Then I'll accumulate on average 37.5 cm of spurious climbing and 37.5 cm of spurious descending per point. By the end of 100 km, the accumulated climbing from these small ±1 meter errors will be 3750 meters. That's a huge total for 100 km.
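That 37.5 cm per point can be checked by enumerating the error distribution directly (this assumes errors at adjacent points are independent, which is implicit above):

```python
# Per-point altitude error: +1 m (25%), -1 m (25%), 0 m (50%).
errors = {1: 0.25, -1: 0.25, 0: 0.50}

# Expected spurious climbing per segment between two
# independently-erring points: E[max(e2 - e1, 0)].
expected_climb = sum(p1 * p2 * max(e2 - e1, 0)
                     for e1, p1 in errors.items()
                     for e2, p2 in errors.items())

# 100 km at one point per 10 meters -> 10,000 segments.
total_spurious = expected_climb * 10_000
print(expected_climb, total_spurious)  # 0.375 m per segment, 3750 m total
```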

*Avocet 50: ahead of its time*

First, even in the absence of measurement error, there's a rational basis for such an algorithm. After all, each bump in the asphalt is "climbing" and "descending". Yet the bike coasts over these small bumps; they aren't "climbed" in the traditional sense. So a decent criterion for what constitutes a significant change in altitude is how large an altitude difference can be coasted.

The solution is simple: the ratio of kinetic energy to total mass is ½*v*², where *v* is the initial bike speed. The ratio of potential energy to total mass needed to change altitude is *g* *h*, where *h* is the height and *g* is the acceleration of gravity. Coasting up a hill converts kinetic into potential energy: when all the kinetic energy is gone, the rider needs to pedal. So the height *h* which can be climbed from coasting, neglecting rolling resistance, wind resistance, and mechanical losses from the bike, is:

*h* = ½ *v*² / *g*

So consider an initial *v* = 10 meters/second. *g* is about 10 meters/second². So this gives:

*h* = 5 meters

5 meters is less than the 50 feet the Avocet might have used, but it's pretty much the lowest value I'd personally use: anything less than 5 meters of elevation gain is really too small to contribute to a route feeling hilly.
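As a sanity check on the back-of-envelope arithmetic, here it is with *g* = 9.81 m/s² rather than the rounded 10 (the function name is mine):

```python
def coast_height(v, g=9.81):
    """Altitude gain attainable by coasting from initial speed v (m/s),
    neglecting rolling and wind resistance: h = v^2 / (2 g)."""
    return 0.5 * v * v / g

print(coast_height(10.0))  # about 5.1 m, consistent with the ~5 m threshold
```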

So that's the goal: eliminate climbs and descents which are less than some threshold. But I'll apply another constraint: the starting and ending elevations of a "ride" are unchanged by the algorithm. In other words, while "total" or "gross" climbing may be subject to debate, "net" climbing should not. In particular it would be embarrassing if the algorithm decided climbing did not equal descending on a route which finished where it began.

So I'll describe my algorithm next time.

## Monday, November 8, 2010

### Mix Canyon Road

*Approaching Mt Vaca*

We're getting close to the end game in the 2010 Low-Key Hillclimbs and the mind naturally turns to the 2011 schedule. And there's one climb which I've wanted to visit ever since it was first suggested to me three years ago.

That's Mix Canyon Road out of Vacaville. From Vacaville, you follow Pleasants Valley Road to the north until Mix goes off to the left.


The usual response is it's sort of far afield. Sure, from San Jose it's 91 miles, and it's 86 from Palo Alto, but from San Francisco, where I live, it's closer than three of the climbs on the 2010 schedule. I'm not too worried about the distance. It's worth a bit of travel for a very special climb.

After the turn the road climbs gradually, but it gets steeper the further you go. Approaching the summit is a will-cracking 15% sustained, which means sections at 20% with "recovery" at around 10%. The combination of altitude gained, the duration of the steep portion, and the location of the steep segment within the overall climb is really hard to resist.

The profile tells a big story, but I also like grade histograms. My favorite representation is in total climb at or above each particular grade after applying a smoothing function representing the general ability of a rider to "power up" short steep sections. Here's how Mix Canyon fares in this analysis, compared to Old La Honda road, the unit reference for cycling climbs:

There's some serious action there in the ≥ 15% range. Mix gives up nothing to Welch Creek or Bohlman-Norton up here. And Mix delivers its maximum grades in a bigger chunk, with more climbing already in the legs.

And the numbers back this up. The climb's rating is 245. That compares to 235 for Alba and 231 for Bohlman-Norton. Mix is rated higher than any climb Low-Key has done.

Anyway, it's definitely on my to-do list, one way or another.

## Sunday, November 7, 2010

### 2011 Cervelo geometry

The 2011 Cervelo bike pages are finally up. I find Cervelo particularly interesting because they are deliberate in their geometry decisions, designing their bikes with a coherent philosophy across the entire size range rather than the more ad hoc approach which generally preceded them.

Initially Cervelo came out with the R and S series which were designed to be aggressive. The idea was if you needed the handlebars higher, you could always simply add spacers. But riders don't like spacers: they don't "look pro". Of course if the rider in question isn't pro, "looking pro" shouldn't be a concern. But for some reason people want to "look pro" for their club rides. And indeed there's a limit to how many spacers you probably want. Better to have the frame design utilize available space a bit better.

So they came out with the RS, a more relaxed geometry. No problem: now they spanned the geometry range. Now if you wanted the bars sufficiently higher than the R-series, you could get the RS. The RS also had longer chainstays for a bit more comfort over rough roads.

Yet some people didn't want to be seen riding the "RS", obviously a "fatty master's bike". It didn't matter how few spacers you had: just the letters "RS" looked un-pro.

So Cervelo for 2011 simplified: increase the chainstays up to the RS-standard (and the standard of the rest of the industry: the 2010 R and S series chainstays were exceptionally short), and more assertively increase the head tube length with the frame size. As a result, the large and extra-large R-series frame this year is basically the 2010 RS.

But the key difference is the frame says "R", not "RS". So riders need not hang their heads in shame. They're on the "pro" bike.

They also reduced the pedal-to-front-wheel overlap on the smaller frames. Some riders didn't like it that if you turned your wheel toward your forward foot, the tire could strike your shoe. The obvious solution is to not turn your wheel into your forward foot: turning the wheel that much requires walking-pace speed anyway. It's not hard to avoid. But people don't like it anyway. Cervelo's view was that good handling at high speed (which comes with the shorter front end) is more important than handling at walking pace (which is compromised by toe overlap), but they've finally backed down from that position.

Here's a plot of head tube length as a function of reach, reach being the horizontal distance between the bottom bracket and where the fork steerer tube exits the head tube. You can see the 2011's are taller in the head tube across the board: fewer unsightly spacers! The difference goes from 8 mm in the smaller frames up to the full gap between the 2010 R and RS in the larger frames.


## Sunday, October 31, 2010

### San Francisco D10 Board of Supervisor candidates

Here's how the D10 candidates responded. The first column shows the candidate name. The next five show whether the candidate supports each of the listed propositions, all considered key propositions in the election. Then I show the number of positions which match those I listed in this blog. Finally, I show the percent in agreement.

A lot goes into making a good candidate. For example, the San Francisco Bike Coalition supports Eric Smith for his strong support of the City bike plan. However, I think it's pretty clear from this that Kristine Enea is going to appear on my ballot on Tuesday.

## Saturday, October 30, 2010

### Caltrain Eight-ride Addiction

I started a new job and any semblance of "training" has gone by the wayside, as I was first focused on learning my way around, and now on solving some of the problems I'm hired to deal with. I'll need to find more balance in coming weeks. But I digress.

A consequence of the new job is I can finally get Commuter Checks, which allow me to buy transit tickets with "pre-tax" income. Considering as well I'll be working from home a lot less (I was at around 1-2 times per week) and at least initially riding SF2G in less often, surely it's time for a Caltrain monthly pass rather than my usual practice of getting 8-ride tickets. It's time to become one of the big-boy train commuters, right?

The fare chart shows that a monthly pass for 3 zones is $159, while an 8-ride is $40.75. So the monthly costs as much as about 31.2 rides. In other words, if you ride 16 round trips or more, the monthly is the better deal.
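The break-even arithmetic, spelled out (fares as quoted above; the variable names are mine):

```python
monthly = 159.00    # 3-zone monthly pass
eight_ride = 40.75  # 8-ride ticket, 3 zones

per_ride = eight_ride / 8              # ~ $5.09 per ride
rides_equivalent = monthly / per_ride  # ~ 31.2 rides
round_trips = rides_equivalent / 2     # ~ 15.6: 16+ round trips favor the monthly

print(round(rides_equivalent, 1), round(round_trips, 1))
```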

Sure, there are other factors. For example the monthly ticket is more convenient: just carry it with you and forget about it until asked to show it to a conductor. But there's a downside as well. The 8-ride needs to be validated every day, so it's part of my routine. The monthly, on the other hand, needs to be renewed on the first of the month. Since that comes infrequently, it's far more likely I'll forget to renew the monthly pass than forget to punch an 8-ride. There's a grace period through noon of the first workday of the month, but still, the cost of error is potentially high: a citation in excess of $200. I've never forgotten to punch an 8-ride ticket.

Then there's loss. If I lose my ticket permanently, assuming loss is rare, it will on average tend to have around half its value at the time of loss. Half of a monthly is four times half of an 8-ride. The new Clipper cards change things here as well, however: Clipper cards may be replaced for a fee.

For temporary loss: if I arrive at a station without my 8-ride ticket or Clipper card, it costs me essentially nothing. With a monthly, I've got to buy one-ways for the day, and that's a total loss. I don't lose tickets much, but I have forgotten to take them with me to the station. Again this calculus may change with Clipper cards: I don't know how one-way and day passes for the single-time train rider (like a tourist, for example) will be handled. But even if I can't buy an 8-ride without my card and need to get one-ways instead, the cost is only the difference between the 8-ride and day-pass fare (two one-ways), while with a monthly, I eat the full day-pass fare.

Okay, so I can deal with these. Maybe I expect to misplace my ticket one day per month. Then the break-even point becomes more like 17 days instead of 16. I'll not worry about the ticket loss factor, since I assume that becomes obsolete with Clipper.

Then there's sick days. Maybe I average one sick day every other month. So that's an extra half-day on the threshold: 17.5.

Then there's bike commuting. If I bike in one day per week, which I hope to do, that takes the threshold up to 18 rides.

November has 22 weekdays. Two, Thanksgiving and the day after, are company holidays. That leaves 20 weekdays. Twenty is more than 18, so I should get that monthly (barely).

But I haven't gotten any commuter checks yet. Unless I get one before the first of the month, I'll need to buy that pass with post-tax income. If I then get it during the month, I'll have lost the opportunity to buy tickets during the month pre-tax. On the other hand, if I get 8-rides, I'll be able to take advantage of the pre-tax fare within a week of receiving my commuter check.

So for November, unless I get that first commuter check within a few days (and I think I need to wait an extra biweekly pay period), I'm better off with 8-rides.

Then December. The company shuts down for two weeks in December (forcing employees to take vacation time if they have it, so I'll take time-without-pay, since I won't). So December is obviously a monthly pass loser.

January: January has a holiday, leaving 20 work days. So this is more than my 18 day threshold. Then February has 20 work days as well. So maybe I want monthlies for each of these months. March has 23 work days so that's really the first month which seems like a clear-cut win.

But suppose I'm on an 8-ride ticket for December, then January rolls along. I want to stop using 8-rides and start using monthlies. But it's unlikely that my last 8-ride will run empty just at the last commute of December. I'll likely have some rides left, and with Clipper, these rides aren't easily transferred (paper tickets with residual value I could perhaps sell, at least in theory). 8-rides expire after six weeks, so I can't save the unused rides for future use if I'm doing at least two consecutive monthlies. So at the end of December I need to decide that I'll go to monthlies in January and go to day passes or one-ways instead after my last 8-ride expires. This is additional cost.

It's all a complicated game. And the game becomes more complicated if you consider I'll need to take some business trips on this job. The monthly pass just doesn't seem like a financial win unless weekend service becomes useful enough (I've ridden the weekend train only a few times since they last reduced weekend service) that I can offset some of my monthly cost with weekend trips.

Really, with Clipper this could all be solved with a simplified fare schedule: your first rides of the month are at full fare, then after you've done a certain number (for example four) you go to a discounted fare, then after you've completed a second threshold (for example 32) the rest of the rides in that month are free. Then there'd be no need to play these probability games.

So how to set these fares? It would be fun to suggest some sort of exponential decay function for fares, but I defer to an attraction to mathematical simplicity and propose a 2-tier price system. First, riders pay a higher rate for tickets until they reach the level of a typical day-pass rider. Then they get a discounted fare, set so that a typical 8-ride passenger pays roughly the same as they do now. Nobody commuting 20 days per month should pay more than the present monthly.

So let's say the average day-passer rides 4 round trips per month. Then 3-zone tickets should be $6 each for the first 8 tickets you buy: $48 total (two tickets per round trip). Then let's say a typical 8-ride passenger rides 12 round trips per month, currently paying close to $120; allocating $48 of that to the first four round trips leaves about $72 for round trips 5-12 (around $4.50/ticket), which I'll round down to a clean $3/ticket. Then if you ride 20 times in a month, that's 40 tickets, eight at $6 each, and 32 at $3 each, totaling $144. If you ride 22 times per month that's $156. A monthly pass (3-zone) is presently $159. So it works out fairly closely.

So there it is: 3-zone tickets @ $6 each ($12/round trip) for the first eight tickets (four round-trips), then $3 each ($6 per round trip) after that. Much simpler. For other zones, adjust accordingly ($1.75/zone for first eight tickets, $0.875/zone, rounded, thereafter).
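The proposed two-tier schedule is easy to write down (a sketch of my own proposal above; `tiered_fare` is a hypothetical name, not an existing system):

```python
def tiered_fare(tickets, first_n=8, high=6.00, low=3.00):
    """Proposed 3-zone pricing: the first first_n tickets in a month
    at the high rate, every ticket after that at the discounted rate."""
    return min(tickets, first_n) * high + max(tickets - first_n, 0) * low

# 20 round trips = 40 tickets; 22 round trips = 44 tickets.
print(tiered_fare(40), tiered_fare(44))  # 144.0 156.0, vs. the $159 monthly
```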

Surely they'll do something like this once liberated from the limitations of paper tickets.


## Thursday, October 28, 2010

### November election: San Francisco ballot propositions

Okay, last time I dispatched the state propositions. San Francisco loves propositions, as the voters have an alarming habit of passing things they don't understand, typically saddling the city with yet more debt. Here's my take on the latest bundle, on which I'll vote when I ride my bike to my local polling station (note how cleverly I slipped in the mandatory cycling content):

So there you have it. In summary on the city issues:

**Proposition AA (vehicle registration fee increase)**: The money could be for the Floyd Fairness Fund, for all I care. I support all vehicle registration fees. Yes on AA.**Proposition A (Earthquake safety retrofit loan bond)**: Every single election there's another bond with either "schools", "water quality", "fire department", or "earthquake" in the title. These almost always pass. Who can be against Earthquake safety, after all? But we are sufficating under our debt, and bonds are no small part of that. I absolutely refuse to rubber-stamp arbitrary dollar amounts because earthquake is in the title. The way to address earthquake safety is to do what Japan does: have strict safety code. Then you let property owners comply or sell to someone who will. It's simply too easy to claim improvements are "earthquake related". No on A.**Proposition B (increase employee contributions to the pension system)**: the pension system for state and local employees is a massive boondoggle which is contributing in substantial part to the bankruptcy of our fine government. The reality is public employees have better benefits and*much*more favorable pensions than most of us in the private sector. This helps close the gap, just a little, for city employees. Hardship? Sure. I hate to be rude, but welcome to reality.**Proposition C (require mayor to appear at meetings)**: This is silly. Sometimes, maybe rarely, maybe only once a term, there are other priorities than a BOS meeting. No on C.**Proposition D (allow non-citizens to vote for BOE if they have children in the schools)**: Maybe I'm violating my social liberalism here, but no. Do you think you can vote for BOE members in Mexico, whether you have children there or not? If you tried to vote there, they'd toss you out of the country as an illegal resident. Voting is a profound responsibility and it stands to reason we place at least minimal standards on those doing so. The standards of citizenship are a good start. 
No on D.**Proposition E (Election Day voter registration)**: if you haven't decided to vote by the registration deadline, you don't have enough time to become educated. Just look at these list of city and state propositions. And if you haven't educated yourself on the issues, I don't want you deciding them. No on E.**Proposition F (reduce frequency of Health Services Board elections)**; Seems suspicious. I vote no.**Proposition G (eliminate MUNI guaranteed highest average salary)**: The repeals Proposition A from a few year back whose proponants argued that by guaranteeing that MUNI employees were paid at least as much as the average of the two-highest comparable transit agencies in the country, this would provide more barganaining power to the city. Huh? I was against it then and I've seen nothing which convinces me that position was wrong. I'm for G.**Proposition H (ban city elected officials from serving on party central committees)**: The point of this measure is to prevent office-holders subject to campaign contribution limits from running for party central committee positions without them. The idea is that candidates can raise as much as they want from the PCC race, using that campaign to raise awareness for an upcoming BOS election. Donations to BOS members who are running for their PCC may come with the expectation of pay-back as part of the candidate's power within the BOS. the downside of this proposition is I may well want my friendly supervisor on the PCC to help drive the priorities of the party. I maintain a bias against propositions which fail to demonstrate a compelling need, but I think I'll vote for this one: the "follow the money" principle is too strong.**Proposition I (polling places open on Saturday for November election)**: I'm for Saturday voting. Some people, especially those with substantial commutes such as yours truly, have difficulty voting on Tuesday. 
But San Francisco has quite liberal absentee voting, for example allowing early balloting at city hall. My concern about this measure is the Saturday voting will be funded by private donations. Who's going to donate money to Saturday elections? Let's see, do Low-Key Hillclimb funds go this year to the Lance Armstrong Foundation, the Open Space Trust, World Wildlife, or Saturday Elections? I don't like it. Elections should be publicly funded. When government is stripped down to the last thread, it could be argued that last thread should fund elections, as elections are the foundation of a well-functioning republic. So I vote no on this one.**Proposition J (hotel tax increase)**: Increasing the hotel tax from 14% to 16% is incredibly stupid. Voters like hotel taxes because the voters get the services provided by the funds while somone else pays the bill. But what you learn in high school economics is that when you raise the price, demand drops. We*want*people to come into the city, for a variety of reasons, and unless they're pitching tent in the Golden Gate Park campgrounds, they're probably staying in a hotel. If I'm deciding where to put a convention (say, the International Electron Device Meeting, or the San Francisco Bike Expo), 2% extra (okay, 1.78% extra) tacked on to everyone's bill may well be the straw that broke Jan Heine's porteur.**Proposition K (hotel tax "clarification")**: This is a competing measure to Prop. J, to dilute the vote. I'm voting against this one as well. The idea is it would require brokers like Hotels.com to collect the tax. Let the BOS deal with this; it doesn't warrant a proposition.**Proposition L (no sitting or lying on sidewalks)**: If I'm out running and strain my hamstring and need to sit down to massage my leg, I don't want a ticket. If there's a parade and I stand but the 70-year-old next to me wants to sit on a chair, she shouldn't get a ticket. 
I vote no, and refuse to support any candidate for office who publicly supports this. It's a brazen attempt to target a specific population, and there are already laws on the books to deal with the other issues. Some argue police discretion will prevent abuse. I refuse to throw our "free" society at the mercy of police discretion where it can be avoided. The police, in my experience, demonstrate little tendency to select discretion over expediency.

**Proposition M (mandatory police foot patrol plans)**: I'm against this one simply because M is too high a letter for a proposition, especially considering AA. Okay, so while proposition fatigue has started to set in, I did in fact read this one. An interesting little nugget at the end of the fine print says this measure would invalidate L. It's tucked away in Section 2a.89.6.2. Cute. I'm tempted to support it for that alone, but I will oppose it anyway: too many details, too many mandatory reports. It's hard to see how this sort of police micromanagement is productive. The way to make this sort of thing happen is to hire a police chief who supports foot patrols, then let him do his job.

**Proposition N (increased transfer tax on high-valued properties)**: Arguably the most famous, or perhaps infamous, voter proposition in California history is Proposition 13, which passed in 1978. It was one of those "it seemed a good idea at the time" votes which are all too common, and an example of why throwing detailed legislation at voters who are bluntly unqualified and unwilling to carefully consider what's at stake is a bad idea. With so many properties taxed on assessments allowed to grow only 2% per year from their 1975 values, cities scramble to find new ways to extract revenue.
A key result of Prop 13 is that it discourages the transfer of property: transferred property can be reassessed, and since most property has appreciated far in excess of the 2%/year schedule allowed by Prop 13, a transfer can result in a huge increase in property tax, while the carrying cost of land remains relatively low. Property therefore tends to be underutilized. Property tax is like a "membership fee" in a city, a "rental" for the position taken within the common society. None of us lives on an island; we live in a society where we depend on each other for survival. Property tax makes sense: it keeps properties active and provides an incentive for them to be used efficiently. Taxing transfers, on the other hand, has the opposite effect: it discourages someone who has less use for the land from selling it to someone who has a better use for it. So I oppose this one.

How would Jan vote?

So there you have it. In summary on the city propositions:

**YES**: AA, B, G, H

**NO**: A, C, D, E, F, I, J, K, L, M, N
