The Rental Home Efficiency Problem

There is a fundamental problem with energy conservation as it relates to homes that are rented from a central organization, such as a realtor. It’s a very simple problem: the realtor pays for infrastructure, but the renter pays for the utility bills. This setup makes sense at first glance. Renters shouldn’t have to pay to renovate a home that they may only be living in for three months, and a realtor shouldn’t be held hostage to the utility demands of its renters. Hence the current arrangement seems reasonable.

But a fundamental problem here is that there is no incentive for a realtor to go to any great lengths to make a home more heating- or energy-efficient, since they don’t have to pay any part of the utility bill. What does it matter to them if the home needs twice as much heat because the doors don’t have weatherstrips and the windows are single-paned? And even the environmentally conscious renter will be loath to lay down a couple of thousand dollars on a home that they may soon be moving out of: even if they would like as many homes as possible to be efficient, only the extremely rich can afford to pay for homes not their own to be retrofitted. So currently, most rental homes that I’m aware of are horribly energy inefficient, without an obvious remedy.

I therefore have a proposal that might be workable for all parties involved. Realtors should pay 10% of their renters’ heating and electricity bills. This is a small enough percentage that renters won’t feel compelled to go on an energy binge (10% off doesn’t exactly feel “subsidized”), but it is enough that the rental agency would feel compelled to assist the renters in making their homes more energy-efficient.

Most realtors in the San Francisco Bay Area are making absurd profit margins anyhow, so 10% of a (max) $200 energy bill is $20/month of cost for them against an average intake of $3,000/month in rent. This is hardly going to put them out of business, considering that it amounts to less than 1% of the rent income. A few hundred dollars thrown in here and there on, say, weatherproofing could pay off quickly; spending a few dollars more per bulb on fluorescents instead of incandescents could quickly return the investment.

Without a co-pay structure like this that ties the utility costs in part to the entity responsible for structural investments, rental homes cannot help but remain horribly inefficient.

Rethinking Computer Interfaces

The first dot-com boom has come and gone and we now find ourselves
in the gritty interim period, where only innovation, cunning, and
realized profits will save us and bring us back to prosperity. Gone,
thankfully, are the lucrative days when anyone with a webpage could
call themselves an “ebusiness” and achieve an obscene level of
rapport with investors. A lot of new business models have been tried and
a handful succeeded, but most failed.

What I thought particularly sad about the situation is that there
have only been a handful of attempts to really think outside the
box in terms of the computing experience and a user interface.
Some of this is understandable, since so much of what was new was
being thought up on the web and it was natural to think of the user
experience in “HTML mode.” But since HTML was designed just for
document formatting (and not for a user experience akin to a
program’s), the set of operations we can perform using web
applications has actually decreased. There’s only so much you can do
in the way of novel interfaces if they have to be written in HTML
and rendered in a web browser.
Some creative folks have figured out how to give us a bit more power
in these experiences, such as having popup calendars to fill in a
date, but support for even such basic interface interactions as “drag
and drop” or uploading a folder of information at a time is
impossible in HTML. HTML-centrism has narrowed the possibilities for
user interfaces; the push to shift all local applications to web
applications has reduced the potential richness of user experiences.

To its credit, Macromedia has done a lot to attempt to fight this
battle and enable genuinely new and interactive experiences on the web.
However, such experiences have been primarily focused on entertainment
and artistic applications. There aren’t very many practical
Shockwave or Flash applications out there. Part of this has to do
with the fact that although both enable rich interaction experiences,
neither is designed as a full API for programming fully functional
applications. They make it easy to display data, but not to obtain it
or to interact with resources present on the local computer.

Thankfully, there is at least one trend that may bring hope to new user
experiences: peer-to-peer (P2P) computing. With most P2P computing
models, a program has to sit on the desktop that gives you access
to information through a network of peers, as opposed to a web server.
Since each P2P network uses a different client access program (and
indeed some of these networks have multiple clients!), there are a
rich array of opportunities to create novel end-user experiences.
We can only hope that some companies will take advantage of this
opportunity to experiment with making novel interfaces.

Now what is it that I mean by “novel interfaces?” I don’t mean
novel in the sense of entertaining, but rather novel in the sense
of an interface that is unique, innovative, and takes advantage of
the client platform.

Most user interface designs are static. They are taken from the
standpoint of the text publishing world – to have an interface that is nicely
laid out is a high compliment. This aesthetic, as mentioned above,
works well for web design, which is largely a collection of pages with a
certain static visual arrangement. (This site is no exception.) It is
what I will call a “paper aesthetic.”

But paper is not by any means a natural aesthetic. Whether you’re a
creationist or a scientist, humans weren’t built to stare at flat
sheets of paper (or, indeed, flat monitors) for days and days on end.
It just isn’t how we work. We were built to focus in on things that
need our attention, keeping aware of our peripheral surroundings for
changes, quietly noting subtle changes and becoming alerted
to objects moving quickly and visibly.

This concept of motion drawing attention has been used primarily in one place on the web: banner ads. Since in most cases they are the only thing moving on the page, our attention often drifts to them. Taking a look at one site’s new user interface as applied to
particular pages, we see that they have restructured their site such that the only motion on the page is dead center and occupies a good percentage of screen real estate. It sits in the middle of the text, forcing your eyes to come to it as the text wraps around it. You can’t help but look at it: it’s their new series of Flash ads. Naturally, these ads pay much better, since the user is forced to view them.

But what if we took some of these base concepts, these notions of
what attracts our attention, what distracts us, and what informs us,
and created a new style of user interface more closely adapted to an
ideal aesthetic – an interface that makes more sense for humans?
What would such an interface look like?

First off, in the real world, when we are working on something, we
bring it up close to us: it occupies the majority of our field of
vision. We get up and close to the screw needing tightening; we
push up close to the blueprint to consider each line. The watchmaker
does not engage his task from arm’s length. Consequently, a
good user interface will cause the primary activity to occupy the
majority of a user’s field of view and will clear away irrelevant
data to let us focus on the task at hand. While some applications
support a “full screen” mode that allows for focus on only the
task and nothing else (IE, MS Word, Excel, Adobe Acrobat, etc.),
the user experience should allow for every application to be a
central focus. Apple made a good stab at this by allowing a user
to hide all interface elements not pertaining to the currently
running application, including the Finder. Most Windows users I
know (myself included) have a very crowded desktop – while in one
sense our own fault, it leaves a permanent and distracting layer of
cruft on top of which all work must be performed. In addition, most
menubars and icon trays are always visible and can’t be set to
auto-hide. This makes it considerably more difficult for a user to
focus on what they’ve got to do. The user, in short, must be able
to concentrate on a single task without being disturbed.

But what does it mean to be disturbed? One example of bad
behavior in this regard is Eudora’s default action when fetching
mail – if you’ve set it to go grab your email every five minutes
and put it in the background, when you do have mail, it whips you
out of whatever you’re in the middle of, takes you to Eudora, and
proudly displays a dialog box proclaiming that you have new mail.
This is obviously non-ideal behavior, but it’s also not a very
dramatic example simply because so many applications exhibit similar
characteristics. The classic thing that frustrates me is that if I’m
in the middle of wading through programs on the Start bar and some
program calls itself to attention, the computer drops the submenu
navigation (i.e., the Start bar menu goes away). The computer’s
designers are clearly making the statement “that which my program
has to say is more important than your navigation or input.” In
this modality, the computer can be considered an Ouija-based
entertainment center. You merely consent to the computer’s
understanding of how you should spend your time – it’s not designed
to take your interaction seriously.

A computer’s crisp imagery and abrupt transitions are poorly suited to
human interactions – we’re used to a world of “soft changes” and
things that move slowly, with a more or less smooth derivative,
instead of dialog boxes that jump out of nowhere and exclaim sounds
at us. We should come up with better interaction mechanisms for
keeping a user up to date with the status of a system without
disturbing them from a task. That is to say, an interface should
be designed such that with a cursory look, the status of the
system can be determined and changes actively reported, but not
in such a way as to distract. One way to do this would be through
color fades. For example, you could imagine a mail interface where
mail was sorted into various inboxes. Instead of displaying a count
of unread messages next to each inbox, the interface could color
each inbox in accordance with the importance of its unread or
unreplied messages, fading smoothly between colors as new messages
came in. It would be subtle enough not to distract you from a task,
but by glancing over at the colors, you could tell what you probably
should look at next.
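The color math behind such an interface is surprisingly simple. Here is a minimal sketch, assuming importance scores between 0 and 1 for unread messages and RGB colors; the particular base and “hot” colors and the easing rate are invented for illustration:

```python
def target_color(importances, base=(200, 200, 200), hot=(220, 60, 40)):
    """Map unread-message importance scores (0..1 each) to a color
    between a neutral base and an attention-drawing 'hot' color.
    The most important unread message dominates."""
    if not importances:
        return base
    w = max(importances)
    return tuple(round(b + w * (h - b)) for b, h in zip(base, hot))

def fade_step(current, target, rate=0.2):
    """Ease the displayed color a fixed fraction of the way toward
    the target, so changes appear as a smooth fade, not a jump."""
    return tuple(round(c + rate * (t - c)) for c, t in zip(current, target))
```

Calling fade_step once per animation frame moves the displayed color a fraction of the remaining distance toward the target, yielding the gentle, continuous transition described above rather than an abrupt change.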

A database-backed file and email system is also called for. Instead
of putting a file into a specified category (i.e., folder), files
simply exist and can be filed under multiple categories (a job lead
from your friend Kevin about a Linux company could simultaneously
be filed under “job leads”, “Kevin”, and “Linux” without having three
copies). Data could also be retrieved via a variety of mechanisms
(looking for how recently it was composed, key words in its title,
its size, its type, etc.). As an added bonus, if implemented as a
“journalling” filesystem and coupled with a bit of data synchronization
software, you could guarantee against losing documents and even
automatically have an infinite-level “undo” mechanism built in on the
filesystem layer, allowing you to version any files on your hard drive.
Modern hard drives are sufficiently large (80Gb hard drives now cost $300)
that there is no longer any feasible reason why textual, pictorial, and
even sound data should ever be deleted. Old copies should be automatically
kept around, but quietly so (e.g., they wouldn’t turn up in a general search or
clutter your views of which files existed, but you could still bring them
up explicitly).
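The core of such a system (many categories per document, with every version retained) fits in a few lines. The following toy model is purely illustrative: the class and method names are mine, and a real implementation would live at the filesystem layer rather than in Python dictionaries:

```python
class TagStore:
    """Toy model of a tag-based, versioning file store: one document
    can live under many categories, and every save keeps prior
    versions rather than overwriting them."""

    def __init__(self):
        self.docs = {}   # name -> list of versions, oldest first
        self.tags = {}   # tag  -> set of document names

    def save(self, name, content, tags=()):
        # Append a new version; old versions are quietly kept around.
        self.docs.setdefault(name, []).append(content)
        for t in tags:
            self.tags.setdefault(t, set()).add(name)

    def read(self, name, version=-1):
        # Default is the newest version; older ones remain retrievable.
        return self.docs[name][version]

    def find(self, tag):
        # Retrieval by category, with no duplicate copies of the file.
        return sorted(self.tags.get(tag, set()))
```

Using the job-lead example from above: saving a message once under “job leads”, “Kevin”, and “Linux” makes it findable under all three categories without three copies, and a later edit leaves the original version retrievable.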

The skills of television producers in drawing people’s attention to
information, actors, and scenes are particularly ingenious;
we programmers should learn from them and exploit the tricks they
have discovered. Just as many computer interfaces accidentally draw
attention to the wrong things in the wrong ways (as mentioned
above), we should
consider how we could purposefully bring the user’s attention to
certain things to guide and assist their focus. Apple had this in
mind when designing how alerts work in OS X – they pull, transparently,
out of the top of a window. The motion lets the user know to look
there without surprising them, and the transparent but attached nature
of the alert lets the user know that the alert is tied to that
particular window.

The main reason why these interface elements have not previously been
incorporated into common programs is inertia: with the current set of
APIs, it’s much easier to piece together a program with the usual
menus, square windows, and a feel consistent with other applications
(this is a good thing in itself) than to build something really
groundbreaking. (Sonique, with its next-generation UI, took years to
cobble together.)

Secondly, however, it’s only pretty recently that one could assume
that an end user’s computer would be fast enough to handle smooth
transforms, 3D effects, and fades in a complex environment. With
even the cheapest of computers shipping with high-speed 3D
accelerators these days, that limitation is disappearing. But we
haven’t really taken the time to question the basic parameters by
which we decided upon the current mode of interfacing with a
computer – a design that originated in the late ’70s and early ’80s
at Xerox PARC, and in turn grew out of work done at the Stanford
Research Institute (SRI) in the 1960s! Clearly (hopefully?) there is
a larger set of possibilities for human-computer interaction enabled
by faster and more advanced technologies. Now we only need to
harness them and conceive of the next wave of interfaces, itself to
be swept away in another 10-20 years when hardware permits yet
another rethinking of interaction.

CES Quickie: Xbox Preview

this quickie written for

Bill Gates opened the CES show floor on Friday morning with the second
keynote of the show. Bill brushed over the evolution of the computer and
his vision of “extreme entertainment.” During the speech, a pedestal with
a mysterious object atop was cloaked in black cloth. At the end of the
speech, Bill whipped back the cloth to reveal publicly for the first time
the Xbox’s actual design.

Those into gaming will know that the style of the box has no importance
relative to the quality of the hardware inside. So one of the Microsoft
presenters walked the crowd through two games in progress on a prototype
development kit. The first, “Munch’s Oddysee,” was originally
scheduled to be part of the Playstation2 launch, but Microsoft paid
its developer to release the game exclusively on the Xbox. The
demonstration was truly
impressive, showing extremely high quality animated figures in fully
textured, antialiased environments. Actions were smooth and believable.

The second demo, whose title and publisher escape me, showed a
little girl a few inches tall who wielded a giant hammer as big as
herself in an environment full of giant insects. The gamer has a number
of very creative ways to use the hammer to squash, wipe, and mutilate the
bugs. This demo was equally impressive for its high quality realtime
rendering: the most impressive part of the demo was the ending bit, where
a giant robot comes to life that mimics the girl’s actions. The demoer
made the robot jump too high and consequently the robot’s head went
crashing through the ceiling with a resounding clang, leaving the
robot’s legs dangling helplessly below.

While I wouldn’t say that the graphics are at the point where they
are realistic – that is to say, where screenshots could be mistaken
for real-life photos – they are approaching broadcast-quality
animation. The phrase
Bill used was “Toy Story-like quality” which really is pretty apt. I
figure in another five years, we’ll have movies with computer /
digitized actors that are mistakable for humans, and another three years
after that will see realtime, interactive “reality,” at which point the
video game industry will be able to reach whole new audiences by creating
completely believable simulations and scenarios.

The Xbox will be coming out this Fall, pretty much at the same time as
Nintendo’s successor to the Nintendo 64, the Game Cube. Nintendo barely
had a showing at all at CES and has been keeping the hype meter pretty
low on their upcoming box, previously codenamed “Dolphin,” whereas
Microsoft has been running their press engine full steam. While it looks
on paper like the Xbox is far superior to the Game Cube, Microsoft has
been posting numbers that are aggressively optimistic theoretical
maximums (not really representative of in-game performance) whereas
Nintendo has been publishing guaranteed in-game minimums. The real
performance figures are rather close, according to sources at IGN and

Both boxes are considerably faster than the Playstation2 and sport more
features, but that’s to be expected from a platform that will have had
over a year extra to develop. The PS2 has gained itself a reputation for
being devilishly difficult to program efficiently – this may steer
developers towards platforms that are easier to exploit but it also means
that the current set of PS2 games are nowhere near using the PS2’s full
capacity; much more powerful games may be coming out before long. Even if
Sony isn’t good at making the hardware trivial to code for, it does have
a positive reputation in terms of developer support, so we should see PS2
games being cranked out at an increasing rate. One of the aspects of the
PS2 that makes it so interesting is its ability to play games designed
for the original Playstation as well as DVDs and CDs. Consequently, it
can already play thousands of games today, which is impressive for a
newly-released platform.

Christmas 2001 will see an interesting, aggressive console market come to
maturation: the Xbox and Dolphin will be newly out, the Playstation2 will
have a full repertoire of games to play on, and the Dreamcast may also
remain a contender.

All four will offer some form of Internet connectivity with multiplayer
gaming, email, and web browsing. The PS2 will have acquired a hard drive
peripheral, and the Xbox will ship with one inside. It’s hard to say
where things will go from there or who will win the wars, but it’s
certain that Microsoft’s Xbox is to be taken very seriously – its
widespread support among game developers, Windows-based API, and
high-speed graphics architecture (courtesy of Nvidia Corporation) will offer
a compelling array of games and services. Nintendo’s Dolphin, if it is to
succeed, must be able to match Microsoft’s hype with a fully loaded
system shipping with a large array of compelling games. PS2, in the
interim, will take the lead as the home entertainment system of choice.

CES Quickie: Intel Keynote

this quickie written for

Craig Barrett, the President and CEO of Intel, opened CES on Friday night. The show began in a jubilant and exciting fashion with the Blue Man Group: a famous trio of US performers with blue-painted faces whose antics have amused hundreds of thousands on stage and in recent Pentium III ads. Barrett himself emerged from a quivering mass of jelly, removed from his head by the blue men to the great amusement and delight of the audience. The presentation struggled from there on, however: it tried very hard to be “hip” and “cool,” even going so far as to bring Sinbad, the famous US comedian, and several inner-city teenagers on stage, but Barrett just deadpanned his role.

Barrett’s unsurprising vision for Intel, driven home repeatedly and unabashedly during the course of his speech, was that a powerful PC will remain the center of an ever-expanding universe of peripherals. In fact, he went so far as to call just about everything non-PC a “peripheral.” According to Barrett, PDAs are computer peripherals. In this world view, the more peripherals you have, the more you are empowered to do and the more value your computer brings you – he even had a catchphrase for it: PC^x. X, of course, being the number of peripherals you possess.

And devices that have traditionally been independent and analog will be transformed into PC peripherals, in Intel’s view. Ultimately, just about everything in the home or office ends up, conceptually, a peripheral. Which is great for Intel, since centrally managing a houseful of devices and applications would certainly take a pretty powerful processor.

To Intel’s credit, however, this model is actually pretty feasible. Once a consumer has bought a top-of-the-line PC, it’d be too bad to let that compute power go to waste – leveraging this centralized power to enable simple, low-cost devices does allow for a low-cost but rich experience. Intel showed off their simple digital camera, microscope, MP3 player (newly announced at the show: it looked pretty cool with 128Mb of memory!), a chatpad (for instant messaging from the couch) and sound recorder (for kids), as well as RCA’s eBook. These devices can be designed simply even while incorporating advanced features by assuming a connection to a fast PC with advanced software and a rich user interface.

Intel also showcased their support for P2P, even demoing a video filesharing application of their own, called Popster, and discussing new applications that allow a company to exploit unused processing and storage space on its desktops. Such applications are (surprise!) heavily compute-intensive.

The show ended with a bang, literally, as the Blue Man Group reappeared and explosively showered the audience with paper rolls.

CES Quickie: MP3 Proliferation

this quickie written for

It was a scant two and a half years ago that the first piece of consumer MP3 hardware was released: a portable player called the MPMan was introduced at the First MP3 Summit in June 1998 by a small Korean company. Half a year later, Diamond Multimedia released their Rio player, validating the concept of consumer hardware for playing Internet audio. While there certainly has been an enormous rise in the number of MP3-enabled devices in the following two years, nothing could have prepared me for what I saw at CES 2001.

MP3 is everywhere. MP3 playback or encoding has been incorporated into every conceivable type of device, and then some. Nearly all of the products on the show floor either played MP3 or had upcoming versions that did.

There were the obvious MP3 players: Intel just released their 128Mb player and there were literally dozens of companies with CompactFlash or MMC-based players. Iomega had their new HipZip MP3 player that uses inexpensive 40Mb Clik disks to store music. Creative showcased their 6Gb hard drive-based Nomad player, while at least three other smaller companies also featured portable hard drive MP3 players (suspiciously, all of them featured 6Gb hard drives), one of which (Echo Mobile Music) even had an audio CD player built in: you could play from the hard drive, the CD, or, sans PC, encode the CD to the hard drive. Several standalone units also offered PC-free ways to enjoy the MP3 experience. Several companies featured portable CD players capable of playing MP3s on a data CD. These developments were fantastic, but not unexpected.

But what was unusual was the proliferation of MP3 into other devices. Harman/Kardon had an audio/visual receiver with MP3 decoding. Casio had an MP3 watch (although the idea of a watch that needs recharging every day seems odd). Ericsson has had an MP3 attachment for their cell phones in Europe for a while (at over $200!), but Sprint recently announced a relabeled Samsung phone with an integrated MP3 player (it pauses the music when there’s an incoming call). Quite a few hardware vendors had integrated functionality into their DVD players to allow CDs with MP3 files on them to be played back, including Samsung, Harman/Kardon, Arcam, and others. Several vendors were showcasing “sound servers”; once installed in a home, they can pipe music to various parts of the home on demand. All of these servers used the MP3 format to encode CDs for playback. Many stereo CD players and mini stereo units featured MP3 CD playback functionality. A handful of car players also supported MP3 CDs, most notably Clarion’s. The Diamond Rio car player (Diamond acquired the British company empeg, which makes the box) has a built-in 10-60Gb hard drive array full of MP3 files – you can even wirelessly fill it with songs from your computer while it’s sitting in your driveway! Quite a few Internet music clients were at the show as well, including Kerbango (whose unit will be shipping Any Day Now) and the newly-introduced Audioramp (whose Windows CE-based demo unit had apparently crashed), both of which allow you to tune into AM, FM, or Internet channels in a cute standalone unit.

But the product that really took the cake in terms of absurd MP3 integration was Polaroid’s inclusion of MP3 playback functionality in one of their $250 digital cameras, apparently thinking that there is a strong consumer need to listen to music on your camera. “But you can’t play a song at the same time as taking a picture,” the Polaroid representative warned, “just between pictures.”

Separately from specific consumer products, there were myriad companies selling MP3 encoding and decoding chipsets (including heavy hitters like Micronas and Texas Instruments), MP3 set-top box engineering reference designs, software architectures to enable the sale of MP3 content, MP3 kiosks, and more.

The clear message at CES 2001 is that MP3 and Internet audio are here to stay and are becoming an important part of everyday consumers’ lives. With such widespread industry support, 2001 will clearly see Internet audio made even easier to use, more fashionable, and more widely available.

CES Quickie: 2001 Roundup

this quickie written for

This year’s CES saw largely incremental improvements of existing technologies, with only a handful of breakthroughs and genuinely new categories of products. In fact, most products at CES seemed to be targeted at a handful of very specific categories.

One such category is the “set top box.” There is a concept of a singular box that sits atop your television and serves as your gateway to information and entertainment. This box can connect to the Internet and let you browse the web and check your email. It can play DVDs, CDs, and MP3s. It can play video games and record television shows for you. It seemed that an unnatural number of CES exhibitors were all shooting to be the singular provider of such a box.

Microsoft, via their Xbox, and Sony, with their Playstation2, are attempting to have this box be first and foremost a gaming machine but with set top-like functionality added on. Tivo, Replay, and UltimateTV (also owned by Microsoft!) take the Personal Video Recorder (PVR) approach: take the box that is already processing your television signal (to let you pause live TV, record every Simpsons episode, etc.) and has a nice user interface, and add on web browser and email capabilities, etc. Samsung and other high-end DVD manufacturers are looking to turn DVD players into set top boxes: since a DVD unit generally has a graphical interface, a remote, some amount of processing power, and audio output, adding MP3 CD player functionality is easy, and turning it into a web browsing box takes only the addition of an Ethernet port to the back. A new company called nuon has technology to enable extended functionality on DVD players; you can even get video games for nuon-enabled devices: they’re called “DVD Interactive.” Harman/Kardon, by licensing ZapMedia’s architecture, has souped up a CD/DVD player into a PVR/DVD/MP3/Web browser set top unit. Several other quiet startups, like Rearden Steel (founded by Steve Perlman, one of the primary architects of WebTV), are also attempting to put together a box that incorporates every function you could possibly need for entertainment and/or education while being slick, easy to use, and inexpensive. It’s a gargantuan task, but with so many companies throwing themselves at this problem, we’re sure to see some interesting solutions in the near future.

Another area where we saw a lot of companies clustering and trying to provide similar products is that of the residential gateway. The idea is simple: homes with multiple PCs need a way to easily share their Internet connection and need a cheap box to do it. A large percentage of PC sales these days are to homes that already have a computer, so the market for such devices is growing quickly.

While these products started out as scaled-down Ethernet bridges, additional functions were quickly added: firewalling (to protect automatically against Internet attackers), DHCP (to automatically assign IP addresses to computers), and NAT (to allow multiple computers to share one IP address). New gateways additionally enable home phoneline networking and wireless Ethernet in the home. Since this box is guaranteed to be connected to the Internet and to your home’s computers, it becomes an interesting place to put Internet services, such as web and file serving. Next-generation residential gateways will allow users to access files on their home network from abroad without the need for external servers. Also coming is integration with home automation units, allowing you, through a password-protected interface, to turn on the lights in your home from work, or to be emailed a video of the perpetrator in the case of an intrusion.

But even though most of the technology at CES was merely incremental and in many cases had a “me too” feel to it, there are sometimes improvements to a product line that cause a product in a crowded industry to cross a threshold from merely interesting into fantastic. A superb example of this is Samsung’s 240T 24″ TFT LCD. Many companies are making large, flat, thin displays in all sorts of form factors and resolutions. So the concept of having a nice, large, light LCD screen is not out of the ordinary. But Samsung’s monitor made me gasp. The 16:10, 1920 x 1200 screen that could handle analog TV, HDTV, digital PC, analog PC, and S-Video was just so outstandingly crisp, clear, bright, and vivid that I almost had a heart attack when I saw it. Then I learned that the unit was only $7000. (For comparison, other LCD TVs are often tens of thousands of dollars with much poorer screens.) I told the representative that if I had had a gun in my back pocket, I’d have held it to his head and walked out with the 240T under my arm. The representative agreed: he wanted one as badly as I did!

So in one sense, CES was disappointing for its paucity (though certainly not absence) of genuine innovation and for its clustering around generic visions of the future. But in another sense, the continued development and improvement of existing technologies may allow for entirely new classes of devices and consumer experiences, as shown by Samsung. Perhaps the market slowdown and disappointing Christmas sales have somewhat influenced the consumer electronics market for the worse and temporarily slowed the pace of innovation. It is worth noting that there were considerably fewer attendees and exhibitors at CES 2001 than at CES 2000. But the market will recover, and new electronics will continue to be invented. There will be a CES 2002, and it will probably be fantastic. So the future is bright, even if the present is a touch dimmer than usual.

Hire Me!

posted january 3, 2001
revised january 18, 2001

I’ve been working at home for the last two months on my own projects and doing a little consulting on the side. It’s been okay…

Actually it hasn’t been. It’s been awful. I am a team player. I love
working with other people. I come alive when on a project with others,
but when I’m on my own, I flounder.

You see, when you have 90% of your time structured, intellectually
meandering with the remaining 10% is a lot of fun, and can be very
productive. But when you’re meandering 100% of the time, you just get
lost, tired, depressed, and very, very bored. Which is about where I
am right now. I need to be utilized. I need to be part of a team,
organizing things, engineering, designing, thinking, laughing with
others. So I’ve begun a job hunt.

I’m not sure what my long term vision for the future is, but I’d like
to at some point start my own incubator, codenamed ‘Coceve.’ In line
with this, I’m going to need to find a good team of people and also
sharpen myself by being surrounded by brilliance. So I’m looking for
a fun, difficult engineering job within a reasonable commute from
San Jose, with a really good team, any size.

I program C, C++, Java, Perl, Lisp, FORTRAN, Visual Basic, PHP,
SQL, and just about anything else you want to throw at me. I do Linux,
have done MacOS, have written Win32 ditties, and have hacked on Solaris.
I write columns and analyses. I speak at conferences. I am your man.

If you’re working with a cool company, and you like what you
see on this site, here are some things to do: