Four Typical Post-Product Mistakes

You’ve built a great product. You’ve got a couple customers (okay, most of them are friends and/or free), a small but passionate team, and a product that demos well. Now you just need to turn it into a company!

MISTAKE: Waiting for resources

There’s always a good reason why that next step is elusive. You might be holding out for an investment round so you can really launch a big marketing campaign. Or hire that slick salesperson. Or become premium sponsors at that big conference that your key customers are going to. But you don’t have the money, so hey, what can you do?

ANSWER: Time to get creative!

The truth is you are always going to feel resource constrained. Even when your company is worth a hundred billion dollars! So you need to learn to hustle around these apparent barriers. Remember: necessity is the mother of invention. By being scrappy you’ll be forced to focus your time and attention on the handful of things that are going to make the biggest difference. I’ve often seen startups that just raised a big round spend huge amounts of time and money on things unlikely to improve the state of the company, like fancy launch parties.

Let’s say there’s a conference you’re sure would make a huge difference for your company, but you can’t afford the flight+hotel+registration. Can you go to TaskRabbit or Craigslist and spend $50 on someone local who can print out a bunch of flyers for you? Can you have them drop off unofficial welcome bags for guests at the hotels used by the conference? Can you submit a talk and get the conference to sponsor your flight?

A good entrepreneur always moves the company forward, no matter how challenging things seem. But keep in mind: just because you hustle doesn’t mean you should cut ethical corners. Making up fake reviews for your site to make it feel more vibrant could lead to major trust issues, but having a bunch of friends review products with you is fine.

MISTAKE: Over-technical product

Your super-smart engineers made sure that every possible field and flag in the database was exposed in the customer interface. They have some cool analytics packages and are using d3 to make pretty visualizations of the data’s trend lines. Every time you show it off you dazzle people! This product is really sophisticated.

Weirdly enough, you’re having trouble actually closing sales of this super-sophisticated service. Prospects are impressed but you just can’t seem to figure out what’s holding them back from signing up.

ANSWER: Compassion

You built a product targeted at small businesses but designed for engineers and data analysts. Do you really think a tie shop owner is going to have the time or training to slog through forty pages of numbers and acronyms to better understand where he should open his next store?

Think about the outcome that the end user wants and make it clear how your product or service will get them that. Make sure that everything your product does is focused on making sure they meet that success. This usually looks like showing less information, not more, and abstracting the controls to your database into actions that your customers understand.

Try sitting down with a new potential client who has never seen your software before and, without giving them any direction or coaching, have them complete a set of tasks. You’ll be really astonished and humbled to see all of those flourishes you put in just confuse and confound regular folks who haven’t spent months staring at these screens. Make sure you build what’s right for them.

MISTAKE: Selling on cost

You’re confident that you have an amazing product to bring to the market; the existing dominant player has a really crappy offering that costs four times more than what you’re planning on charging. Surely you will dominate this market given that your product is not only way better, it’s way cheaper?

ANSWER: Sell on value

If you’re selling on cost, you’re focused on how little money your customer will need to pay for your service/product and not how much value you’re going to be providing them. It’s much more inspiring for a business to be thinking about how much more money they could be making instead of how much less money they’ll be spending. Focus on making your customer more successful. If you can prove you’re growing their business you can capture a lot more of the value you’re providing them. If you’re just a cost center, you’re a commodity that’s part of a race to zero — some other startup will just undersell YOU next year. Don’t be a tool — be a partner to your customers.

This often can make sales easier as well — if you can make your sale a zero-risk proposition by only charging proportional to the new value you’re bringing the business, the customer has nothing to lose by trying your solution out.

MISTAKE: Rapid early expansion

You’ve closed a few dozen customers in one market — while you’re only just beginning, you’re really excited by the idea of rolling this offering out to many more places and applications to prove how large your potential is and maximize your growth. You’re sure that this expansion will help trigger investor interest and will help you recruit too.

ANSWER: Nail it, then scale it

If your strategy isn’t yet working at scale for one geography or industry, expansion is not your best next move. Find a niche that you can completely dominate; focusing on one set of people to serve will keep you honest about what’s working and what isn’t, and will help you iterate the fastest. (It should go without saying that wherever that geography is, that’s where you should be, too!) It will probably take you a number of failures and adjustments to really tune your model for your initial market. Once you’ve rocked a model and have near-saturated a market, it’s time to look at rolling it out in different geographies.

I know you want to prove that your idea is enormous, but if you try to “boil the ocean” before you have traction anywhere, you’re not going to get anywhere. Focus will pay you dividends.

David Weekly is a product manager at Facebook and an award-winning startup mentor. He founded Mexican.VC, the first Silicon Valley seed fund for Mexican tech companies (acquired by 500 Startups), and Drone.VC, the first drone fund. He enjoys helping build tech communities around the world and is the founding Director of Hacker Dojo, the world’s largest non-profit hackerspace. Follow him on Twitter, Facebook, LinkedIn, or AngelList.

HDMI 2.0, 4K@60fps, and UHD Content

Current display technology, using HDMI 1.4, can only do 4K at 30fps. This has rather limited the adoption of 4K screens since there has been no way to get high frame rate content all the way to the display – and of course there’s no way to do 4K in 3D. (This is beside the point that there’s precious little source 4K content and the solutions from e.g. Sony have been whimperingly pathetic, effectively equivalent to attaching a large USB stick full of preloaded movies to your $10,000 television.)

HDMI 2.0 changes this, nearly doubling the cable bandwidth from 10.2Gbps to 18Gbps (!) over the same cables, allowing for 4K at a full 60fps, which will from a technology perspective unlock the full potential of 4K as a medium, though there are still real content availability issues. We’ll see the first HDMI 2.0 TVs and projectors come out this year – Yamaha already has some relatively inexpensive (~$300+) receivers that incorporate 2.0 set to come out in a few months. This should give a gentle prod to content licensing for 4K and would hopefully imply a Chromecast/Roku/AppleTV trio capable of 60fps 4K by the end of the year. Sadly, this final technology push to enable 4K appears to have come too late for the major consoles, which have now locked and loaded with HDMI 1.4 and a fixed 1080p cap. But there could be a surprise here yet – Sony is selling 1.4 4K TVs with a promise of a field upgrade to 2.0 later this year, which implies a possibility of the PS4 getting a field upgrade to 2.0 as a nice present. (Though 4K games on the PS4 are unlikely.)
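A rough back-of-envelope calculation shows why 60fps 4K needed this bandwidth bump. (The 10/8 factor is HDMI’s standard TMDS 8b/10b line encoding; blanking intervals and audio are ignored here, which is why the final figure lands below, but near, the 18Gbps ceiling.)

```python
# Back-of-envelope: why 4K@60fps won't fit in HDMI 1.4's ~10Gbps.
width, height = 3840, 2160   # "4K" UHD resolution
fps = 60
bits_per_pixel = 24          # 8 bits each for R, G, B

raw_bps = width * height * fps * bits_per_pixel
print(raw_bps / 1e9)   # ~11.9 Gbps of raw pixel data alone

# TMDS encodes every 8 bits as 10 bits on the wire; add blanking
# intervals on top of this and you approach HDMI 2.0's 18 Gbps.
wire_bps = raw_bps * 10 / 8
print(wire_bps / 1e9)  # ~14.9 Gbps before blanking overhead
```

Run the same numbers at 30fps and you get roughly half of this, which is exactly why 4K30 squeaks through HDMI 1.4 but 4K60 does not.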

The South Koreans have been on a tear with HiDPI displays, pushing the price point of 4K displays down from many thousands to a few hundred dollars. (These displays often connect via DisplayPort only; DisplayPort 1.2 offers 18Gbps as well.) Combine that with OS X 10.9.3’s support for 4K 60fps (using a 30fps desktop monitor is really weird/awkward) and there will finally be a sensible 4K work environment by the end of the quarter. I predict a huge wave of 27″ & 30″ HiDPI monitor purchases by workplaces to improve productivity as workers can finally fit more information onto a panel without compromising on cost, quality, or framerate. (My hunch is that productivity drops off around 30″ or so as you start having a serious issue with information outside your field of view; I experienced this when I had a pair of 30″ monitors and would sometimes “miss” incoming messages because they were showing up too far away for me to see them! I ended up downsizing.)

So I think this year we’ll finally cross the hump around adoption of 4K, but there’s still this big hole around content. The first to market with a great HDMI 2.0 solution with gigE+802.11ac 3×3, hardware H.265 support, a solid local cache to help w/buffering, and lots of licensed content (partnership w/Netflix & YouTube?) will probably do quite well.

Life@FB: Last Day of Bootcamp

9am, Facebook Headquarters: Cafe 18. My laptop is open next to a plate of kiwi and mango. It’s Friday and my last day of “bootcamp” at Facebook. Every person in the engineering organization – VP to fresh-outta-college – has to go through four to six weeks of fixing real bugs from all kinds of nooks and crannies of the product. I’d never done any Android programming at all but yesterday I had to figure out how to add a new feature to what might be the world’s most popular Android application. I was up until 1am getting it put together and then submitted my code for review. So I’m a little groggy.

Tomorrow morning I embark for Seoul, where I’ll shortly be on stage in front of a few hundred developers explaining how to make their games more social. First, I’ll need to understand what the heck I’m talking about (eep). They really do throw you into the deep end here. Afterwards, I’m headed to Hong Kong, Taipei, and Manila to meet up with developers there. It’ll be very quick travel – I’m literally spending 20 hours in Manila, including sleep, but due to some lucky help from folks like Michelle Santos I’ll have a pretty packed itinerary to make best use of the time. And I’m packing some Ambien to catch some sleep on the plane.

I’m excited and terrified and perhaps feel a little out of my league but maybe that’s just right. Here goes.

Science and Religion

How do people come to believe a thing? They can believe a thing because of reason, or take something on faith.

The realm of reason is systems of falsifiable facts, which is to say facts that can be shown to be true or false with certain levels of accuracy. Any person (or system!) that can reason and observe outcomes can come to agreement about a system of beliefs resting upon tested and falsifiable facts. One cannot have wars over the value of pi; the facts will speak for themselves.

Things taken on faith by definition cannot be proven. Critically, this means that they cannot be disproven. A foolish, if typical, defense of religion rests on this lack of disprovability, treating it as a strength and not a weakness. While it’s true that one cannot prove that God does not exist, there is an infinite set of absurd beliefs that also cannot be disproven; for example, that the world was created six seconds ago exactly as it is now with, of course, your memory of the past being invented by a metaphysical entity. If one were to believe all things that were non-disprovable, they would find many would contradict each other – and without a system for proving or disproving which beliefs to hold and which to discard, they’d have to hold all of these impossibly conflicting beliefs at the same time, or discard an arbitrary set of them.

Since articles of faith are not arrived at by reason, the path to emotional or spiritual belief differs from person to person and consequently is difficult to transfer via discourse or objective demonstration. Non-disprovable beliefs cannot be constructively argued, though history clearly demonstrates that the futility of the matter has not kept people from attempting to argue religion.

Emotional beliefs often transfer with passion – when we see someone deeply enthralled with an idea, that excitement can become infectious. Likewise, when someone who we admire believes certain things we tend to want to think like they think and believe what they believe regardless of rational inspection of their beliefs. This is why the role of the charismatic preacher is so often important to a fast-growing religion and also why parental indoctrination is so critical for these beliefs.

Much of this is an understandable shortcut; it would be impossibly exhausting to personally verify all of the facts that one believes, so there are some areas in which we have to rely on the reason of others. The degree to which I can share a system of reasoning with another is the degree to which I can trust a commonality of judgement and expectation. This commonality is the basis for systems of trade and standardization. I contrast the value of a house sold to me that a shaman insisted the gods would protect from collapse with a house sold to me inspected by a licensed civil engineer. It is possible that the shaman is right and that there are gods actually defending my house from harm, but unless I share the same set of beliefs it is difficult for me to accept…and impossible for me to verify.

It is difficult to measure the forward progress or development of religious beliefs. While we can certainly observe changes and schisms in movements, there is no clear bar by which we can say that these changes constitute improvements. Indeed, it is a very common thing for a longstanding religious movement to attempt to “return to its roots” and explicitly revert to an earlier belief system. It would be hard to argue clearly for or against such a reversion without a metric.

In contrast, falsifiable ideas can through experiment be observed to be true or false with varying degrees of certainty. Once a result is examined and agreed upon, future experiments strive for additional precision of certainty or to test new ideas. This allows for the intergenerational accumulation of a body of knowledge of ideas along with data from experiments that variously illustrate their truth. Every generation can then know more than the last.

Not infrequently, as we improve our precision of measurement, we discover that an idea that was thought to be verified is actually not entirely correct – it was merely sufficiently correct to be observed as such by a less precise experiment. Hence the value in re-running old experiments when new equipment or techniques allow them to be run with greater precision. This explains why scientists may dig into an experiment with such relish when they expect to observe a certain value (with greater precision) even though an experiment to measure that value succeeded (with lower precision) just a few years ago.

Skepticism is bred in the public when, as a consequence of the above, they observe that scientists seem to believe one thing (when an experiment appears to show that a thing is true at a certain level of accuracy) and then later not (when a further experiment measuring at greater detail observes a subtly but importantly different result). Without taking into account that the precision of the data is increasing, it can feel like scientists are vacillating in the very same ways that religious movements vacillate. This doesn’t look like progress.

But the difference is that models are discarded principally because they have been falsified; the falsified model will not be returned to later. It was discarded for a better model that, in turn, may be discarded but only when a superior model in turn arrives. As a species, our understanding of the universe advances and each generation is made more knowledgeable, powerful, and aware.

This process may never end because we know that measurement with perfect precision is an unattainable goal. Consequently every generation can only hope for superior precision to the last in testing our ideas. Nor is there a finite number of ideas to be verified.

But what is the origin of these falsifiable ideas? How do we know what to test, and how? Some of these beliefs can be axiomatically built one upon the other, but many are arrived at through intuition, randomness, and/or accident. This means that half of science – the generation of hypotheses – is not itself arrived at through the scientific process. It is held accountable to logic while not stemming from it.

Logic, therefore, is the course by which we may direct the spirit’s passions towards progress. With only reason, we would be able to observe and test but never discover new ways to observe or ideas to test. With only emotion, we would have many conflicting ideas and no way to advance. With both, in balance, we can continue to develop our species with new, better models of how the world works.

How To Handle Recruiter Calls

Unsolicited calls from a tech recruiter are one of the banes of existence for a technologist with a LinkedIn and/or GitHub profile that has been anything close to meaningfully filled out. Or for a startup founder. Both sides get hammered with calls. And emails. All. The. Time. If you haven’t been on the receiving side of these, it may be difficult to explain why these calls are so infuriating. After all, isn’t it nice to hear that someone thinks you’re employable, or on the other side that someone has smart people who might be interested in working for you?

The thing is, most recruiters do not come from the field for which they are recruiting. They don’t know whether or not you are a good fit for a given job because they don’t know what a high-concurrency Erlang WebSocket specialist is. So they try to find those words on people’s resumes, even resumes that say “please don’t contact me”, and plow right ahead and call if they find them. Why? Contingency recruiters will typically earn 20% or more of a placed engineer’s first-year salary as their commission. So for, say, a $100k mid-level engineer, they would get paid $20,000!

These economics make them unstoppable: If they do 200 phone calls of half an hour each to find one engineering candidate willing to work with them and 200 phone calls of half an hour each to find one company willing to place that engineer, they are still earning $100/hour, less a dollar or two for their phone bill. Like spammers, recruiters are not charged money for wasting other people’s time, like those 199 engineers and 199 companies who were not a good fit. The possibility that you might be interested is just too tempting. So they call. And call. And call.
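The arithmetic above, spelled out with those same (representative, not measured) numbers:

```python
# The contingency-recruiter economics described above, spelled out.
commission = 0.20 * 100_000        # 20% of a $100k first-year salary
calls = 200 + 200                  # candidate-side + company-side calls
hours_on_phone = calls * 0.5       # half an hour per call
rate = commission / hours_on_phone
print(rate)  # → 100.0 (dollars per hour, less the phone bill)
```

Note what's missing from the equation: any cost at all for the 398 calls that went nowhere.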

It’s not atypical for a good engineer in a hot area – perhaps a senior iOS developer in Silicon Valley – to get several phone calls and emails a day. I’ve seen recruiters try calling with blocked numbers or from local area codes. And when they connect, the quality of the calls is, by definition, pretty terrible. A contract recruiter can’t name the company they’re hiring for, lest you just apply directly and circumvent the recruiter. So they’ll wax on with vague platitudes like “The Next Facebook”, “totally on fire right now”, and other equally meaningless terms without telling you what the company is or does.

On the employer side, it’s no better, with recruiters assuring you that they have an engineer who is “very experienced, great stuff” who is “very excited to work with your company” while providing no validatable details, lest you reach out to the candidate yourself directly. This is akin to playing telephone between two Russian speakers via someone who does not speak Russian. Neither party can meaningfully vet the other through a recruiter who does not understand the work the company performs nor what the candidate actually does for a living.

Now, while I am not a lawyer – and this doesn’t constitute legal advice – recruiters, in their reckless pursuit of placement, may overlook the law. They make unsolicited commercial calls to individuals’ mobile phones in likely violation of federal law. Many don’t check the Federal Do Not Call registry, a further and separate violation of federal law. And if they don’t respect requests to be removed from their call lists, as many fail to do, that is yet another violation of the law.

So what can you do about this?

  • Make a note of the time and date of the recruiter’s phone call. Make sure you record their caller ID and write down any information concerning their firm’s name and the caller’s name. During every call, clearly communicate that you would like to be removed from the recruiter’s call list. Keep a record somewhere, like Google Docs.
  • If you’re not already, join the Do Not Call Registry. If you are already in DNCR, any telemarketing call you receive can be reported to the federal registry as a violation.
  • Additionally, whether or not you are on the Do Not Call list, any unsolicited commercial call to your mobile device may be reported to the FCC electronically.
  • If you want to get really aggressive, the Telephone Consumer Protection Act of 1991 allows you to sue for up to $1,500 per violation, which you could file for in small claims court. There are some fun stories of people finding success in this approach.

There are good guides out there, like How to Sue Telemarketers in Small Claims Court. Be bold! If only a small fraction of recipients of unwanted recruiter calls “strike back”, it will make it substantially less economically desirable for recruiters to mass-call without checking Do-Not-Call lists.

If you’re less of the lawsuit-wielding type but still want to do the world some good, instead of filing claims against the unwanted recruiter, you can point them at places like the Hacker Dojo‘s page explaining the right way to find awesome developer talent. These sponsorships have been critical in helping us build the world’s largest non-profit hackerspace. (You should come by! We’re open 24/7!)

Epilogue: Lest I be perceived as hating on a whole industry uniformly, there are some good recruiters out there who have a good understanding of the market for which they are hiring and know how to not harass potential recruits or businesses. These recruiters will often work in a full-time position at a company for a period of time to help them spool up a team. They reach out through social networks with carefully researched individually-specific messages showing thoughtfulness and an understanding of a match between the candidate and the company. These people are great, and they convert really well. Treasure them.

Discuss on Hacker News and season to taste.

An Overview of the Web

Written as a Thanksgiving present for Kate Compton


A device you access the Internet with is a computer. Even if you think of it as your phone or your tablet or your laptop or your desktop, all of these things are actually computers. Your computer, when connecting to the Internet, is called a “client”. Your client, running a program that acts on your behalf (called a “user agent”), sends messages to another kind of computer called a “server”, which does its best to answer your client’s questions. The language the user agent uses to talk to the server is called a “protocol”. When you’re requesting a web page, your user agent is your web browser – such as Firefox, Chrome, Safari, or Internet Explorer – and this user agent uses the HyperText Transfer Protocol (HTTP) to ask questions of a “web” server and receive a response. There are other protocols, like the File Transfer Protocol (FTP), the Simple Mail Transfer Protocol (SMTP, used for email), Skype’s own protocol, and many others. Many downloaded multiplayer video games use their own special protocols to talk to game servers. Let’s dig in a little more into what happens when you fetch a web page.

Your computer is connected to the Internet most commonly via WiFi or the cell network. Your computer has an antenna inside that communicates with a nearby special computer called a “base station” that also has a similar kind of antenna in it. Your computer says hello to the base station and asks for an Internet Protocol (IP) address using a protocol called DHCP; the base station responds with an IP address assignment and some other information, including a Domain Name System (DNS) address. Your computer is now ready to try and send requests to a server over the Internet.

When you type an address into your web browser, your computer first needs to look up the IP address for that server so it can send the request. To do this, it sends a DNS request to the DNS server returned by the base station. That DNS server looks up which server is responsible for all of .COM, asks that server who’s responsible for the domain you typed, then asks that server for the address of the specific host. Having obtained the answer, it responds to your client with the server’s IP address. Most IP addresses look like this: four numbers between 0 and 255. You can also force your client’s operating system to use a particular name server: OpenDNS and Google both offer high-quality free DNS resolution to the public, which is often much better than you’d otherwise get and allows for some protection against malware and phishing.
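The delegation walk described above can be sketched as a toy resolver over an in-memory tree of made-up zone data. Real DNS asks real servers over UDP port 53; here each “server” is just a nested dict, and the host name and address are illustrative ( is a reserved documentation address):

```python
# A toy model of DNS delegation: ask the root who handles .COM, ask
# .COM who handles the domain, then ask the domain's name server for
# the host's address. Each "server" here is a level of a nested dict.
ROOT = {
    "com": {                       # the .COM registry
        "example": {               # example.com's name servers
            "www": ""   # illustrative documentation address
        }
    }
}

def resolve(name, zone=ROOT):
    """Walk the labels right-to-left: 'www.example.com' -> com, example, www."""
    node = zone
    for label in reversed(name.split(".")):
        node = node[label]         # "ask" the next server down the chain
    return node

print(resolve("www.example.com"))  # → ""
```

The real protocol adds caching, timeouts, and record types, but the right-to-left walk from the root down is exactly this shape.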

(SIDEBAR: There is an updated version of IP called IPv6 that uses sixteen numbers, but it’s not very broadly used yet – because we’ve taken so long to adopt IPv6, we’ve basically run out of IP addresses in the current scheme, IPv4.)

The server needs to know what protocol we are trying to use to speak to it, so we pick a “port number” that indicates that we’d like to talk to the server over HTTP. HTTP uses port 80. (That number is somewhat arbitrary but is assigned by an international committee on a per-protocol basis.) HTTP sits on top of a lower-level protocol that makes sure that information arrives in the proper order and can recover from interruption – this lower-level protocol is called the Transmission Control Protocol, or TCP. TCP uses a “three way handshake” to make sure that the client and server are actually talking to each other before any higher level communication occurs. It starts with the client sending a hello to the server’s IP and port – this greeting is, for historical reasons, called a synchronization packet, or SYN. If the server is reachable and is running a program that is listening on the given port, the server will attempt to respond to the client with an acknowledgement, called a SYN-ACK. Finally, the client lets the server know that the SYN-ACK was received successfully with an ACK. At this point, the client and the server are now ready to begin communicating and the connection is said to be “open”.

In HTTP, the client speaks first (this is not the case with all protocols!). The client starts with saying what kind of request it is making, the request itself, the version of HTTP it is trying to speak, and several other request headers. In some cases, such as after submitting a form or when uploading a file, there’s also data attached after the headers. These request headers include the name of the user agent, any cookies associated with that website, and other information.

The server then issues an HTTP response starting with a response code – a three digit number that indicates success, a request for a redirect, or different kinds of errors. You may have seen “404 errors” on the web before – that is the code to indicate that a resource could not be found on the server. 200 is the code that indicates that the request was successful and the response will include the result. (There are many other codes.) The server response also has headers and then data.
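Here’s roughly what those two messages look like as bytes on the wire, sketched as a pair of small helper functions. (The header values and the canned 404 response are invented for illustration; in real life these bytes travel over the TCP connection opened by the handshake described earlier.)

```python
# What the client and server actually say to each other over HTTP.
def build_request(host, path="/"):
    """Compose a minimal HTTP/1.1 GET: request line, headers, blank line."""
    return (
        f"GET {path} HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "User-Agent: toy-client/0.1\r\n"
        "Connection: keep-alive\r\n"
        "\r\n"                      # blank line marks the end of the headers
    ).encode()

def parse_status(response_bytes):
    """Pull the three-digit response code out of the server's status line."""
    status_line = response_bytes.split(b"\r\n", 1)[0]  # b"HTTP/1.1 404 Not Found"
    return int(status_line.split()[1])

canned = b"HTTP/1.1 404 Not Found\r\nContent-Length: 0\r\n\r\n"
print(parse_status(canned))  # → 404
```

Note the `Connection: keep-alive` header: it signals the reuse of the open connection for further requests, which is exactly the behavior described next.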

To save time, the client is then free to make another request to the server over this connection without having to complete another TCP three way handshake. This is called “keep alive”. If, after a certain period of time, the client hasn’t yet made another request, the server will close the connection (by sending a TCP FIN packet to the client).



Now let’s take a look at that program that runs on the server that listens for HTTP requests. The server must be able to piece together the request, figure out what to do, and transmit a response. If the request is for a file, say an image or logo, this is pretty straightforward – we check to see whether or not the file exists. If the file doesn’t exist, we send the client a “404” and we’re done. If the file does exist, we send the client a “200”, the length and type of the file, and then the file itself. This is called a “static resource” – anyone who requests that image will get the same image. More difficult are responses that are different depending on who asked – for instance, the same address results in different information being displayed depending on whether you or your friend is currently logged in. Some code logic separate from the basic mechanics of a web server needs to evaluate “hey, is this person currently logged in, is it valid, and what information am I going to need to show to them?” This code is called a “web application”. Web applications can be written in many different programming languages, but it is popular to write them in environments where it’s harder to write bugs that can crash your server. (Server crashes can not only make your service unavailable, they can sometimes be used to break into your server and steal information.) This is called a “managed” environment.
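That 200-vs-404 decision for static resources can be sketched as one tiny function, using a dict as a stand-in for the server’s document root (the paths and file contents here are invented):

```python
# The static-resource logic described above, as a pure function:
# map a requested path to a (status code, body) pair.
def serve_static(path, files):
    if path in files:
        return 200, files[path]    # found: success code plus the bytes
    return 404, b"Not Found"       # missing: the famous 404

# An in-memory "filesystem" standing in for the document root.
SITE = {"/logo.png": b"\x89PNG...", "/index.html": b"<h1>hi</h1>"}

print(serve_static("/index.html", SITE))  # → (200, b'<h1>hi</h1>')
print(serve_static("/missing", SITE)[0])  # → 404
```

A web application is what you get when this lookup is replaced by logic that inspects cookies, sessions, and databases before deciding what bytes to send back.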

People get into religious wars about what server programming language is best to use, so suffice it to say there are several choices, each with their attendant pros and cons. In my experience, there are three general camps: the first is .NET programmers, who build on a framework called ASP.NET, usually writing in a language called C# (pronounced “See Sharp”). C# in particular is viewed as a very nicely designed language, but it only runs well on Microsoft servers, which in turn more or less require all of your servers to be running on Microsoft. This is not very popular in Silicon Valley.

The second camp is Java programmers. Since Java has been around for over 15 years now, there’s a lot of history and a big community around the language, with many different and sophisticated techniques for programming server applications. Culturally, however, it’s rare to see hip startups using Java – most of the Java code I see these days is at larger and enterprise-oriented companies. Part of this is that there has been time to develop careful engineering methodologies for building and testing Java code using large teams.

Finally, there is the camp of those who are using interpreted server languages, such as PHP, Ruby, Python, or Javascript. I’ll break each of these sub-camps down, though I should note that these camps share more in common with each other than they do with Java or .NET.

A decade ago, a language called PHP was pretty popular, and it is still in very wide use. Facebook, Wikipedia, Yahoo, and Zynga use PHP to write most of their server code. There are a lot of people with PHP programming experience, so it’s pretty easy to hire them or find code examples. That said, there are many people who really dislike PHP due to inconsistencies within the language (which grew very organically from cramming many different libraries together) and find it inelegant and difficult to use in very large projects.

A very clean and sophisticated language called Ruby was created way back in 1995 by a Japanese programmer, but it didn’t become very popular until 2005, when a Chicago design agency published as open source a clever way of writing clean, powerful web applications in Ruby. They called this set of code and libraries “Rails”. (A combination of code and libraries that offer a way to program an application is called a “framework”.) Rails introduced a number of concepts that make it fun and easy to program. Many of these concepts have since been copied into other languages and frameworks.

There is another very nice programming language called Python that is very popular inside Google. It is a nice mix of being relatively easy to learn but sophisticated at the same time – many of the smartest nerds I know prefer to write their most interesting code in Python.

Finally, it has become very popular in the last two years to write server applications in Javascript, using a framework called Node.js. Despite the name, Javascript is a totally different language from Java – it has no relationship to Java at all! Code for Node.js is written in a special way that keeps the server from twiddling its thumbs, needlessly waiting on slow operations (called “blocking”). This is very different from how applications in many other environments work, where it is very difficult to avoid accidentally blocking. As a result, it’s possible to write code that serves tens of thousands of people at the same time on a single machine, whereas that is hard to do with e.g. PHP.
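A minimal sketch of that non-blocking style, with a setTimeout standing in for a slow database call (all the names here are illustrative):

```javascript
// Instead of waiting ("blocking") for slow work to finish, you hand the
// runtime a callback and keep going. The server stays free to handle more
// requests while the slow work happens in the background.
const order = [];

function slowLookup(callback) {
  // Simulate a slow database query; control returns to us immediately.
  setTimeout(() => callback("some data"), 10);
}

order.push("request received");
slowLookup((data) => order.push("lookup finished: " + data));
order.push("free to handle the next request");

// At this point the lookup hasn't finished yet, but we were never blocked.
console.log(order);
```

The third entry only shows up once the timer fires; everything else runs without waiting for it.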



Once you have written your application, you generally need somewhere to store data. If people can sign up for your service, you’ll need to save each person’s username, password, profile picture, and basic information. The place where you keep this information is called a “database”. It used to be very popular to use one particular kind of database called a “relational database”. Almost everyone in the startup community used a database called MySQL, though some (like Instagram!) used another one called Postgres; people at larger companies used relational databases from IBM (DB2), Oracle, or Microsoft (SQL Server). There’s also a very tiny SQL database called SQLite that’s easy to embed inside other programs and is popularly used for very lightweight needs.

There is a Structured Query Language (SQL) that one uses to talk to a relational database. You can INSERT information into the database, UPDATE or DELETE it, or, if you’re looking to ask questions, SELECT information from it. Information is stored in TABLEs that you can think of as Excel sheets – there are labeled columns and any number of rows of information. You SELECT what columns you want to fetch (or * for “all of them”) FROM which table and then WHERE certain criteria are met. Let’s say you wanted a list of the names and email addresses of all the thirtysomethings using your service so you could send them an email about how to avoid getting gray hair. You’d “SELECT name, email FROM users WHERE age >= 30 AND age < 40”. Fun fact: the airport in San Carlos, CA – just south of Oracle – has the airport code SQL.
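You can mimic that query in plain Javascript to see what the database is conceptually doing (the sample rows here are made up):

```javascript
// A TABLE is conceptually just rows of labeled columns.
const users = [
  { name: "Ada",   email: "ada@example.com",   age: 36 },
  { name: "Linus", email: "linus@example.com", age: 22 },
  { name: "Grace", email: "grace@example.com", age: 31 },
];

// SELECT name, email FROM users WHERE age >= 30 AND age < 40
const thirtysomethings = users
  .filter((u) => u.age >= 30 && u.age < 40)          // the WHERE clause
  .map((u) => ({ name: u.name, email: u.email }));   // the SELECTed columns
```

Of course, a real database does this over millions of rows with indexes so it doesn’t have to scan everything, but the shape of the operation is the same.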

There are other ways to store and manage information, however. Sometimes you don’t need something as fancy as a relational database. For instance, you might need to check whether or not a user is currently logged in – and you need to be able to check this quickly or every page on your site will be slow. You might store a “sessionID” and when it is “validUntil”. When a client presents you with a sessionID, you just check whether you know about the sessionID and whether it’s still valid. If it isn’t, you send the user to a login screen. Because you want to be able to check this information very quickly, it can be a good idea to store it in memory without writing it to a hard drive – memory is often thousands of times faster than a hard drive. Of course, if the power goes out, everyone will need to log in again after the server reboots, but that is not a really catastrophic failure. This kind of setup is served by an “in-memory key-value store” such as memcached or Redis. A database like MySQL spends a lot of its time “parsing” SQL, but a simple database like memcached can use a much simpler protocol and therefore run much faster than talking to MySQL via SQL. Consequently, databases that use simpler protocols than SQL are popularly called “NoSQL”, which also implies that the database is likely not relational.
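The session check described above might be sketched like this, with a plain Map standing in for an in-memory key-value store like memcached or Redis (the function and field names are illustrative):

```javascript
// Key: sessionID. Value: when the session stops being valid.
const sessions = new Map();

function login(sessionID, ttlMs, now = Date.now()) {
  sessions.set(sessionID, { validUntil: now + ttlMs });
}

function isLoggedIn(sessionID, now = Date.now()) {
  const session = sessions.get(sessionID);
  // Unknown or expired session? Send the user to the login screen.
  return session !== undefined && now < session.validUntil;
}
```

If the server reboots, the Map simply starts out empty again and everyone logs back in – the same non-catastrophic failure mode described above.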

Also popular are “document oriented databases”, which have much less rigid definitions around how the data should be stored and organized. MongoDB is one of the most popular of these, though CouchDB and Cassandra are also in common use. The kind of flexibility afforded by document oriented databases can make it easy to rapidly develop sophisticated applications, though there have been complaints about how some of these databases perform under very heavy load (e.g. if your website suddenly becomes very popular).



All of the above was written assuming that you are running your own server. This used to be pretty common to do. The most popular kinds of servers look kind of like extra large pizza boxes. They are exactly 19″ wide, a little more than an inch tall, and quite long. They have this funny shape so you can stack a lot of them on top of each other, bolting them to vertical rails 19″ apart. You’d put a rack full of servers in a place with a really fast Internet connection where other people would also have racks full of servers. Since you are all putting your servers in the same place, these places are called “colocation centers”. If you are a really enormous Internet company, you need so many servers that it makes sense to build your own facility, or “datacenter”. Apple, Google, Microsoft, and Facebook have all built their own datacenters, generally where electricity is cheap and plentiful, like near hydroelectric facilities.

But it’s hard to manage your own servers, particularly if you suddenly need a lot of them. So a few companies let you quickly get access to new servers, use them for a bit, and then stop using them, paying only by the server-hour. This ability to rapidly scale the amount of computing power you need up and down is called “elastic computing”. Amazon has the most popular cloud offering, but many other companies offer elastic compute services, including Microsoft and Rackspace.

These elastic compute services are still quite low-level, however. You usually start off with what appears to be a very bare operating system installation and have to install all of the programming languages, modules, and environments you are going to need to actually run your application. While in reality there are many of these “virtual machines” running on a single server, a special operating system called a “hypervisor” manages running all these operating systems and makes sure that they can’t see what the others are doing. You’ll generally get a randomly assigned IP address for your instance – if you want to build a service that’s exposed on the web, you’ll probably have to register at least one static IP and point the name you want at that IP. This usually costs a very small amount extra.

It’s surprisingly hard to usefully add dozens of servers to speed up your service. If an application is performing slowly because it is writing to a database a lot, adding more computing power is not going to help it. Surprisingly, even adding more database servers won’t necessarily help: if you’re bottlenecked on writes and all the servers need to stay up to date, they’re all still going to be bottlenecked on writes. You need to “partition” the problem, so that some writes go to one database server and others go to another. Figuring out how to smoothly partition data in a way that lets you easily add and remove servers to make your database more or less powerful is a bit of an art. As a quick sidebar: because these are intellectually exciting challenges, many people immediately want to jump into figuring out how to make an application “scalable” without first ensuring they are building a popular application – i.e. one that will actually need to scale. An old saying is that “premature optimization is the root of all evil”. Instead, scale your solution as the need arises. Two great reads on this are the LiveJournal presentation about how Brad Fitzpatrick scaled LiveJournal from one script run on a shared server to a sophisticated multirole cluster (building now-vital parts of Internet infrastructure like memcached and MogileFS in the process), and the Instagram presentation about how two guys with no backend experience built a service used by tens of millions without hiring very many people.
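A toy sketch of partitioning: route each user’s writes to one of several database servers based on a hash of the user’s ID. The server names here are made up, and real systems use more careful schemes (like consistent hashing) so machines can be added and removed smoothly:

```javascript
// Our pool of database servers ("shards"); names are illustrative.
const shards = ["db-1", "db-2", "db-3"];

function shardFor(userID) {
  // A simple string hash: mix in each character, keep it non-negative,
  // then pick a server by modulo. The same userID always maps to the
  // same server, so reads know where to find their data.
  let hash = 0;
  for (const ch of userID) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  return shards[hash % shards.length];
}
```

Note the weakness that makes this a toy: changing the number of shards remaps almost every user, which is exactly the problem consistent hashing exists to soften.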

Some services like Heroku and CloudFoundry take care of some aspects of scaling for you, helping make sure that your operating system and software are up to date and secure and that you can seamlessly “dial up” and “dial down” computing resources without having to be a whiz. Naturally, these services are more expensive – many themselves run atop elastic compute services like Amazon’s.



Now back to the client. We’ve got our data from the server that we asked for…what now? On the web, the data that we’re expecting to get back for a web page is a text document formatted as HyperText Markup Language. On most web browsers you can “View Source” as easily as right-clicking on a web page to read through the document that the server returned to you. This document contains a header and the body of the content. By default, whatever is in the body of the document will be displayed in your web browser as text. With HTML you can “tag” certain chunks of text. Tags have a beginning and an end. If a web browser has no idea what a given tag means, it will just ignore it. Tags are generally structural (here is a new paragraph), images (display a picture that you should get from this location), stylistic (make the following words bold), or script (run the following Javascript program in the browser).

  <html>
    <head>
      <title>A Web Page!</title>
    </head>
    <body>
      <p>This is a <b>web page</b>, wow.</p>
    </body>
  </html>

The web browser parses the document “tree” (tags can be nested inside each other) and all of its tags and stores it in an in-memory database of its own, called the Document Object Model (DOM). The browser then figures out how to style the information in the DOM and renders it to the screen. Browsers also know how to run Javascript code given to them by the server. This code runs in a very limited environment – for example, it can’t look at what’s on your hard drive, and it stops running as soon as you close the web page. Modern web browsers have competed vigorously on how quickly they can run Javascript code, and the result has been an astonishing improvement in only a few years. Very sophisticated programs can now be run in the browser that would have been unthinkable only five years ago. Browsers continue to rapidly grow more sophisticated, in the last two years allowing access to web cameras, phone accelerometers, GPS, and 3D hardware. The code itself can be included inline in the HTML using the <script> tag, and you can also include whole scripts fetched from other locations.

Javascript code running in the browser can create new connections back to the server to ask for updates or to let the server know when the user has done something, like clicking a ‘Like’ button. If new information is available, the script can add it to the DOM, which immediately results in a re-render and the user seeing new information on their screen. This ability for code running in a web browser to communicate with a server without needing to change web pages (called “AJAX”) revolutionized web application design circa 2005.

Javascript is an unusually changeable language. You can change how ALL objects act, for instance, or just one particular object. As a result, some libraries so fundamentally change how Javascript works that after including the library the way you program is almost like using a different language. One of the most popular client Javascript libraries is called jQuery, and it very much changes the language in ways that make it easy to perform powerful DOM queries, manipulations, and AJAX requests. Code that would have been lengthy and error-prone in “pure Javascript” can often be done simply and concisely with jQuery.

Finally, the browser needs assistance in figuring out how to render the DOM – what font should be used for the text? How much whitespace should precede a paragraph? What color should the background be? While browsers have defaults for all of these, a web page can provide a so-called “Cascading Style Sheet” (CSS) to inform these rendering decisions. CSS is a text language of its own, different from HTML and Javascript, that specifies parts of the DOM to match and the appropriate rendering rules to use for those matching parts. More specific rules will override less specific rules. CSS has also continued to get increasingly sophisticated, now including animated transitions between parts of the page and 3D actions.
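A tiny stylesheet sketch of those ideas – selectors match parts of the DOM, and the more specific rule wins (the class name here is made up):

```css
/* Every paragraph gets black text... */
p { color: black; }

/* ...but this more specific rule overrides it for "warning" paragraphs,
   i.e. any <p class="warning"> in the DOM. */
p.warning { color: red; }
```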

An old adage is to “separate code, content, and layout” for cleanliness. The modern web allows us to do this by separating Javascript (code), the DOM (content), and CSS (layout).



One of the reasons why Macs are so popular in the Silicon Valley programming community is that under the hood they are running a kind of operating system called Unix, which is the same kind of operating system most servers run. You can open up the Terminal application and you’ve got a real, honest-to-goodness command line! This means that you can run the very same software that would be running on your servers right there on your lap, and it has become a very common way for people to code. That way you can write your server code very quickly, without needing to be connected to the Internet at all – once you are confident that the code changes you’ve made are good, you “commit” those changes to your repository and “push” the new code to your server(s).

It’s been best practice for some time to use a code revision control system, both to make sure that if you screw up you can go back to a version that worked, and to allow many people to work on the same set of code at the same time. While many free and commercial offerings for revision control are in use, these days a system called “git”, created by the inventor of Linux to help manage the development of Linux, has become king of revision control systems. An unaffiliated commercial web service called GitHub makes it easy to share the code you are working on with others and to store a copy of your code as you work on it. GitHub is so popular that it’s common for companies to look up engineering applicants’ GitHub accounts to review their work.



If you want to learn to program for the web, you should probably find a web host that works for you and learn a server programming language, HTML, CSS, and a client programming language. Given that Javascript is basically your only choice of client programming language, I might personally suggest node.js as your server language, since that way you’ll only have to learn one language. This overview is probably not enough to get started, but it should hopefully give you enough of a feeling for how the whole thing works that you can orient yourself to the materials you’ll need!

See further discussion on Hacker News.

One month ago, we started working on our newest idea, connecting extended families through sharing. We’re now proud to announce the name for our endeavor:

Ohana is Hawaiian for “family”. Indeed, anyone who’s watched Lilo & Stitch will be familiar with the concept. We want to make it easier for geographically spread-out extended families to be closer together by sharing photos, videos, updates, and gifts with each other.

We’re about to enter extreme early alpha in the coming week or so. If you’d like to try it out, please tweet at us! 🙂

Third Time’s The Charm!

As posted on the Gaston Labs Tumblr:

After firing our first idea two months ago, we decided to start exploring an enormous opportunity in the finance space: fixing the checkout problem. You see, in the United States, we have enormous credit card penetration (over 300%, meaning the average American has more than three credit cards), so it’s no big deal to ask someone to enter a credit/debit card at checkout. Even for those without credit cards or even bank accounts, services like GreenDot exist, letting you stroll into 7-Eleven, plonk down some cash, and walk out with a prepaid debit card that works online.

But it’s not that easy in other geographies. As a founder of Mexican.VC, Silicon Valley’s first discovery fund for Mexican Internet entrepreneurs, I got to know that market much better. And with less than 25% credit card penetration rate, checkout is a serious issue. Especially since banks issue customers Visa Electron debit cards which mostly don’t work online. Coupled with low PayPal adoption, Mexican consumers complete their web purchase experience by printing a receipt, walking to their local OXXO (basically, 7-Eleven) and paying in-person. If you believe that e-tailing is going to be a popular global phenomenon, you probably believe that consumers would find an easier checkout experience appealing.

So Nathan and I dug in and started researching, neither of us having a background in payments. We attended The Future of Money and Technology Conference. We sat down with Greg Kidd who helped fund Square and worked at the Federal Reserve, and Gene Hoffman of Vindicia who walked us through how Visa actually works. We bought a small library worth of finance and banking books. We met with TurkCell to learn how mobile banking works in Turkey, learned how Mokipay is rolling out NFC payments in Lithuania, and talked with SingTel about payment services across Southeast Asia. We learned a lot.

And the conclusion we came to was that the right solution would probably be something like rolling out disconnected debit to the Mexican market, issuing “real” Visa debit cards that clear in realtime against a user’s existing bank account. The only issue is that to do this, we’d need to partner tightly with a local bank, acquire a bank, or create a bank.

It became clear that most of the issues in this arena are not of a technical nature but are relationship and regulatory driven. Nathan and I are a pair of nerds, not bankers. We decided that while there’s huge potential there, we might not be the right people to tackle this market. So for now we’ve decided to table working on this further.

One interesting possibility did jump out, however, so here’s a freebie idea to throw out to an aspiring entrepreneur: privacy laws generally prevent telcos from handing over customer data directly to banks to assess creditworthiness. However, there’s apparently not much that would prevent the telcos from handing the data to a third party that could munge those records into a credit score that could then be sold to the banks. Coupled with social media data, this could enable personal credit rollout in geographies where credit has been hampered by a lack of risk data (i.e. there are no longstanding credit agencies).

Three weeks ago, Nathan and I started digging into our third idea, connecting extended families by letting them share and gift experiences – letting you be the “Good Uncle” or “Good Auntie” you’ve wanted to be. Did you remember your nephew’s birthday is coming up? (Know what to get a six year old who likes rock music?) Wish you could chip in to their college fund with a click? With our platform, you can give a perfect gift every time. AND get adorable “thank you” pictures.

This, we actually know how to build, though it’s plenty clear we need to hire a designer. 🙂 (Know someone? Please write me!)

Stay tuned!

Four Archetypes Startups Need To Succeed

Much has been said about the classic “Hacker + Hustler” dynamic, but I think a lot of the discussion misses the subtlety of what both bring to the table, as well as some of the other critical archetypes that need to be filled in a successful startup, like the Designer and the Operator. I don’t think people need to be pigeonholed into a given role – everyone will find they resonate to varying degrees with each role – but if a team lacks competence in a critical type, they’ll likely need to fill it quickly to succeed. Here’s my take on the core personalities needed for success:

The Hacker

Look, if you’re building a technology business, you are going to need a technologist – and here I mean someone who loves learning and building technology. You can learn how to code. It’s actually going to be pretty important that you have some grasp of coding regardless of your particular role at a company, so you should probably learn to code this year. Napster was Shawn Fanning’s first Windows program – he was teaching himself how to code on his uncle’s couch, so the early betas had lots of atrocious bugs. The company didn’t need a longstanding Windows expert to put the tech together; it needed someone who was willing to put in the elbow grease to figure out how to do it. In other words, if you’re looking for a technical cofounder, consider becoming one. It’s just too hard to find random technical people who don’t know you, are highly competent, and are happy to work for no pay and very little equity on your idea. (Surprise!)

The worst interview question I was ever asked was at a tech job fair in college; a recruiter enquired whether I was the sort of person who loved being locked in a dark closet for days on end with pizza shoved under the door. Seriously. For non-technologists, the important thing to recognize about the Hacker role is that, when deployed well, a Hacker is not just a code monkey who can take specifications and implement them like some kind of digital bricklayer. They can shape the product to do things that didn’t even occur to you to ask for because you didn’t realize they were possible. This is why it is critically important for your Hacker(s) to understand what the end user is actually trying to do. Otherwise you will get a very elegant but useless system.

A great example of this is PBworks’ CTO, Brian Kirchoff, who two years ago dropped drag-and-drop file uploading with inline image support into our editor one afternoon because he thought it would be cool. Nobody would have considered asking for this, because the assumption would have been that it was not possible in a web browser – but Brian knew it was not only possible but a Great Idea. So he did it. Brian’s a great Hacker.

The Hustler

If the Hacker role is misunderstood by business folks, the Hustler role is not merely misunderstood by engineers but loathed and derided as slimy, sleazy, lying, ignorant, petty, foolish, and to be avoided at all costs. It’s really astonishing.

What a Hustler brings to the table is a story. And this is critical, because stories are how humans understand things. A story explains to a customer how the product is useful, to recruits why the company is awesome to work for, to employees what the company is trying to achieve, and to investors what the company stands for. If you are not able to tell a compelling and memorable story about your business, you’ll find yourself mystified as to why the “idiot masses” are having trouble understanding your amazing technical invention – or why they’re throwing up all over it (as happened to me when I launched my first service).

The Hustler also brings a network to the table. Many Hackers understand Networking as “that sleazy thing that sleazy people do to pretend to connect with each other”, which was basically how I saw it post-college, too, until I realized I had built a real network by focusing on helping the people around me (e.g. by starting a nerd non-profit colo, suing bad guys, helping start hacker parties, and building a hackerspace). Helpfulness is actually the best currency; nobody cares about someone who just throws their business cards around, but if you pay into the karma bank by investing in the community around you, the community will take care of you. That is True Networking.

The Designer

If you are going to produce something that humans are going to have to look at and use, then you are going to need someone who can design a high quality experience. There are three important parts of providing good design – in larger companies these will often get broken out into separate roles but in a startup you often have to make do with these being mashed into one person:

  1. The User Experience Designer (UX)

    To successfully spread via word of mouth, your product must get a user to the point where they say “Ah ha! I get why this is useful!” with as little effort or time as possible. Pay careful attention to cognitive load – the new terms or ways of thinking you ask a user to understand before they can reach that Ah Ha Moment. Your UX person will think through the paths that people follow in using the product (the “flow”) and what information to show where. Output is on whiteboards, pen and paper, and wireframes.

  2. The Graphic Designer

    The product should also be attractive and un-intimidating, helping you focus on the task at hand and making clear through the use of font, color, and texture what information is most critical, what is secondary, and how the controls are distinct from content. The outputs of a graphic designer tend to be Photoshop mocks – “fake screenshots” if you will. Sometimes these folks work with “slicers” who turn the mocks into static HTML and CSS.

  2. The Interaction Designer

    Modern websites and mobile applications are much more than brochureware or even statically rendered database outputs; they are richly interactive experiences. So while previously you’d hand the “sliced” HTML+CSS to a backend coder to wire up to the real database inputs, modern applications require substantial amounts of frontend coding – elegantly walking the user through large sets of highly dimensional data in an effortless fashion, and handling corner cases and errors in a clear and empathetic way. Autocomplete boxes, pickers, and drag behaviors add up to more than can be fully spec’ed by a graphic designer, so the interaction designer must be able to take the spirit of a graphic design and realize it with highly efficient Javascript. You could also call this role a UI Hacker.

The Operator

Like protons in a nucleus, the creative forces of the hacker, hustler, and designer (often at odds with each other) will naturally want to explosively fly apart. It is the role of the operator to provide a binding force by “keeping the trains running on time” – ensuring the right paperwork is filled out, people get paid on time, the books are properly balanced, and the business operates in an orderly fashion. This person has exquisite attention to detail. Early on at PBworks we had an excellent operator; one day we got a new TV to display service metrics – she hunched over the display to count the stuck pixels. An ideal operator acts in part as project manager, making sure agreed-on meetings happen on time, getting project estimates and progress updates from people, and holding them accountable. A pattern I’ve often seen is for this person to start off as an office manager and then take on more and more responsibility for the running of the business (accounting, HR, interface to legal, etc.) until they effectively become the COO/CFO/President. For sexist reasons I don’t fully understand, this role is usually played by a woman, and I’ve heard it more than once referred to as the “Mama Bear” role, perhaps as a tip of the hat to Den Mothers. It’s one of the least talked about and least public roles in a startup, but it’s every bit as critical as the others – e.g. Sheryl Sandberg at Facebook.


Take a good look at your founding team. If you’re missing competence in one of the roles above, you should see if you can bring on some help – or at least advice – to fill in the gaps, or see if you can grow yourself to better fill the role until such a time as you can hire someone to take care of it.


Liked this? Share it, follow @dweekly on Twitter, & read my Guide to Stock & Options!

21st Century Manufacturing

Here’s a bold bet for the century: manufacturing will return to the U.S.

The 20th century was largely about realizing the vision of the Industrial Revolution: a world of plenty, where goods could be cheaply manufactured and efficiently distributed to consumers. We’re entering an era where those problems are largely solved thanks to the magic of Capitalism and global trade – the world is not lacking for Stuff. Even the poorest in the US don’t lack for T-shirts or underpants. We don’t need cheaper goods.

So the 21st century consumer isn’t just looking for Stuff, they’re looking to express themselves. Consequently we’ve seen an evolution of Brands from “Our Stuff Is Good,” implying you shouldn’t buy the possibly-shoddy stuff sold by other vendors, to “You Should Identify With Our Values.” People don’t buy Nike because they think non-Nike shoes are bad shoes – Nike marketing doesn’t even try to touch that – people buy Nike because they want to be and be seen as the sort of go-getters who Just Do It. Modern brands are about expression more than quality.

But expressing yourself through a brand’s identity is an abstraction – how much do you really understand me just because I am wearing Nike shoes? When a brand has a small constituency, identification is more meaningful, but without broad recognition, identification is much more challenging. Namely, it’s cool that you wear True Religion jeans, but until they become well-known I don’t know what that means – and by the time True Religion jeans become popular, it by definition means less to associate yourself with them. This is part of the reason why we see “hipsters” always seeking to identify with a brand before it’s popular and move on once a brand “sells out” or becomes mainstream. While many people just dismiss hipsters, it’s legitimate that they’re looking to express themselves, and their “brand churn” demonstrates that brand expression is inherently ineffective because it’s a generic intermediary, a poor proxy for values. Which is to say that no brand can actually represent you.

Consequently, the natural conclusion is that your only brand is yourself and your direct expressions. Online platforms like Twitter, Facebook, LinkedIn, and WordPress allow the individual to push their unique thoughts and tastes to a wider audience, but they still don’t cover the world outside of the computer. As people hunger to legitimately express themselves in person, they will want goods they identify with, goods that uniquely and directly express their values, without intermediaries. An increase in the sophistication of just-in-time custom manufacturing and the need for rapid turnaround and shipping will mean that “synthesis factories” in the US will be able to turn out large quantities of custom goods for consumers. Waiting for things to ship from China will just take too long, and lower labor costs will be obviated by automated machinery. Combined with readily available crowdsourced pools of designers who can help individuals create and maintain a personal aesthetic, by the end of the 21st century most Americans’ clothes will be bespoke and manufactured here.

The same goes for custom skins for electronics, photographs, and other touches that help personalize a body or space. While bulk manufacture of electronics and other long-turnaround goods will remain overseas for some time, much of what is produced – like flash memory or displays – will be commoditized, much like importing raw materials. The actual synthesis and creation of value to pair a product with a consumer and put it in their hands will be done near the consumer.

Many thanks to @agentfin for a #brainbreakfast where we fleshed out some of these ideas.

Getting My Feet Wet Again…

A funny thing happens to technical founders: as the company you built takes off and a proper Engineering Team develops, you find yourself doing less and less code. You need to spend your time managing the business, recruiting new talent, setting direction for the product, prioritizing tasks, and the like.

As the percentage of time you spend coding drifts down to 50%, you’re surprised to note that you’re now only a quarter as effective; the context switches just kill you, and the team is evolving new best practices and tools and building out the system’s complexity fast enough that it takes at least ~20% of full-time just to keep up.

Consequently, you hit the point where, even though it’s your company, your team gently asks you to stop checking code into production. There’s just no way you can make helpful contributions when only 5% of your time is spent coding – you’re using last year’s syntax, you forgot to create unit tests, you didn’t hook into the new functional test framework appropriately, and you totally horked the new Javascript minifier. You lose your commit privileges.

This has happened to nearly every technical founder I know – the only recourse I’ve seen is when, at some point, they give up managerial control and go hole up in a dark corner again to come back up to speed for a few months.

So I’ve found myself delighted to be back coding again, figuring out the state of the art for 2012, wrapping my head around jQuery, GitHub, node.js, SASS, Compass, HTML5 Boilerplate, MongoDB, and all these other things the cool kids have been playing with for the last five years while I was busy doing businessy things. 🙂

Got some pointers on what technologies I should be playing with?

My LASIK Experience: Intralase & Wavefront


I grew up pretty badly nearsighted in my left eye (20/200, or -3.0) – my right eye too, but to a much lesser degree (20/35 or -1.0). I hated the idea of sticking stuff in my eye every morning and glasses didn’t appeal to me much as a kid, so I just went around with everything kind of in 2D. Sports like shotput and wrestling were obviously not too badly affected, and even soccer was not that bad (the ball is big and slow enough that parallax and ball size can give you enough cues with one eye to know where the ball is), but tennis didn’t work out so well for me. Around college I started wearing glasses as an experiment, mainly encouraged by girls who thought they framed my face nicely. And so it went for years – which mainly worked. But sometimes it’d be challenging – like when scuba diving, skydiving, snowboarding, or going to 3D movies and having to wear two sets of glasses at once. And sometimes I like to just let my nose rest and take my glasses off my face. So I started thinking about getting laser eye surgery.

Continue reading “My LASIK Experience: Intralase & Wavefront”

Philosophies on Living

Because stories are told from a first-person perspective, they concern themselves with the subjective truth of the observer. Different observers of the same factual events are recorded as different stories with different truths. Many conflicting stories can be created from the same factual observations, depending on the perspective of the observers. Differences cannot always be resolved through dialogue, because the difference of opinion may not result from factual disagreement but rather from observer bias.

Anger resolves little. We get angry because it suits us to do so; it allows us to express our feelings, and we temporarily feel better about having a candid exchange. But angry speech does not concern itself with being understood or with conveying ideas other than pain and guilt. Angry speech concerns itself only with inflicting pain. Regardless of motivation or provocation, then, the presence of angry speech should always be regarded as a weakness of the speaker, a lack of ability to seek an appropriate resolution. Anger means you’ve lost.

Assume the other is willing to listen, can be convinced, and is willing to change. Assume the other means well and wants to be a positive influence on the world. Assume everything’s going to be okay. Assume you can understand things well enough to make a difference.

Speech for speech’s sake is intellectual masturbation. Do not talk for the pleasure of talking; speak to be understood and have your ideas acted upon.

Seek to have your hypotheses invalidated – ask in all things “how am I looking at this wrong?” and quest for your foolishness as eagerly as hunting for gold. If you look for confirming evidence, you will find it, even if it is weak. If you seek to have your ideas overthrown, however, you will quickly grow in wisdom. If you aren’t regularly seeing what a fool you are, you are probably just not looking hard enough.

You Must Remember

you must remember

that every action matters

that few act deliberately

that much is decided by those who wish to decide

the world is as small as you make it

life is as pliable as you let it be

you are every bit as much a victim as you wish to be

you make a decision to be happy

an absolute decision with real impact on those around you

as a result of a relative feeling


you will find everything you look for in this life

you will find beauty and tragedy,

hilarity and joy and loss and bitterness.

and what you see is true, all of it,

but what you choose to meditate on,

what you choose to observe

makes the universe more of that,

so if you see the universe as cruel and immovable

 then it becomes a little more cruel and immovable

 not just for you

  but for everybody

and if you choose to see the world as lovely and full of promise and hope

then so shall it be



You have an obligation then to see all things

  (so as not to make decisions in ignorance)

 but to be particular in the matters you reflect upon

 for it is in these things

  that you form the universe whole

   in your mind

    and make your dreams and nightmares a reality.

Premature Thoughts on Weight Loss

I’ve been losing weight for a month and have managed to shed around 12 pounds; I’ve still got a ways to go, but I’ve been happy enough with the results and have received enough good advice about the topic that I thought it would be worthwhile to share. These thoughts are “wildly premature” because hey, I haven’t lost a huge amount of weight yet or kept it off, etc, so you’re free to wholly discount everything here. I’ll post again as I’m further through the process.

Continue reading “Premature Thoughts on Weight Loss”


The experience itself has a small value, providing tidbits to recall later context for other experiences.

But raw experience cannot be shared effectively – the realtime essence of another consumes and overwhelms the viewer, requiring complete attention and subsumption into the experience of the other.

So there is value in compressing the experience for digestion by others – and truthfully, most of the real-time is low bandwidth.

One can have several different takes on this.

One is to focus on making the realtime higher bandwidth, to have more vigorous adventures, to have deeper and more moving moments and interactions, to be – in a sense – living a movie.

But this is too much for many.

Another is to consume compressed experiences, to read and listen to stories and process the concentrated essence of the lives of others.

Another option is to accept the low bandwidth nature of life.

Finally, a Zen option is to see that which already exists within life and pay more attention to it, to perceive it in higher bandwidth.

David’s Two and a Half SSD Bets for 2010

I’ve been following with keen interest the development of solid state drives (SSDs), which are basically really fast and reliable flash memory used in place of the current rotating magnetic drives. Why does this matter? First, they don’t rotate, which means they don’t have any moving parts, which means they could last much longer and consume less electricity. But most importantly, the computer does not need to wait for the read/write head to either pivot to the right place on the disk or wait for the right bit of data to rotate underneath it. These two wait times are usually combined into an average “seek” time. This “seek” time has only very marginally improved in the past 20 years. It’s clear how to improve the seek time – make hard drives rotate faster (lowering average wait times for a piece of data to rotate underneath the head) and make the disk smaller, reducing the distance the head has to travel to get to a piece of data. In the past 10 years, hard drive seek times have gone from ~9ms to ~6ms while storage sizes have gone from 20GB to 1.5TB. So, roughly 30% faster seeks and 75x more data. We’ve hit a bit of a brick wall in terms of how long it takes to get a piece of data from a magnetic hard drive.
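To see why seek time is the brick wall, here’s a back-of-the-envelope calculation of random-read throughput when every block read pays the full access latency. The ~6ms HDD seek comes from the figures above; the ~0.1ms flash access time and 4KB block size are round illustrative numbers I’m assuming, not vendor specs:

```python
# Back-of-the-envelope random-read throughput: each block read
# pays the full access latency before any data comes back.
HDD_SEEK_S = 0.006      # ~6 ms average seek, per the figures above
SSD_ACCESS_S = 0.0001   # ~0.1 ms flash access time (assumed round number)
BLOCK_BYTES = 4096      # one 4 KB random read per access (assumed)

def random_read_mb_per_s(access_time_s: float) -> float:
    """Throughput when every block read waits out the access latency."""
    reads_per_second = 1.0 / access_time_s
    return reads_per_second * BLOCK_BYTES / 1e6

print(f"HDD: {random_read_mb_per_s(HDD_SEEK_S):.2f} MB/s")   # ~0.68 MB/s
print(f"SSD: {random_read_mb_per_s(SSD_ACCESS_S):.2f} MB/s") # ~41 MB/s
```

Sequential transfer rates hide this, but for the scattered small reads an operating system actually does, the latency gap is a factor of 60, and no amount of spindle-speed tweaking closes it.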

The real answer is to not spin, but it has been just so darn cheap to make high-density hard drives that the cost-per-byte of other solutions has not been able to hold up. And it won’t for some time to come. But, fascinatingly enough, that may not matter. Because about five years ago we hit a magic tipping point where people (generally) stopped filling up hard drives. It seems around 100GB is the magic limit for most regular computer usage. With the demand curve on storage size tapered off, it became inevitable that the solid state solutions would start catching up. And that brings us to today. Or rather, to the end of 2010, which is what my two and a half bold, related predictions address:

1) Hard drives will be gone.

Excluding backup devices, consumer computer devices will not come standard with rotating magnetic hard drives by the end of 2010.

Why? Hard drives will still be larger, but it won’t really matter for the vast majority of people, who won’t use more than about 100GB of data and don’t want to worry about losing it. Like tape, hard drives will still be around as backup media, since our last-mile broadband issues won’t be solved by 2010 – at least in the US. (Backups then, as now, won’t commonly be done to the cloud. Even assuming regular homes will have 2mbps upstream [optimistic!], backing up 100GB of data will still take 5 solid days to complete, versus a USB 3 hard drive which could do it in 17 minutes.)
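The arithmetic behind that comparison is simple enough to sketch. The 2mbps upstream figure is from the paragraph above; the ~100MB/s local transfer rate is my assumption for what a USB 3 attached drive could sustain (the drive, not the bus, being the bottleneck):

```python
# Rough transfer-time arithmetic for the backup comparison above.
# 2 Mbps upstream is the post's stated assumption; ~100 MB/s (8e8 bps)
# is an assumed sustained rate for a USB 3 attached drive.
GB = 1e9  # decimal gigabytes, as drive makers count them

def transfer_days(size_bytes: float, bits_per_second: float) -> float:
    """Days needed to move size_bytes at a given line rate."""
    return size_bytes * 8 / bits_per_second / 86400

cloud_days = transfer_days(100 * GB, 2e6)               # 2 Mbps upstream
local_minutes = transfer_days(100 * GB, 8e8) * 24 * 60  # ~100 MB/s local

print(f"Cloud backup: {cloud_days:.1f} days")            # ~4.6 days
print(f"Local USB 3 drive: {local_minutes:.0f} minutes") # ~17 minutes
```

A three-orders-of-magnitude gap in line rate is why local backup media survive even after primary storage goes solid state.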

2) Windows 7 will boot in seconds.

Microsoft is secretly developing an SSD-optimized (log-structured) filesystem for Windows 7 that will allow it to boot in seconds. This will be the principal selling point of Windows 7.

Microsoft has been very clear that speed is a primary goal for their next operating system. Experience accelerating Vista with hybrid drives has given them the start of the technical chops they need to be able to deal with the unique properties of flash memory. Their touchpoints with enterprise customers and storage vendors give them clear visibility into the developments happening in the space where the inevitable domination of SSDs should be obvious. Furthermore, Microsoft would want to keep these developments quiet to avoid spurring on currently-immature Linux flash filesystems like logfs. That way when Windows 7 launches in early 2010 there will be a large performance differential between it and any other desktop operating system. The marketing message will be simple: “The power of Windows, up and running in seconds.” This will be the last straw that gets people to upgrade from Windows XP.

2b) SSDs will come bundled with a Windows 7 Upgrade.

If bet #2 above holds true, since most of the performance advantages of Windows 7 will only be realized on a computer that has a solid state drive, to upgrade effectively requires you to also swap out from XP and your magnetic drive to Windows 7 and a solid state drive. This will be a HUGE driver for SSD upgrades when Windows 7 comes out in early 2010, helping bet #1 come true by driving economies of scale. Because Microsoft will recognize the importance of SSDs to Windows 7’s success, they will partner with vendors to offer an affordable “upgrade bundle” that combines an XP->7 upgrade with an SSD and costs less than $500.

Conclusions from this? Short hard drive companies that don’t have an SSD play, go long on the SSD manufacturers, and expect Microsoft to drive an unprecedented number of upgrades to Windows 7 in 2010, blowing the pants off of a (let’s be frank) incredibly lackluster Vista launch.

The 10 Levels of Modern Communication

With so many different ways to communicate, which ways are more important and meaningful than others?

My coworker Joël and I were discussing today the different ways we can communicate and how “serious” each one is. From lightest-weight / most innocuous to most intimate and serious, we came up with the following:

  1. Facebook poke, friending someone on Facebook
  2. Twitter @person, Facebook wall post or picture comment, MySpace comment
  3. Twitter direct message, Facebook message, MySpace message
  4. Instant message
  5. Email
  6. SMS / phone text message
  7. Attending the same event
  8. A phone or Skype conversation
  9. Meeting one-on-one
  10. Handwriting a letter
This is hopefully a helpful guide for people deciding how meaningful a communication with another person is. It’s sorted almost directly by emotional weight as well as exposure and intimacy with the other – for instance, knowing someone’s handle doesn’t expose much, but their phone number is a more personal thing (and harder to control), and a home address even more so.
It’s amazing to me that handwritten letters are effectively the most esteemed and valuable form of communication for our generation. I probably receive a full handwritten letter in the mail about once every other year, and it’s always a profound experience. In high school, I’d send and receive several a week.
It may be an interesting reflection to note that I can barely write legibly and my hand cramps now after about a paragraph. I just never use those pen muscles anymore.