My Letter To Clifford Nass

Hello Professor Nass. My name is David Weekly. I am the student who approached
you after class just now to ask you what would happen if I disagreed with you. I
am a junior in Computer Science here.

I realize you have hardly had time yet to raise objectionable points, but what I
am disagreeing with is the tack that I see you (potentially) taking in the
class. I think it is a bad idea for technical and sociological reasons to try
and make computers interface to people in a human-like way. That’s not to say
that I think computer interfaces shouldn’t ultimately be intuitive; quite the
contrary. But I would argue that intuition is cultivated by environment and
habit. The mouse, for instance, was an entirely unintuitive device;
people stepped on it, raised it into the air and (in one of the Star Trek
movies, if you remember) spoke into it before it was clear what it did. And yet
the vast majority of Americans today can use a mouse with alacrity, almost as an
extension of themselves. The point here is that the mouse had little direct
analogy to anything commonly human and was quite unintuitive from the
start…but the mouse was designed in such a way that people could adapt to it
quickly.

So my argument is that we need to make interfaces that one can adapt to rapidly
but are not necessarily intuitive or human-like from the start. Having made
this point, I will go further to say that I do not believe that making computers
human-like is wise. As Luddite as it may sound coming from a CS major, replacing
humans with automatons in human-human interaction scenarios (e.g., a restaurant,
ticketing, phone operators) will rarely, IMHO, make the world a
better place. It will be a more efficient place, but one replacement leads to
another and as sure as day we will make this world a miserable and lonely
location if we only think about efficiency. Please note that I’m not talking
about factory work here, or non-human-human scenarios.

If we interface to the computer as we do to reality, our perception of reality
is altered. The consequences of this must be examined before we rashly rush in
with the latest AI, 3D, and multiplayer technology to produce immediately
compelling and intuitive interfaces. If we can talk to a computer like we talk
to our friends, then we may start talking to our friends like we do to the
computers. We become frustrated with the limits of reality and stop being able
to truly appreciate it. (How many times have you been annoyed at not being able
to ‘grep’ a book in your hands?)

I am aware that many of these statements are broad and overarching, perhaps
generalizing a little too much and overlooking important exceptions. At the same
time, I believe that there is a fundamental truth to them that needs to be
considered. I see the computers of the future optimally having interfaces that
we today might find complex and alien, but which adapt well to how humans act
and think and also have an awareness of the skill, preferences, and emotion of
the user. I do not see ‘robotic pals’ or even ‘virtual pals’ (s/pals/agents/ or
whatever your preferred word is) as being a desirable future. As cute as the
Office Agents are, I despise them for trying to be lifelike. Consistency should
be king in interfaces, and having a help system inconsistent with the rest of the
operating system to provide a feature that the vast majority of users dislike
seems to have been a poor decision on the part of Microsoft.

<vent>
And what’s with having that little moving pen at the bottom of a Word document
while you type? Were they too dumb to realize how distracting that is without
providing any useful functionality at ALL?
</vent>

Well, that’s my $0.02. Maybe I’m just another naive student, but I feel my ideas
deserve at least a solid rebuttal before I’ll back down on them.

Yours,
David E. Weekly

The Multicast User

what is multicast?

A very sleek, hip, and powerful word. Multicast technology, about a decade old, enables a computer to “broadcast” a single stream over a network and have the network copy the stream as required. So for instance, if I were broadcasting a concert from San Francisco and had 50 people listening in from the Netherlands, I’d just send out one stream. The routers would pass along this single stream across the US, under the Atlantic Ocean, through England and France, and only split the stream to make copies for each listener when the data had actually reached the Netherlands. Here is an illustration blatantly ripped from Jon Crowcroft’s excellent online reference Internetworking Multimedia:

[multicast diagram]

The network itself handles the distribution.

The current model is much less efficient: the server is required to push out a unique stream to each listener — the routers only know how to carry a stream to a single endpoint. This is bad for everybody. Take a look at the trans-Atlantic link in the above example: it is now carrying 50 copies of the exact same stream at the exact same time. This is a waste! With multicast, a computer only needs to push out one stream, and the routers, while more intelligent, never need to move redundant data.
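To make the mechanics concrete, here is a minimal sketch of what a multicast sender and receiver might look like using ordinary UDP sockets. The group address 224.1.1.1 and port 5007 are illustrative placeholders, not values from any real deployment:

import socket
import struct

GROUP = "224.1.1.1"   # hypothetical multicast group address
PORT = 5007           # hypothetical port

def send_stream(data: bytes) -> None:
    """Send one packet to the group; the network duplicates it as needed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # TTL controls how many router hops the multicast packet may cross.
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 32)
    sock.sendto(data, (GROUP, PORT))

def listen_to_stream() -> None:
    """Join the group and receive whatever is being broadcast to it."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Tell the local network we want copies of packets sent to GROUP.
    mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        packet, sender = sock.recvfrom(65535)
        print(f"got {len(packet)} bytes from {sender}")

Note that the sender addresses the group, not any individual listener: it transmits exactly one copy no matter how many receivers have joined.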

You’d think that it would be obvious that people broadcasting content to lots of people would want to use multicast; and you’d be right. The problem is that multicast applications have not developed much since their inception in the early 1990s. I had to look pretty hard to find multicast-compatible programs, and the ones that I found were primarily designed to run on Sun Solaris and were poorly implemented for Windows. They ran awkwardly at best and simply didn’t operate correctly the vast majority of the time. I downloaded a plugin for Winamp to allow me to listen to multicast MP3, but it would only play for a few seconds (once as long as a minute) before choking and crashing Winamp.

Multicast capability exists on the vast majority of routers out there today but is simply not turned on. ISPs cite a lack of compelling multicast applications, and application developers refuse to integrate multicast because of a lack of ISP support. Ultimately, multicast capability must be integrated into rich-media clients and servers in order for it to take off.

Unfortunately, there is another barrier: there are only so many multicast addresses right now. Making the analogy to radio broadcast, there is only so much space on the dial: multicast uses an IP address, just like a web server, and there are a limited number of IPs available for multicasting. The obvious solution is to make a different, much larger address space, and this problem has been duly solved by IPv6, the successor to IPv4, the Internet machine address mechanism used today. You may have seen IPv4 addresses. They look like 127.66.32.14 — four numbers between 0 and 255. While this may seem like more than enough possible addresses (4 billion, in fact), there are getting to be a lot of computers on the Internet, and there are many companies that have allocated large, inefficient chunks of addresses to make things easy to manage. (Stanford for a time owned all addresses starting with 36.!) IPv6 solves this with an address space that is much, much bigger. IPv6 specifies not four, but sixteen numbers between 0 and 255. This would give us 3.4 x 10^38 unique addresses — more than enough to give several thousand IP addresses to every gram of matter on this planet. IPv6 also allows for Quality of Service (QoS), which lets the network treat urgent data (e.g., a teleconference or remote surgery) differently from less urgent data (e.g., email) and operate more efficiently.
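The back-of-the-envelope arithmetic is easy to check for yourself:

# IPv4: four numbers between 0 and 255, i.e. 32 bits of address.
ipv4_addresses = 256 ** 4          # 4,294,967,296 — about 4 billion
# IPv6: sixteen numbers between 0 and 255, i.e. 128 bits of address.
ipv6_addresses = 256 ** 16         # roughly 3.4 x 10^38

print(f"IPv4: {ipv4_addresses:,}")
print(f"IPv6: {ipv6_addresses:.2e}")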

Much to everyone’s dismay, IPv6 has been very slow to roll out. Though it was defined years ago, we’re only now beginning to see the infrastructure for doing IPv6 emerge: the only way to get IPv6 to work under Windows is to download an alpha driver from Microsoft Research — and this only works under Windows NT! IPv6 addresses only started to be allocated months ago. Clearly this technology is off to a slow start.

But all’s not yet lost: encourage your ISP to enable multicasting and adopt IPv6-capable routers. Encourage OS & application vendors to include IPv6 support (Linux has had it for a while!) and to explore multicast alternatives to unicast applications. And maybe, just maybe, we can start using the power of multicast to empower small broadcasters.

ADDENDUM: Some readers commented that scaling multicast to deliver reliable performance can be difficult: if you are broadcasting to 100,000 users and 1% fail to receive each packet correctly, you have to deal with 1,000 retransmissions for every packet of data you send, introducing inefficiency and overload. This problem has been solved elegantly by several groups. Interested readers may want to investigate SRM (Scalable Reliable Multicast), RMTP (Reliable Multicast Transport Protocol), and Search Parties. A comprehensive list of Reliable Multicast papers can be found here, although many of the links are broken. The basic concept in many of these papers is to do some forward error correction (to allow for error recovery on the receiving end without having to request more information from the server) and to combine NAKs (namely, requests for more information from hosts that did not correctly receive the data) into subgroup retransmissions that multicast the correction to just the group of users that needs it. These methods have proved to be quite effective at allowing for both scalable and reliable multicast in a manner far more efficient than unicast.
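To give a flavor of the forward error-correction half of that idea, here is a toy sketch of my own (not taken from any of the papers above): the sender multicasts one XOR parity packet per block, so a receiver can reconstruct any single lost packet in the block without ever sending a NAK back to the server:

from functools import reduce

def xor_parity(packets):
    """Compute a parity packet as the byte-wise XOR of equal-length packets."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

def recover_lost(received, parity):
    """Rebuild the single missing packet from the ones that arrived plus parity."""
    return xor_parity(received + [parity])

# Sender: a block of 4 data packets plus 1 parity packet is multicast.
block = [b"pkt0", b"pkt1", b"pkt2", b"pkt3"]
parity = xor_parity(block)

# Receiver: packet 2 is lost in transit, but no retransmission is needed.
arrived = [block[0], block[1], block[3]]
print(recover_lost(arrived, parity))   # b'pkt2'

Real schemes use stronger codes than a single XOR so that multiple losses per block can be repaired, but the principle is the same: spend a little extra bandwidth up front to avoid a flood of per-receiver retransmissions later.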