Monday, December 26, 2005

Magic-update

Tim Bray: "There are two kinds of software: the kind that offers to update itself when appropriate, and the kind that's broken." Ideally, once users select a trade-off between features and code maturity (the range is from "bleeding edge beta" to "proven and stable"), the software is then updated automatically. An option to easily revert to a previous version should be provided if feasible.

This goes for both web-based and installed software: the deployment models of all software systems are converging.

Tuesday, December 20, 2005

When will then be now?

Even though the question of whether art imitates life or life imitates art is a cliché, and a Black-or-White Fallacy, it can lead to interesting inquiries. Consider the art of science fiction, which has a complex relationship with the life of technology. Sci-fi writers build the foundation for their work by observing existing and emerging technology. This foundation keeps their imagination from taking over completely and landing them in the fantasy genre. Individual writers may see technology progressing in different ways, yet a common technological trajectory stands out in the work of any single generation of writers.

Scientists, engineers, and inventors are influenced by that trajectory. The little kid that lives inside each one of them thinks it’d be really neat to turn fiction into reality. Some dedicate their lives to doing just that. Examples of fiction made into reality are numerous; among the bigger ones are Čapek’s robots, Wells’ lasers, and Clarke’s geostationary satellites.

While cyberpunk and its corresponding science, computing, are still in their teenage years (complete with rebellion against The Man or The Machine), the two enjoy the same relationship. So, what is the vision of the future of computing as described by today's cyberpunk art? What is that common trajectory? Stephenson’s Metaverse. Gibson’s Cyberspace. Even the Wachowskis’ Matrix. Is there any doubt as to where The Net is going to end up?

Monday, December 19, 2005

Breaking news

"The Internet Is Broken" is MIT Technology Review's current cover story. "We are at an inflection point, a revolution point," Internet elder statesman and onetime Chief Protocol Architect David D. Clark now argues. What's next? "It's time to rethink the Internet's basic architecture, to potentially start over with a fresh design." A few more high-profile revolutionaries—and people will start paying attention.

Tuesday, December 13, 2005

The sky ahead

Computing revolutions are driven by economic factors. Computers and networks continuously become faster and cheaper. These quantitative changes periodically create qualitative improvements for users and developers.

The personal computer revolution was brought on by dropping microprocessor prices: individuals could now afford computers. Economies of scale could be applied to hardware and software, and the shrink-wrap software industry was born: software became a product.

The Web revolution happened because networking became faster and cheaper. Internet infrastructure exploded, and users could access their data and applications from any place with a Net connection. Developers could deploy software in minutes to an audience of millions, and software became a service.

What's next? As hardware and bandwidth prices keep falling, management costs start to dominate. Many of the problems that today cost hours of human attention and a lot of money are preventable: incompatibilities and interoperability issues, software catastrophes (serious bugs and viruses), and hardware catastrophes (data lost to hard drive crashes).

PCs and the Web will evolve into the Cloud: a collection of computing resources that presents a uniform user experience and minimizes administration costs. The power grid has enormous complexity, but all of it is hidden behind the interface of a power outlet; the right interfaces to computing will likewise simplify it and decrease the total cost of ownership.

Once the management problems go away and programs can be hosted in the Cloud, the barrier to entry in the software field will drop dramatically. Given the right tools, the right programming model, and the right economic model, the market for small components will take off. The line between advanced users and software developers will be erased, transforming software into a medium.

Wednesday, December 07, 2005

Simply better

Personal computing and network computing each have two major advantages over the other.

For personal computing (Windows), they are:

  • Disconnected operation. Your laptop is still useful when the network is inaccessible.
  • Performance. Local applications don't have to transfer data back and forth over the network, and therefore can be much more responsive.

The advantages of network computing (the Web) are:

  • Synchronized state. Your data and your applications are centrally managed, and they are up to date no matter where you access them from.
  • Ease of deployment. An application on a website is deployed to millions of users, who just need to follow a link to access it.

There is no reason why you can't have a system that offers all four. For example, Mac OS X with its .Mac and Software Update components simplifies synchronization and deployment for some applications. There are obvious limitations: you can't take a worldwide search engine with you into the offline world, but you can keep a copy of your mailbox in case your webmail is inaccessible.

Maintaining synchronized state is a tricky problem in today's popular programming models, but the right framework can make writing "autosynced" applications easy. In combination with a deployment framework similar to Java Web Start, a platform can offer the best of both worlds, and "either-or" trade-offs will become a thing of the past.
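
As a toy illustration of the "autosynced" idea, here is a sketch of a key-value store that serves all reads locally (so the application keeps working offline) and reconciles with a server replica whenever a connection is available. The class and its last-write-wins conflict rule are assumptions made for brevity, not a description of any real framework:

```python
import time

# Toy "autosynced" store: reads always hit the local replica, writes are
# queued while offline and pushed on the next sync. Conflicts are resolved
# last-write-wins by timestamp, a deliberate simplification; a real
# framework would offer richer merge policies.
class AutoSyncedStore:
    def __init__(self):
        self.local = {}    # key -> (timestamp, value)
        self.pending = []  # writes made since the last successful sync

    def put(self, key, value):
        entry = (time.time(), value)
        self.local[key] = entry
        self.pending.append((key, entry))

    def get(self, key):
        return self.local[key][1]  # served locally: fast and offline-safe

    def sync(self, server):
        """Reconcile with the server replica; the newest timestamp wins."""
        for key, entry in self.pending:
            if key not in server or server[key][0] < entry[0]:
                server[key] = entry
        self.pending.clear()
        for key, entry in server.items():
            if key not in self.local or self.local[key][0] < entry[0]:
                self.local[key] = entry
```

The application code only ever calls put and get; keeping the replicas consistent is the framework's job, which is exactly the division of labor argued for above.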

Friday, December 02, 2005

The three-letter word

The new top-level domain, .xxx, is in the news. But is its addition technically necessary? Branding arguments aside, why can't .xxx be replaced with a subdomain, such as .xxx.us?

Domain names are used for global identification of resources and for navigation. Since a name has to be typed only once, three extra characters make no significant difference. For machine-generated content there is zero difference: once the site is bookmarked, it doesn't matter how long its URL is. There will be no perceivable loss in productivity if the address is slightly lengthened. (This is especially true in the context of the .xxx domain.)

There is no point in abolishing .com today, but there is also no good technical reason to add new generic top-level domains. Introducing .xxx will not make filtering adult content any easier, since adult sites will continue to operate under .com, with new ones springing up daily. What it will do is create a windfall of profits for registrars as another land grab starts; not surprisingly, it is the registrars who lobby ICANN.

Real progress on restricting children's access to adult content can be made if robust age verification becomes part of the protocol, which requires solving the Identity paradox. As for battles over the root zone, they can be resolved by simply freezing it in its current state, removing unnecessary central control and thereby starting to address the Responsibility paradox.
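
Purely as an illustration of what "part of the protocol" could mean, here is a sketch of a signed age assertion that a client might attach to a request and a server verify before serving restricted content. The token format and the trusted issuer are invented for this example, and the genuinely hard part, establishing trustworthy identity in the first place, is the Identity paradox itself:

```python
import hashlib, hmac

# Hypothetical trusted verifier's key; in reality, key distribution and
# issuer trust are the whole problem (the Identity paradox).
ISSUER_KEY = b"shared-secret-of-a-trusted-verifier"

def sign_assertion(subject_id, over_18):
    """Issuer side: produce a token asserting the subject's age status."""
    payload = f"{subject_id}:{int(over_18)}"
    sig = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def verify_assertion(token):
    """Server side: accept only an intact token that asserts age >= 18."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(ISSUER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and payload.endswith(":1")
```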

Thursday, December 01, 2005

Flat-fee world

Market research shows that people want ad-free services but won't pay for them. They will pay for quality content instead: "People are not buying HBO because it doesn't have ads; they are buying it because they want to watch The Sopranos."

People buy World of Warcraft not because it doesn't have ads, but because it has quality content. Since Blizzard gets paid, it can hire professionals to develop quality content and publish it without ads. There is a feedback loop here; the main problem is bootstrapping—achieving a critical mass of subscribers. Bootstrapping can be accomplished given enough initial investment, assuming people indeed want to play the game.

Blizzard has a monopoly on authoring content for WoW; but there is no reason an open system can't work, with multiple competing content providers. A neutral party can collect money from subscribers—a flat fee for example—and distribute it among content providers according to usage patterns. As long as the fee distribution method is fair, this model is attractive to both users and publishers. The main problem, again, is bootstrapping and gaining a critical mass of users: this can be done by producing or licensing quality seed content.