Thursday, October 27, 2005

Five paradoxes of the Web

The Web is a great platform for delivering content and services, but it is showing its age. The fundamental design choices that were right at the beginning of the Web are starting to backfire now. This is an attempt to identify what is impossible to fix within the current Web.

The problems with the Web are readily apparent; they are simply taken for granted. In the last month, as a Web user I had to deal with spam (in my webmail inbox and on my blog), denial-of-service outages, and identity theft, not to mention user interface bugs. As a Web developer, I had to work around browser incompatibilities and was forced to expend effort disproportionate to the complexity of the tasks I was trying to accomplish. And it keeps getting worse.

A lot of resources are spent combating the problems of the Web, but these efforts do not solve the principal paradoxes of the platform. Whole industries now prosper solely because of the imperfections of the Web. This is a good indicator that the computer user community should invest in a new global infrastructure and address fundamental problems with fundamental solutions.

  • Everything is free, yet nothing is free. (Compensation paradox)

    Many Web services are free for users because charging for them is impractical, yet providing these resources costs money. This makes direct business models unsustainable and forces providers to meter resources. Even without explicit caps, the limits of a provider's hardware and bandwidth lead to denial of service during peak times, or during attacks. (solution)

  • We don't know who you are, yet there is no privacy. (Identity paradox)

    There is no universal identity mechanism: a website can't greet you by name unless you filled out a form beforehand. Identity management mechanisms are clumsy, leading to identity theft. At the same time, there are various covert ways of invading privacy that go unnoticed by the user: IP addresses, cookies, Referer headers, one-pixel GIFs in emails. (A small sketch follows this list.)

  • Write multiple times, yet it still doesn't run everywhere. (Compatibility paradox)

    Writing advanced Web applications requires sacrificing one of three important components: capability, compatibility, or speed of development. Testing on every browser flavor and version is a luxury few can afford. It doesn't matter whether one browser is more standards-compliant than another; in practice, you have to support multiple clients or lose users. (A sketch of the branching this forces follows the list.) (solution)

  • Code goes over the network, yet it's not mobile. (Boundary paradox)

    The Web is asymmetrical: there is a client and there is a server. The client speaks one language (JavaScript); the server speaks another (usually not JavaScript). To cross the boundary between the client and the server, code must be translated into a different language. No matter how fast the network is, the mobility of code is limited by the speed of a programmer's manual conversion between client-side and server-side APIs. (A sketch follows the list.)

  • The Web is not decentralized enough, yet it is not centralized enough. (Responsibility paradox)

    The DNS is centralized; certificate authorities are essentially centralized too. Centralization gives monopolies to organizations in control, while at the same time creating global vulnerabilities. But there's no one to appeal to if an entity is misbehaving (e.g. spamming), since the Web authorities do not accept responsibility for the platform's citizens.
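
To make the identity paradox concrete, here is a minimal sketch, in plain JavaScript, of how a page can quietly assign a visitor a pseudonymous ID and report every page view through a one-pixel image. The cookie name and the /track.gif path are invented for illustration; the point is that the request silently carries the cookie, the Referer header, and the visitor's IP address without any form ever being filled out.

    // Hypothetical tracking snippet; "visitorId" and /track.gif are
    // made-up names used only for illustration.
    if (document.cookie.indexOf("visitorId=") === -1) {
      // Assign a long-lived pseudonymous identifier without asking.
      var visitorId = Math.random().toString(36).substring(2);
      document.cookie = "visitorId=" + visitorId +
                        "; expires=Fri, 01 Jan 2010 00:00:00 GMT; path=/";
    }
    // Report the visit through a one-pixel GIF; the image request carries
    // the cookie, the Referer header, and the visitor's IP address.
    var pixel = new Image(1, 1);
    pixel.src = "/track.gif?page=" + encodeURIComponent(location.href);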
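
The compatibility paradox can be illustrated with the branching that, circa 2005, is needed just to create the object behind every AJAX request. The function name below is mine; the objects are the ones the major browsers actually expose.

    // Creating an XMLHttpRequest in a portable way, circa 2005.
    function createRequest() {
      if (window.XMLHttpRequest) {
        // Mozilla, Firefox, Safari, Opera
        return new XMLHttpRequest();
      }
      if (window.ActiveXObject) {
        // Internet Explorer 5 and 6 expose it as an ActiveX control,
        // under two different program IDs depending on the version.
        try {
          return new ActiveXObject("Msxml2.XMLHTTP");
        } catch (e) {
          return new ActiveXObject("Microsoft.XMLHTTP");
        }
      }
      return null; // no AJAX support at all
    }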
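
Finally, a sketch of the boundary paradox: even a trivial rule, such as checking that an email address looks plausible, has to live on both sides of the client/server line. The JavaScript half is shown below; the server half, in whatever language the back end speaks, has to be written again by hand and kept in sync. The function and the regular expression are illustrative, not taken from any particular site.

    // Client-side half of a duplicated rule (illustrative only).
    function looksLikeEmail(address) {
      // A deliberately simple check; real validation is messier.
      return /^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(address);
    }
    // The server cannot trust the client, so the same rule must be
    // re-implemented in the server's language (Java, PHP, ...) and kept
    // in sync by hand. That manual translation is the boundary paradox
    // in miniature.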

Identifying problems is the necessary first step. This blog will explore possible solutions, often more radical than not. Thinking outside the box of the Web is the only way to make real progress. Odds are, the first platform to solve all five paradoxes of the Web will be the next winner.

13 Comments:

Anonymous Anonymous said...

"The client speaks one language (JavaScript), the server speaks another (usually not JavaScript)."

Does this statement make sense? Get your basic facts correct first. JavaScript is executed client-side, but the server is still the one that controls what JavaScript will be executed, since it is the one that sends the page with JavaScript in the first place.

This is more like a problem of unqualified web developers who don't know what JavaScript really is, what it should be used for, and when to use it.

4:11 AM  
Blogger jrp said...

Anonymous,

Say you want to write a web-based mail client and use Java on the server side. You got the HTML version working. To get an AJAX version working, you have to go through the effort of translating code that was written in Java into JavaScript, even if the functionality is about the same. It's true that your server controls the client code that is being sent over, but you still have to convert between two very different APIs; if the Web supported truly mobile code, that translation wouldn't be necessary.
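
For instance (a hypothetical sketch, not code from any actual mail client), the AJAX version ends up re-expressing in JavaScript logic the Java side already contains for the HTML version:

    // Hypothetical client-side sketch: fetch the inbox as XML and render it.
    // The Java servlet already has code that builds this same listing as
    // HTML; that logic has to be rewritten here by hand.
    function loadInbox() {
      var req = window.XMLHttpRequest ? new XMLHttpRequest()
                                      : new ActiveXObject("Microsoft.XMLHTTP");
      req.open("GET", "/mail/inbox.xml", true);
      req.onreadystatechange = function () {
        if (req.readyState === 4 && req.status === 200) {
          var messages = req.responseXML.getElementsByTagName("message");
          var html = "";
          for (var i = 0; i < messages.length; i++) {
            // The same formatting decisions the servlet already makes.
            html += "<li>" + messages[i].getAttribute("subject") + "</li>";
          }
          document.getElementById("inbox").innerHTML = "<ul>" + html + "</ul>";
        }
      };
      req.send(null);
    }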

5:55 PM  
Anonymous Anonymous said...

A personal opinion: such a thing as "mobile code" doesn't interest me at all. On the current Web I already have zillions of pieces of "true mobile code" in the form of clever backdoors, etc.

It seems to me that the only thing I really need in the 21st century is *the Info* (which is King) and nothing more, except passive and mobile "information flows" (HTML, XML...), not a "mobile API" nor AJAX. It's a pity that many programmers don't think so yet.

12:40 AM  
Blogger jrp said...

Anonymous,

The line between "passive" and "active" information is a very fine one. Best presentation of "passive" information often requires "active" code: consider, for example, Google Maps. The more interactivity you need, the more important mobile code becomes. 21st century media will be more interactive, not less.

Security properties of the system are often independent of functionality and are influenced more by development practices. You can have a well-designed mobile code system without security holes, and you can have security holes triggered by "passive" data. Examples of the latter are bugs in zlib and libtiff.

11:07 AM  
Anonymous Anonymous said...

jrp,

>> Best presentation of "passive" information often requires "active" code:

1) IMO, it would be much better to say "often accompanied by"; why "requires"? And a second objection: "presentation" doesn't look like a primary goal in the 21st century, IMO. Thesis: when your goal is to "be informed", the code is somehow ancillary, not central.

As for GMaps (one argument, but not the only one): often, IMHO, it would be much better to hand off all the routine GMaps work to some "e-agent" and just tell it: "go and bring back a laconic and clear answer to my concrete need". In that (better) case it is not a person but the *e-helper* that interacts with Google and with all kinds of maps [and it knows where else it has traveled]. The agent-to-GMaps exchange then looks different: a kind of *communication* (in XML, for example) rather than "interaction". Is that tomorrow's way? Who knows. Interactivity with my "e-helper"? Sure, but only by my own rules! Direct interactivity between me and GMaps by their rules? JScript? Definitely not sure ;-)
sometimes, maybe...

>> you can have security holes triggered by "passive" data.

2) Yes, the area isn't simple, of course... But the fewer (superfluous) holes the better, isn't it? A question of the form "what is the best price of X in city Y?" is a passive thing (I think so). Does my intention include "doing interactions"? Thousands of interactions? By hand? Why do I need a "powerful" computer on my desk then? Life is very short.
Just thoughts...

3:01 AM  
Blogger jrp said...

I am not going to get into the human-computer interface discussion now, but we can continue the security thread.

When you upgrade your OS and install the latest service pack, you bring in megabytes of code—essentially, mobile code—that runs with high privileges. A single bug can expose your whole system to attackers. Why doesn't that scare you? (It certainly scares me.)

You have to trust some code. With current platforms, you have to put your trust in a huge codebase: it's huge because of the required backwards compatibility support. With a new platform you can make the trusted codebase very small, since you start with a clean slate. The new platform—if it is well-designed—can be much more secure, even if it has advanced features like true mobile code.

4:08 PM  
Anonymous Anonymous said...

>> With a new platform you can make the trusted codebase very small..

Is this a theory or a guess? And will the "full-of-bugs" OS disappear? Does someone have a gift, a small nucleus for interpreting "mobile code" with *absolute* safety? That would be a commendable "true breakthrough" in IT, but current (real) software seems to say otherwise: zlib - triggered; JS - triggered; Java... .NET - triggered... all expose holes.

About trust: personally, to some extent I can "trust" my code base (the OS) _especially_ because it has been kept unchanged for more than 5 years, and only now have I acquired certain knowledge about it and its holes. In one thing I absolutely agree: service packs scare me ;-) But to be of sound mind and yet entrust my computer (or server?) to arbitrary code that wants to run from an arbitrary site? Go ahead, expose your "killer" secure machine... and let's see.

2:37 AM  
Blogger jrp said...

Anonymous,

A new computer communications medium often brings new risks. If you are worried about worms, your best defense is to never connect your machine to any network. Obviously, you are connected to the Internet now—that means you have evaluated the risks and rewards offered by the Internet and found that rewards are greater.

I cannot rationally convince you that rewards offered by a hypothetical new platform will exceed the risks, simply because the platform doesn't exist even in blueprints yet. All I can say is that by following best development practices—having an open implementation and a lot of peer review—it is theoretically possible to have a platform that is significantly more secure than the current industry benchmark (Windows), while offering state of the art features.

A prototype will give you a better idea of the reward/risk trade-off. I hope you will evaluate it when it becomes ready.

7:34 AM  
Anonymous Anonymous said...

T"he client speaks one language (JavaScript), the server speaks another (usually not JavaScript)."

Does this statement make sense? Get your basic facts correct first. Javascript is executed client side but the server is still the one that controls what Javascript will be executed since it is the one that sends the page with javascript in the first place."

Actually it does make sense, because the server side may deliver the particular text, but it doesn't know what it is or what it does. The server-side execution will be in another language like PHP. Java is the closest to a universal language (JavaScript, JavaBeans), but JSP is *not* pure Java and not a server-side template language.

"It's true that your server controls the client code that is being sent over—but you still have to convert between two very different APIs, and if Web supported truly mobile code, that translation wouldn't be necessary."

Right. I agree 100%.

6:10 PM  
Anonymous Anonymous said...

"It's true that server controls the client code.." Really ?

For example: a) The RSS-pulling - that is my best current preference and lovely "mode of reading" for now.
b) And what's the client? The "client" is a small code (written by me), rather than standard firm's-crafted browser at all.

The consequence: The 'client" only read (pull) the text and mark-up.. absolutely zero jscript or "any CODE".
To be able "do pull" information from I-net with aid of client, rather than "be able accept codes"... -
that is where the big and greatest achievement of XXI century! ;-)

Which type of "control" do you mean?

11:30 AM  
Anonymous Anonymous said...

Everything is virtual, yet everything is real.

12:20 AM  
Anonymous Anonymous said...

with this post you make clear some of my own thoughts I've come to in past years - one thing is more than sure - the web as it is now is way too obsolete. it needs new principles to be built on.
and yeah, as you said "Whole industries now prosper solely because of imperfections of the Web." - this will be one of the reasons why the next/different version of the www will be delayed for years, because the companies are more interested in the way things are now, not in the way surfers would like to see it. unfortunately, money not only makes progress, it also decides "when" to make this progress (in our case - when the current web won't be profitable anymore, then the necessity for a new one will arise).

2:37 PM  
Blogger Camilo Sanchez said...

The web problem is really a challenge to creativity, because the internet has become, for the first time, a reflection of ourselves. The worst and best things about humans can be found here. I think there isn't really a problem with the web; rather, there is so much power in it for so little cost that we are all tempted to think about how to make money out of it. The identity problem, I think, will be fixed via OpenID or Facebook Connect. Centralization of certificates is important for security reasons (imagine if just anyone could get one). I think that not having an established business model is better, because it means many can be developed. So in the long run, I don't think the problem lies in the web as it is; I think it lies in how to combine the intangible side of the net with the tangible side of real life.

10:58 AM  
