
Re: Lucent Technologies & Sun Microsystems



>>>>NCs are not likely to be any more "mobile" than a notebook or palmtop PC is.

>>>I realize that the current NC architecture doesn't really solve the
>>>problem,

>>Right... it doesn't change much of anything at all.

>The hardware architecture of NCs will be significant only if it achieves a
>significantly lower price.

Right, and frankly I don't see any convincing argument for why that would be 
the case.  Current PC chipsets are already pretty cheap (partly due to the 
enormous production volumes), and peripherals aren't likely to get much simpler 
just because the computer architecture changes (look at the remarkably simple 
controller card and drive electronics in the new Quantum BigFoot drives... only 
three good-sized ICs and a circuit board of maybe four square inches).

>The real potential for change is in the
>software architecture.  If the money backing NCs makes a better OS such as
>Inferno commercially viable, then NCs will have an important effect on our
>information economy.  If NCs just try to be a cheaper way to run standard
>PC applications, then you are right, they won't change anything.

I think it's a mistake to tie Inferno to the NC: the NC is doomed in the 
marketplace IMO, and if Inferno is seen as connected to it, Inferno will go 
down the tubes along with the NC.  It makes a lot more sense to use Inferno to 
connect other kinds of devices (PCs included) with one another, and to enhance 
that kind of universal connectivity.

>>>but there is an important issue that everybody seems to have
>>>missed.  One of the promises of ubiquitous network computing has always
>>>been availability of information.  Needing to have your palmtop in your
>>>hand to find the information you typed into it yesterday is annoying.  The
>>>information should be available wherever you are using the nearest
>>>available computer HI.

>>Yeah, but when you're on an airplane at 36,000 feet the palmtop in your
>>hand is a HELL of a lot more accessible than the information safely locked
>>away on an (unreachable) server system somewhere else.  I've owned dozens
>>of calculators and almost as many pen-and-paper personal notebooks... and
>>although it's probably the LEAST easy to use for either purpose, the one I
>>still use the most is the one built into my watch... simply because I
>>*always* have that one with me.

>You are making my point.  Ready availability of information is crucial to
>its value.  Our network infrastructure is moving toward data networking
>everywhere -- even in an airplane flying at 36,000 feet.  Wouldn't it be
>better if the information you enter into your watch was also available at
>your desktop, only with a better UI?

A lot of that depends on how such remote access is priced.  At, say, $15 an 
hour to stay connected to my data while I'm flying cross-country... no, I think 
it makes more sense to work locally and upload the results when I get to the 
other end.
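That work-locally, upload-later pattern is simple enough to sketch.  The
following is a minimal illustration (all names here are hypothetical, not from
any real product): edits are kept in a local store that is always readable,
and a pending queue is flushed whenever a cheap connection becomes available.

```python
# Sketch of the "work locally, upload when you land" pattern.
# LocalNotebook and its methods are illustrative names only.

class LocalNotebook:
    """Stores entries locally; flushes pending edits when a link is up."""

    def __init__(self):
        self.entries = []   # always available, even at 36,000 feet
        self.pending = []   # edits not yet uploaded to the server

    def add(self, note):
        self.entries.append(note)
        self.pending.append(note)   # remember it for the next sync

    def sync(self, upload):
        """Call with an upload function once connectivity is affordable."""
        while self.pending:
            upload(self.pending.pop(0))

# Usage: work offline in flight, then sync on the ground.
server = []
nb = LocalNotebook()
nb.add("meeting notes")
nb.add("expense: $15")
nb.sync(server.append)      # both notes reach the server, in order
```

The point of the sketch is that availability of the local copy never depends
on the link, and the link is only paid for at sync time.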

>>>Timesharing systems implemented much of the HI
>>>processing on the box that stored the information.  PCs do the same thing,
>>>only they move the control directly into the hands of the user.  NCs hold
>>>the promise of finally beginning the separation of HI processing from
>>>information storage management.

>>Rubbish.  LANs have allowed just such separation for a long time already.
>>In fact, Datapoint's Datashare "DSnet" facility offered stuff like you're
>>talking about "finally beginning", but twenty years ago.

>A great many things have been technically feasible for years and may have
>been implemented years ago, but the topic is the mainstream of computing
>and what shape our information technologies will take in the years ahead.

Well, I still object to suggesting that such separation is new and revolutionary 
when it's been around in one form or another for at least two decades.

>The X Window System running on X terminals is another example of the
>separation I am talking about, but X put the division of labor between
>client and server at the wrong place in the overall system architecture.

In fact, I think X terminals represent damn near NO division at all, since the 
local machine is relegated to being an expensive, basically-stupid graphics 
terminal.  X left essentially all the processing where the disk was, which 
limits fanout a lot more.  I favor a design philosophy where AS MUCH OF THE 
PROCESSING AS FEASIBLE is offloaded to the workstations (the parts of the 
system that grow in number along with the number of users, which is exactly 
what you want!) and the absolute minimum of processing is done where the 
crucial, must-be-shared data is located.  This maximizes the possible fanout, 
and lets you cluster the greatest possible amount of lowest-cost processing 
power around the company's crucial shared data resource.  It also allows 
less-crucial, not-necessary-to-share data to be distributed out closer to where 
it's needed, where access to those files can proceed in parallel and thus at a 
radically higher aggregate bandwidth.  This is exactly the design philosophy I 
used in designing the software architecture of the Datapoint ARC System, 21 
years ago.  And I still think it's the way we should be doing things today.
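The fat-client philosophy can be shown in miniature.  In this sketch (class
and method names are mine, purely illustrative), the server that holds the
shared data does nothing but hand over records, while every workstation
brings its own processing power along, so aggregate capacity grows with the
number of users instead of bottlenecking at the server.

```python
# Sketch of the fat-client / thin-server fanout argument: the shared-data
# server does the minimum (serve raw records); processing happens on the
# workstations.  All names here are illustrative.

class RecordServer:
    """Holds the crucial shared data; does no application processing."""

    def __init__(self, records):
        self.records = list(records)

    def fetch(self):
        return list(self.records)   # just ship the bytes

class Workstation:
    """Does the expensive work locally, keeping the server simple."""

    def report(self, server, keyword):
        # Filtering and aggregation happen here, not on the server, so
        # adding workstations adds processing capacity in step with users.
        return [r for r in server.fetch() if keyword in r]

server = RecordServer(["acme invoice", "acme order", "beta order"])
stations = [Workstation() for _ in range(10)]   # fanout: 10 clients, 1 server
results = stations[0].report(server, "acme")
print(results)   # ['acme invoice', 'acme order']
```

Because the server's per-client cost is just a fetch, its fanout is limited
only by raw data transfer, not by application load.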

>Sun's NeWS came closer to the right architecture, but never caught on.  

Basically nothing Sun has EVER done has really caught on.  I don't think 
that's completely by accident.  :)

>Even X never made it to
>the mainstream.  As I said at the beginning of my message, I don't think
>the current NC architecture is the right one so they, by themselves, won't
>get us there.  But, the fact that people are seriously thinking in this
>direction and putting money behind their words means that we are ready for
>a shift in basic architecture.

I disagree... there are always people ready to throw money away on some 
tomfoolery.  The other fellow who wrote the comment about that being the way we 
can get these bozos fired had the right idea!

>The Web has started this shift.  Information is kept on servers, browsers
>provide HI processing running on clients, and the same information is
>available from any computer HI with an internet connection and a browser.
>But the Web has its limits: performance is a problem, information capture
>isn't simple enough, and management of personal information isn't there
>yet.  The next big step is to make information entered anywhere available
>everywhere.  This is where a secure, distributed OS such as Inferno can
>shine.  With the right information management architecture, devices running
>Inferno combined with ubiquitous networking can get us there.

Obviously I'm interested in Inferno, otherwise I wouldn't be here.  ;-)  But 
let's not throw the baby out with the bathwater.  Let's see what we can do to 
work WITH the unprecedented installed base, and what unique features (otherwise 
nearly impossible to implement well) we can build with Inferno newly in our 
toolkit.

>>>This will allow the information to be
>>>available regardless of the state of any particular HI device.  The worst
>>>feature of current generation PCs is that they can be TURNED OFF, making
>>>the information they contain not readily available.

>>Any computer system I'm aware of can be turned off.  Servers included.
>>And just because you COULD turn a system off doesn't mean you DO.

>PCs typically are turned off, or at least disconnected from the net.  

That's more a function of current ISP pricing and resource policy than because 
they *have* to be removed from the net.  Quick-to-establish ISDN-style 
connections that can be requested either client-side or net-side would make 
dedicated connections less necessary and increase availability.
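The demand-dial idea can be made concrete with a small sketch (hypothetical
names throughout): the link is only brought up when either side actually has
traffic, so you pay call-setup cost per burst of activity rather than holding
an expensive dedicated connection open around the clock.

```python
# Sketch of a demand-dial link: established only when traffic appears,
# dropped when idle.  DemandDialLink is an illustrative name, not a real API.

class DemandDialLink:
    def __init__(self):
        self.up = False
        self.dials = 0      # how many times we paid the call-setup cost

    def send(self, payload, deliver):
        if not self.up:     # bring the link up only when there is traffic
            self.up = True
            self.dials += 1
        deliver(payload)

    def idle_timeout(self):
        self.up = False     # drop the link when traffic stops

inbox = []
link = DemandDialLink()
link.send("mail check", inbox.append)
link.send("reply", inbox.append)         # reuses the existing connection
link.idle_timeout()                      # idle period; link torn down
link.send("morning sync", inbox.append)  # dials again on new traffic
print(link.dials)   # 2 call setups for 3 messages
```

Since either endpoint can trigger `send`, availability looks continuous to
the user while connection time (and cost) tracks actual usage.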

>We need a computing infrastructure with repositories of information that are
>typically not turned off or disconnected from the net.

Again, changing the pricing and accessibility policies can solve this problem 
just fine.  If the pricing and operational characteristics (e.g. connection 
setup time) were set suitably, this might be part of what would make the new 
AT&T wireless local connections that have been talked about rather compelling 
(I hear tell of two voice lines plus a 256 kbit/sec digital data channel, all 
wireless, for about $15 a month).  Especially if that digital data channel were 
intrinsically TCP/IP (or whatever the Internet uses by then), unused until it's 
needed but available instantaneously on request, 24 hours a day, from either 
end... and unmeasured flat-rate service... this would be cool.

>This is why a shift from current PC software (and maybe hardware)
>architecture is important.

Rubbish, there is **absolutely** nothing about the PC software architecture (or 
hardware architecture either, for that matter) to support reaching such a 
conclusion.

Gordon Peterson
http://www.computek.net/public/gep2/