Java, Language of Tomorrow

Yeah, so I've gotten really sick of everyone telling each other how great Java is. The university-types have been slurping up this hype like pigs at a trough. In one class we had to read this huge book with coffee-cups next to the page numbers, written by some Dutch Pollack, Piotr van der Stooje or something. It was a good book, but it had these jokes at the beginning of each chapter, and for the next six months everyone I knew was telling these jokes to each other. Of course all of us had already heard the jokes since we were forced to read the same book. It was nauseating, because either I or the other guy would start out, ``I heard that . . .'' and by the time we finished it would be so obvious exactly where we heard it. It's the same problem that we second-rate family-snapshot photographers with SLRs and more than one lens have with Philip Greenspun's pages: the pages are good, but guess what. I also heard that Minolta makes the best bodies and Nikon makes the best lenses. So tell me a new one. There is too much room in the world for only one Philip Greenspun.

Java actually isn't a terrible language, compared to something like Perl for example, but there are some tremendous problems with it that I think don't get discussed openly enough. I think Java is a bad choice for many of the projects that have tried to use it. During its hype phase, Sun promised us and we in turn promised each other certain things about Java, with incredible vociferousness and emphasis (not unlike the way Linux promises stability and security), and we have dismally failed to coax Java into delivering them. Now that Java is old enough to document its own failure in copious detail, it is not okay to keep using it in new projects as if it were fresh, unproven ``technology'' with great promise, but a lot of kinks that we will have to tolerate as the price for working on the bleeding edge. It's too late to be so forgiving and optimistic. It is time to honestly survey what has happened with Java, and why.

Even when Java was new, its hype came at the price of an honest survey. Had the Java-hypers bantering across neighboring coffee tables been a little more conscious of their own history, we would probably still have been excited about Java, but we would not have been so surprised when things didn't pan out so well. More importantly we would not find ourselves so discouraged by the failure of the only-game-in-town that we would continue with this pathetic denial, pretending Java is still being massaged into usefulness well after it screwed us over so decisively.

Let's consider some much older languages that delivered on some of Java's promises.

emacs
The revolutionary idea of this editor is its ``extensibility'', which it accomplishes through elisp. elisp files end in .el instead of .java, and the elisp compiler turns them into .elc files instead of .class files. .elc files are smaller and execute more quickly than .el files. elisp programs can only run inside their runtime environment, the emacs editor.

emacs has a web browser written in elisp---it displays images and, in my experience anyway, is much more useful than Hotjava. The best news and mail reader in existence is called Gnus, and is also written in elisp. Gnus is good enough that many people patient enough to learn it choose to use it instead of a plethora of traditional machine-binary mailreaders on their platform-of-choice, which is more than I can say for Java news and mail readers which generally only see use because their victims are banished to some kind of web ``kiosk'' and need to use the office's expensive proprietary webmail software.

emacs runs on more platforms than Java does.

elisp programs, in my experience at least, do not suffer from the portability problems that Java programs encounter when moved from one JRE to another. The same emacs codebase builds into the emacs editor-and-runtime-environment on all the platforms that emacs supports. The elisp environment is consistent.

As far as I know, emacs does not have a sandbox-jail for running untrusted programs on web pages. Neither does Javascript, but let's consider Javascript a marketing artifact with nothing whatsoever in common with Java. So, I must grant that emacs doesn't attempt a sandbox, and Java does with at least some meaningful success. emacs also lacks threads.

However, elisp is a more portable language than Java, and elisp has more user applications with relevance to the ``web revolution'' or whatever than Java does. emacs is more convincing than Java as a successful ``middleware'' platform.

Pascal
The following comes from 0 <0@pigdog.org>.

You don't mention one of the first and most successful systems with bytecode, the UCSD p-system.

It was largely responsible for making Pascal THE language to know in the late 70s and early 80s, and gave us the term ``p-code'' which is now used generically for any interpreted bytecode.

Microsoft programs were compiled to p-code into the late 80s, and the MS compilers would emit p-code up until the mid-90s.

I think the main thing he left out, though, was how not-fun writing code in Java is.

So. Java borrows its syntax from C and its runtime architecture from Pascal. These are the two most prominent so-called ``high level'' languages since punch-cards. What architectural basis could possibly be less ``innovative'' than a synthesis of the two oldest and most dominant players?

Perhaps an architecture based on obsolete compilers from Microsoft, the kings of BASIC? Oh, wait. Java is that, too.

The article that <0@pigdog.org> cites goes on to explain that p-code was necessary because of the primitive state of compiler tools at the time. Reusing p-code back-ends produced an inferior compiler, but it saved work for the compiler developers compared to a monolithic design. The p-system defines rigid technical specifications at the boundary between the compiler's front-end and back-end. This means the Pascal--to--p-code compiler can get its funding from a different group than the p-code--to--native-insn compiler, thus penetrating the market with less investment and maximizing ROI. The rigidity of the p-code is what protects business investments in the p-system. All this comes at the expense of modern compiler design, which uses abstract and constantly-advancing intermediate representations that are far less brute-force-literal, often based on strongly-typed functional languages like Haskell.

Now that the market for C compilers is mature, it's possible to fund a compiler without resorting to p-code. Java still splits the compiler investment: for example, Sun funds development of 'javac' while Digital funds the Alpha/Tru64 runtime environment. Two companies pool investment to get Java servlets running on the Alpha, and their responsibility splits along the boundary of the simplified, inflexible p-code.

Sure, there are other reasons for p-code, but Pascal's history shows how, in a more realistic context, decisions Sun tries to pass off as clever architecture fit better as cynical bows to unpleasant business realities.

OpenFirmware
Since some moment shortly after the release of the Sun4, Sun has been putting a Forth runtime environment into their machines' firmware. The firmware is called Openboot, OpenPROM, or OpenFirmware. PeeCee zombies will recognize it as analogous to ``The BIOS'', but Sun firmware does not serve exactly the same purpose in the same way as the PeeCee junk.

An OpenPROM user can write small Forth programs at an interactive prompt and execute them immediately. One can also write OpenPROM-Forth programs to run at boot. These 'nvramrc' programs contend with other information for 4kB of nonvolatile storage on the clock chip. OpenPROM is a complete minimalist operating system including device drivers for all interesting hardware in the system, memory management, a text editor, and a debugger. The OpenPROM operating system can even load and run programs stored on the disk. Solaris is one of the most popular programs for OpenPROM OS.

Solaris does not include hardware drivers for anything where an OpenPROM driver will suffice instead. The OpenPROM Forth runtime environment is slow, so, for example, it's totally unsuitable for disk access. Solaris includes SCSI and IDE drivers for direct hardware access to disks.

However, when Solaris users login to ``the console'' without the X Window System, it is the OpenPROM framebuffer driver that paints characters onto, scrolls, clears, or flashes the screen. Solaris passes small Forth programs to OpenPROM for execution under its runtime environment. The console on Suns is notoriously slow, but one can boot and halt Solaris without clearing the screen. OpenPROM drivers can also stand in for non-critical-path functions in real drivers which must be fast. For example, an Ethernet driver needs to access hardware directly for sending and receiving packets, but the OpenPROM network driver might be an appropriate way to load the hardware address into the Ethernet chip. X needs to write directly to the framebuffer, but it doesn't need to know about monitor sync timings, or how to program resolutions and bit depths into the framebuffer's registers.

SBus and PCI expansion cards for Suns contain a ROM with OpenPROM drivers burned onto it. At boot, OpenPROM enumerates all the cards and links in their drivers. The drivers are stored in ``F-code'', a ``compiled'' version of OpenPROM Forth. There is no SPARC code stored on these ROMs, so Sun could theoretically pull another CPU-switcharoo without updating the OpenPROM drivers on their old expansion cards.

Obviously, it doesn't make sense to write general-purpose applications in OpenPROM Forth, but its existence somewhat knocks down Java's claims of revolutionary novelty.

OpenPROM is a particularly ironic example of how little new ground Java has broken, because the same company that claims to revolutionize the industry with fresh ideas like bytecode and The Java OS has been using these same ideas for the last ten years with no fanfare. Why have ideas that used to be taken for granted suddenly become frantically-hyped propaganda?

Even more laughable, OpenPROM compatibility is a catastrophic disaster---far worse than Java---ever since Apple and Firmworks began releasing hideously bug-ridden implementations. I have seen nothing so awful in a Sun machine made in the last ten years. Apple is the kiss-of-death for OpenFirmware. But, you must admit, the unrealized idea is kind of neat---PCI framebuffers and disk controllers might have worked for console output or booting in PCI-based machines from both Sun and Apple. Oh well. In the meantime, at least all the old SBus cards and machines seem to work together fairly well.

The Newton
Apple's Newton PDA, though discontinued, remains a generation ahead of any currently shipping PDA. The most powerful and expensive Newton, the Newton 2100, contained only 4MB of RAM, which it used as ``heap'' for its NewtonScript runtime environment. Do not think, however, that the Newton was similar to other 4MB machines. It had a StrongARM CPU similar in speed to the iPaq's, a gigantic screen, two PC Card slots with support for Ethernet and TCP/IP, and a web browser called NetHopper written in NewtonScript. Show me another device, of any kind, which can claim this with 4MB of RAM.

NewtonScript addresses the cost and power constraints of PDAs better than anything in a currently shipping PDA. Both RAM and stable storage are relatively expensive, and the former is power-hungry as well. Fast CPUs, however, can be cheap and use little power. The executable images of NewtonScript programs are very small.

One writes his or her NewtonScript program on a Macintosh, ``compiles'' it into a binary image with the NTK (Newton Toolkit), and then sends it to the PDA. Compiled NewtonScript is not an arm32 executable, but rather requires the NewtonOS as a runtime environment. Sound familiar?

I already mentioned the web browser written in NewtonScript. There are also FAX programs, email user agents, web servers, and NewtonWorks---the word processor. Newtons have keyboards, printer drivers, and support for modems and networks. Newton software comes in shrink-wrapped boxes and is still for sale. Newtons are computers. The only thing missing is the concept of a ``double click''. All Newton programs must be written in NewtonScript.

Now, where is the Java PDA? The Java word processor? The Java web browser? The Java printer drivers? Where can I buy them? And who has written the JRE that can run all these programs in 16MB of FLASH and 4MB of RAM?

And, if someone does some day manage to bring these things into existence, I must still ask, where is the recognition that the Newton did it first and did it better?

Oberon
This one is a bit dubious, because these researchers are presenting Oberon as a response to Java, not a predecessor. It's possible they started their work before Java, but as it stands now they are clearly offering a Java replacement.

Also, ObjectPascal strikes me as somewhat of a pet project. I'm not sure there is a legitimate reason for perpetuating it, and suspect instead that certain people may have spent ten years writing Delphi programs, and then got sad when Borland stopped releasing new versions. These Oberon people are clearly on their knees begging us, implausibly, for attention, which is not a good position to adopt while advocating some broken pet technology just because you grew up with it and wish everyone else would start using it, too, to save you the bother of learning new and redundant things. I don't like getting into ruts. It's bad enough that Java is disgustingly C-ish, without being disgustingly Fortran-ish, Pascal-ish, or VMS-ish. I also think their dwelling on load-time and compile-speed is ridiculous---improving upon Java does not mean adopting the same nutty rhetoric.

Anyway, Oberon sounds kind of neat. I've never tried it, but, as we will discuss later, they claim compiler technology which is decisively a generation ahead of Java, and their small binary size goes well with things like celfones and Newtons.

Java is an old idea, not a new one. It may seem new to people who are astonished to find that languages other than C can do useful work, but everything supposedly revolutionary about the Java architecture is a rerun of well-worn ideas. And not a particularly good implementation, either. Let's start looking at ways that Java fails, both compared to its own promises and compared to some of the alternatives above.

Myth: Java programs are portable

This is, of course, Java's central claim. Everything else is either gravy or apologetics.

I've heard all the Java hype about portability and abstraction. I've heard the claims about why nothing short of Java can ever be truly portable across CPUs, because of the sizes of variables and pointers or exposed endianness or something. Then I wrote a Java program or two myself. The first things I noticed were:

If NetBSD 1.5 can run a ten-year-old NetBSD binary compiled into machine code, why can't new Java runtime environments execute Java bytecode compiled with older development kits? No, don't just brush this off. Old programs exist. I can run old C programs. I can even run old C programs that have been compiled into machine code. In fact, I can run old C programs compiled into MIPS machine code on Digital's ancient ``Ultrix'' Unix for their discontinued ``pmax'' workstations. I can run these programs on my NEC MC-R700A ``MobilePro'' PDA running NetBSD, which just happens to have a vaguely-related CPU core even though it is a totally different type of device. These programs were written and compiled long before Java was even proposed, on a completely different discontinued machine, with a discontinued C compiler, compiling to a two-generations-older version of the MIPS core, and a variant of Unix that's two or three generations old. Why can I run them alongside modern C programs on a modern ``PDA'' laptop, but I can't run a Java program written six months ago on any current machine anywhere?

Why do we see all these ``Best with MSIE 5.0'' banners on web pages, and find that the ``Pure Java'' applets contained therein do not work in any other JRE? Why does the warez-pushing Java applet on MyDisk.com work on PeeCees but not Macintoshes? I might be inclined to hear the usual Microsoft conspiracy theories if I didn't have all the same problems myself switching between Netscape 4.6 and the Sun JDK.

Okay then, it must be ``Because the developers didn't write portable Java code.'' They used constructs in their Java programs that weren't portable. Excuse me, did I hear that right? That is precisely what Java's portability promises claim is impossible to do. It is exactly the supposed portability problem with C programs. Java has it, too, and worse than ever. ``Portable code''---Java claimed to be a portable language, and simply isn't. Don't condescend to me with vague wizard-behind-the-curtain whitepaper essays about endianness and the size of pointers and integers. The Java API is not consistent. The Java API is not as consistent as the C API.

Fortunately, Sun is on top of this problem. Their latest ``Web Start'' wrapper claims to facilitate arbitrary released versions of the JRE on a single machine, so each Java program you run can demand its favorite revision of the JRE and your Windows machine will automatically download and install the right JRE release from Sun (at 50MB of disk and who knows how much core, each). Thanks to Web Start, it's slightly more practical to run Java programs, so long as you're running them on Windows. If not, you'd better hope your vendor offers not only the latest JRE, but backports of every JRE that Sun released, including a separate port of every incremental JRE release Sun has ever made. And you'd better be prepared to handle the revision management yourself in case your vendor hasn't designed a ``Web Start'' framework that exactly mimics Sun's.

I don't get it. C has various standards---ANSI C, Posix, periodic releases of the STL---but I have never found a C program that only works with a C library that implements an older Posix standard.

The whole reason we were told Java's portability was so important is that it's supposed to run applets inside web pages, and without good revision management only a small fraction of the theoretically very diverse browser installed base would work. If Java was designed from the ground up for use in web pages, why can't anyone predict how an applet will behave without testing it in all the relevant browsers? It should be enough to test in the JRE's appletviewer. Why are standalone Java programs so much easier to write than applets? And why don't all standalone JREs ship with a working copy of Hotjava?

If Java was designed to be portable, why is it so much easier to port C programs to different Unixes than it is to port Java programs to Java Runtime Environments on different Unixes? I've heard people complain they cannot get the Freenet distribution (an anonymous file-sharing and publishing architecture written in Java) to work on JRE X, so they are trying JREs Y and Z instead to see if problems are less catastrophic there. If Freenet were a C program, it would have been picked up by all the Unix package collections by now, and would be just as easy to install as lynx or mutt. Since it's written in Java, it's a portability nightmare, and only a small inner circle has gotten it almost-working. Java's decoy claims of portability have in effect killed the Freenet, and dragged the Freenet architecture down to the same level of broken fantastic promises that Java makes. ``The mythical Freenet about which we have heard so much.''

If Java itself is portable, then why isn't there a portable way to install and run a Java program without dealing with spaghetti .class-files, setting CLASSPATH, and referring to arcane modules contained within .jar files? Why do we have to use a Unix shell script to start a supposedly-portable Java program? C defines source code, header files, libraries, object files, and fully-linked programs. This includes a pretty clear concept of what is a program, and a uniform way for starting programs.

emacs is a little bit freer about its namespace and cooperation between programs than C, and there could be some dispute about where one elisp program ends and another begins. But, it's portable. Installing an elisp program involves adding some path and hook code to your .emacs. Using an elisp program happens with one or more M-x commands that start the program, and exactly one node in the Texinfo documentation library. The same elisp installation method, entry points, and documentation library work with all ports of emacs.

Java defines nothing past .class files. The way class files find each other is unportable---it's left up to each JRE, and there are usually several options: .class file in the current directory, .class file in the CLASSPATH, .class file fetched from a base URL, .jar file, .zip file, .cab file. Nor is the entry point to a Java program portably defined---no .class file is marked specially as startable, so it is hard to even identify how many startable programs you have just installed. There is no singular documentation library, nor is there a uniform namespace for starting programs as provided by M-x. MindTerm, for example, is either an Applet or an Application, has several entry points, and is distributed as the same set of .class files redundantly smushed together in three different ways: two .jar files and one .cab file. Which entry point and which .class file collector actually works for running the program depends on your JRE.
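
To make the complaint concrete, here is a sketch using a hypothetical class name. Nothing in the compiled .class file announces that it is startable; the only thing that makes this class an ``entry point'' is the presence of a main() with exactly this signature, which you have to learn about out-of-band.

Java
   // Hello.java -- hypothetical example.  Nothing in the resulting
   // Hello.class marks it as startable; the JRE just trusts you to
   // name a class that happens to contain this magic method.
   public class Hello {
       public static void main(String[] args) {
           System.out.println("hello, web revolution");
       }
   }

How you then start it depends on the JRE and on how the .class files were packaged: something like 'java -cp /some/dir Hello', or 'java -jar hello.jar' if somebody remembered to put a Main-Class line in the jar's manifest, or an applet tag pointing at a codebase URL. C's notion of a fully-linked program settles all of this once.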

If Java was designed to be portable, then the design failed. Yes, Java sucks. But surely this is just a matter of the tower toppling because of the mad, insatiable pace of economic development. The foundation of Java is good. Java broke new ground for the industry. The next version, or some differently-branded but fundamentally similar successor, will undoubtedly deliver what was promised. All Hail, The Next Version! Java merely needs to ``stabilize.'' And, in any case, the Industry has Learned Things from Java, which will positively influence other unrelated Technologies.

I wonder.

The Lore The Web Browser precipitated a Language Revolution by allowing programmers to write code in notC and actually run it somewhere. Finally we have escaped from the C shackles of legacy Unix-inspired OSes. Without Java in the Web Browser, the only way for a programmer to reach the masses was to write in C++ and build his programs for the PeeCee or the Mac, because all other development environments on those two platforms are of low quality and hopelessly out-of-date. Java is thus the only realistic alternative to C for programmers who want to code for more than academic curiosity.
The Reality Unix precipitated the Language Revolution. All sorts of wacky languages make sense for writing CGI scripts, and all these languages run great on Unix. Because Web Browsers connect to Web Servers, ordinary users have an opportunity to run their programs on Unix application servers. Now that practically everyone has access (albeit, only through this twisted ``CGI'' metaphor) to a Unix runtime environment, a lot more people can enjoy programs written in notC by running CGI scripts, even though these people can't yet benefit from the uniform and flexible Unix environment by running it, and the notC programs, on their personal computers.
The Lore Java has an excellent and well-documented standard library, which makes it attractive compared to other languages.
The Reality Most programming environments have large libraries. Fortran has huge chunks of code available from its scientific supporters. Similar arcane mathematical routines save MatLab programmers lots of time in making graphs or drawing inferences from datasets. C and Perl have gargantuan freely-available libraries. emacs, by being an editor, gives programmers who want a simple user interface an excellent head-start through the concept of a ``buffer'', which is eerily similar to a ``web page,'' even though using the buffer as a generic user interface predates the web. Why is Java's environment different or better than every other language which also includes lots of tools and libraries and simplified user interface kits?

Because it's ``standard''? This actually makes some sense at first---everyone has this huge library available all the time, so you don't have to bother with finding, fetching, and installing just the libraries you need. However, it creates the ``version skew'' problem. If I require an updated gzip inflate-deflate library, I'm supposed to get it by upgrading to the latest JDK. New JDKs are not compatible with old Java source code, so before I can use the new gzip library, I have to port my program to the new JDK. This is annoying enough in itself, until it quickly leads to the nightmare of having several JDKs installed on a single machine in support of a single project, with some vague long-term hope of eventually porting all the modules to one JDK so that the program can actually run.
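
For the record, the inflate-deflate code in question lives in the java.util.zip package, and using it is pleasant enough---roughly like the following sketch, in which the file names are made up. The catch is the one just described: the version of this library you get is welded to the JDK you run, so a fix or a new feature in it arrives only as part of a whole new JDK.

Java
   // Hypothetical sketch: gzip a file using the JDK's bundled zlib classes.
   import java.io.FileInputStream;
   import java.io.FileOutputStream;
   import java.io.IOException;
   import java.util.zip.GZIPOutputStream;

   public class Gz {
       public static void main(String[] args) throws IOException {
           FileInputStream in = new FileInputStream("notes.txt");
           GZIPOutputStream out =
               new GZIPOutputStream(new FileOutputStream("notes.txt.gz"));
           byte[] buf = new byte[4096];
           int n;
           while ((n = in.read(buf)) > 0)
               out.write(buf, 0, n);
           in.close();
           out.close();   // close() finishes writing the gzip trailer
       }
   }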

Languages like C, Perl, and elisp permit their programmers to upgrade components incrementally, while maintaining a uniform interface to the overall collection of code and documentation. With C, one can solve the find-fetch-install problem with something like NetBSD's packages collection, which makes adding a popular C library as easy as adding one dependency line to a Makefile. Perl and elisp tend to slowly absorb popular libraries into a Java-ish standard collection, but the collection is not as opaque as a JRE.

Well, then, Java's standard library is better because it is well-documented? Please! I far prefer Unix's man and texinfo pages, because they work on my unconnected laptop, load quicker than web pages, and look nicer on paper. Others are welcome to disagree, but one thing is beyond dispute: there is nothing revolutionary about putting documentation on web pages. man, texinfo, and Perl ``pod'' documentation can all be automatically translated into web pages, but most people seem to prefer reading them with some ``native'' interface.

Oh, but Java lets you interleave code and documentation in the same file---that way, the documentation doesn't get out-of-sync. Pah! How ridiculous that Java, with its .class spaghetti, should suggest that ``files'' are somehow sacred. Most programs are made of more than one file. In large jobs, the files are connected to each other and kept ``in sync'' by revision control. Why shouldn't documentation be in a separate file? If you want to edit both at the same time, open two windows in your editor. ``Literate programming'' sounds like a nice idea because of all the fancy words---like, ``if only I were smart enough to envision how these apparently-amazing ideas change the way a Real Programmer thinks and works, I'd understand why literate programming is earth-shattering. Only because I lack Vision, a Ph.D., and Experience, and thus am not a Real Programmer, does it seem rather unremarkable to me.''

Anyone who tries ``literate programming'' will soon discover that English and Java are two very different languages, and writing your program twice in both languages doesn't do much to help someone who can already read and write both languages. Someone like a Java programmer, for example. What I actually do find helpful is BSD's practice of putting C source code for libc in /usr/src/lib/libc, where anyone can look at it. This is actually a useful supplement to the man page. Why doesn't Java's hyperdocumentation let you look at the source code for the routine you're reading about? It's ironic that Java combines documentation and source code in one file for no other reason than to later facilitate separating the two and storing them as far apart as possible with no connection between them---even so far as including half of each file in your JDK and banishing the other half to a web page---lest a developer accidentally take advantage of this intentional proximity by reading both documentation and source code at the same time! Sun's latest 1.4.0 JDK (s'cuse me, ``JSDK'') perpetrates the ultimate irony of literate programming: the documentation and source code are released on different dates and under different licenses.
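
For readers who haven't seen the interleaving in question, it looks something like this (a made-up fragment); the 'javadoc' tool later strips these comments out of the .java file and renders them as the familiar web pages, at which point the documentation and the source code part ways for good.

Java
   /**
    * Returns how many times the given widget has been frobbed.  This
    * comment lives in the .java source file; 'javadoc' extracts it
    * into HTML and leaves the code behind.
    *
    * @param widget  the widget to inspect (Widget is a hypothetical class)
    * @return        the frobnication count
    */
   public int getFrobCount(Widget widget) {
       return widget.frobs;
   }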

The Lore Java's bytecode is the astonishingly clever and totally unprecedented key to Java's amazing portability.
The Reality The notC languages discussed above---elisp, Pascal, OpenPROM Forth, and NewtonScript---all feature an intermediate representation reminiscent of Java's bytecode, so it is hardly unprecedented.

However, Java's bytecode is distinct from three of these in an interesting way. Compiled NewtonScript, .elc files, and F-code are all logical representations of the original program that maintain most of its linguistic structure, and do not at all resemble a machine language that could run on a real CPU. Java bytecode is the machine language of a stack-based CPU which doesn't exist, but is very similar to other already-existing or imaginable CPUs. Compiling Java into bytecode throws away more information than compiling elisp, OpenPROM Forth, or NewtonScript. Java bytecode is also unusually large compared to these other languages' compiled forms.
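
To see what ``machine language of a stack-based CPU'' means in practice, here is roughly what 'javac' produces for a one-line method (a made-up example, with mnemonics as 'javap -c' would print them):

Java
   // source:
   static int scale(int x) { return x * 128; }

   // approximately the compiled bytecode, per 'javap -c':
   //   iload_0       push local variable 0 (x) onto the operand stack
   //   sipush 128    push the constant 128
   //   imul          pop two ints, push their product
   //   ireturn       pop an int and return it

It reads like an assembly listing for a register-less CPU that happens not to exist, which is exactly the point.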

Sure, there is some association between the imaginary JavaCPU and the Java language. But there is also some association between the real MIPS CPU and the C++ language: modern CPUs are overtly designed with the foreknowledge that they will be judged based on how fast they execute algorithms written in C++. Is the nonexistent Java CPU really uniquely adept at running Java programs, like a Symbolics ``Ivory'' CPU is uniquely adept at running Lisp programs? No.

This observation makes me wonder if we wouldn't be better off throwing out Java and writing programs in C, then cross-compiling them for the VAX CPU. Instead of a JRE, we could simply write a VAX emulator for all interesting architectures, and put virtual-VAX sandboxes inside web browsers. VAX insns would become the Language Of The Web. We could standardize a crippled miniature virtual VMS called WebVMS for building into web browsers, to give all these VAXlets access to the network, the local filesystem, and a GUI toolkit. We would have VAX-compatible smartcards. There is no reason a VAX emulator can't translate VAX instructions into native-CPU instructions just like a JRE's JIT does. This can be done quite well---the VAX emulator for the Alpha is faster than any physical VAX CPU. I suspect VAX machine-code would also be similarly compact to Java bytecode, since machine-code-compactness was the biggest priority when the VAX was designed. There is no missing piece to invent or design: we simply agree that, henceforth, all applications will be VAX applications so as to be equally inconvenient for everyone. Voila! Portability!

Why is the Java CPU a particularly good intermediate representation? Why is it better than the VAX, for example? I think it isn't. F-code, .elc files, and compiled NewtonScript are all defensible representations. What's Java-bytecode's excuse?

How is a Java program more CPU-independent than a C program that we promise will only be compiled to run on the Java CPU, or the VAX? C is also CPU-independent, if you promise that you will never compile it for any CPU but one.

The most important feature of the Java CPU is that it doesn't exist yet, but it could. Part of Sun's plan to make money from giving away Java was to build Java CPUs. This was a stated plan of theirs, not my speculation. The following is much more speculative.

Sun reasoned that if they could construct computers that ran Java bytecode natively, everyone else would suddenly realize that their JREs were CPU emulators for the computers Sun was selling. I suspect the fastest JavaCPUs would always be emulators, just as the fastest VAXen are emulators running on Alphas, but perhaps Sun could make some slow computers out of physical JavaCPUs that cost less to make than competing emulators of similar speed. The JavaCPU is thus extremely simple, so as to be implementable with minimal silicon and minimal research. Compared to a modern architecture like the Alpha, the JavaCPU looks like something an undergrad dreamed up in the men's room based on the mathematical elegance of two urinal cakes, one stacked upon the other. The JavaCPU idea gives Sun a brief window for a hit-and-run industrial subversion. If Sun can open hundreds of these tiny windows-of-opportunistic-slack, maybe a few of them will pay off. Something like the Alpha matures slowly, with many years of compiler research and successive core revisions abstracted by PALcode: the Alpha is not hit-and-run, and indeed it looks like Digital didn't survive long enough to finish cashing in on the Alpha. A low-end infantile design like the Java bytecode makes business sense in an industry where everyone with a big plan or a big research investment eventually gets screwed over by speculative neck-tie damage and goes bankrupt.

Sun did make some ``Javastations'', but they fulfilled the JavaCPU dream in name only. They were merely proprietary SPARCstations running JavaCPU emulators just as regular workstations do. NetBSD/sparc runs on Javastations, but I don't think Solaris does. For better or worse, it looks like the speed-price point where the JavaCPU makes sense, if there is one at all, isn't very interesting. Maybe it's good for a smartcard or something---the JavaCPU can keep track of your celfone number or what time you entered the parking garage.

In that case, shouldn't real applications use an intermediate representation that is more compact and permits higher-performance JREs, like the other notC languages that preceded Java?

The Lore Java is uniquely suited to the explodingly important embedded market because its security architecture and reference pointers allow buggy Java programs to run safely on a primitive OS with no memory segregation, and its bytecode and standard class library make it perfect for inexpensive low-power devices which usually have plenty of CPU but minimal RAM and stable storage.
The Reality In the desktop market, we can sell more computers by claiming that ``one misbehaving application will not bring down the entire system.'' Desktop users purchase ``applications'' on CDs, install them, start and quit them, and switch between them. Sometimes it is a little bit easier to get work done with a high-quality OS and low-quality applications, than if both OS and apps are of low-quality, but of course neither situation is ideal.

The ``uniqueness'' of the embedded market is that embedded devices usually draw no clear distinction between the operating system and the application. The programmer might draw such a distinction, but the user does not. Indeed, to the user, there is no distinction between the device and the software.

I have a Motorola i85s celfone, which has some minuscule Java-branded runtime environment in it. When I complain to someone that ``my celfone crashed,'' or say that Motorola's stuff has ``crappy software,'' they usually sneer at me and deliver some speech about not being technically-inclined and not ``understanding all that,'' but that they also own a celfone, and they can make phone calls with it. There is no distinction between software and phone.

K, so JavaOS can more cheaply provide the feature, ``if one program misbehaves, it won't bring down the whole operating system.'' Pardon me, but if embedded-device users won't even admit that the device has software in it, how will it pacify them to learn that their celfone's operating system didn't crash---it was just the ``phone application'' that crashed? I'm betting they won't care.

There is no need to segregate applications in a delivered embedded device. The device needs to not crash, at all. The embedded market makes all this nonsense about memory protection and kernel integrity irrelevant to everyone except the developer.

Java's ``standard'' class library presents another problem for embedded devices. It is gigantic. Every embedded device that wants itself stamped with ``Java'' has to choose a different incompatible subset of the standard library to include. This decision has two slightly different scenarios.

A ``closed'' embedded device is one scenario. We want to design a sealed product and use Java for the benefit of our engineers, who like and know Java, or who feel that this particular OS kit has some attractive characteristic. Here, the standard class library is annoying because it is too tightly integrated with the JRE. We buy an embedded Java kit from someone and are forced to install the entire kit onto the device. Some standard classes in the kit will take up space in the device's ROM without getting used, while other classes that we might have appreciated will have to be implemented by hand. A C-based kit like vxworks can provide a huge standard library and statically link in only the symbols (functions/methods and variables) that our C application uses. The finished ROM image will contain only code that actually gets used. It will not contain pieces of the standard library that never get called. In fact, it won't even contain symbol names: the textual names of variables, methods, classes, and so on. Compiled Java .class files must contain these because Java's analogue to the C ``linker'' is part of the JRE, which puts us in an absurd position where using short, inscrutable variable names will actually reduce the size of the ROM chip we must put inside our VCR. This factor naturally pales in comparison with including standard library code that is never used, but it illustrates how Java's inflexible runtime linking is problematic for embedded devices.
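
The name-bloat claim is easy to check, because the .class file format stores every class, method, and field name as a literal string in its ``constant pool'', which 'javap -verbose' will happily dump. A made-up illustration:

Java
   // Every identifier below is stored, spelled out, in the compiled
   // .class file's constant pool (inspect it with 'javap -verbose'),
   // so this long-named class really does compile into a bigger
   // binary than the same logic written with names like 'a' and 'f'.
   public class VideoCassetteRecorderTunerController {
       private int currentlySelectedChannelNumber;

       public void selectNextAvailableChannel() {
           currentlySelectedChannelNumber++;
       }
   }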

The second scenario is something like a Java celfone which allows users to download tiny Java craplets over i-mode or WAP to customize their phones. In this situation, we must rigidly define the ``standard'' API so that craplet publishers can run the same image on everyone's phone. Here, the question becomes ``what is Java?'' Do we mean application-Java, web-page-Java, Keitai-Java, or smartcard-Java? Motorola/Nextel Java, or NEC/NTTDoCoMo Java? The existence of a ``standard'' class library that isn't standard, but rather is individually customized and abridged to match the scale of each endeavor to which Java is applied, becomes very confusing. I know my i85s phone is a ``Java'' phone, and as a consumer I recognize the ``Java'' brand. But does this mean I can run the same applets on my phone that NTT DoCoMo subscribers are running on Java-branded phones in Japan? Based on our experience so far with a grab-bag of incompatible Linux-on-i386 JREs, I can't imagine there is any consistency between phones except that which is forcefully imposed by OEM distributors like NTT DoCoMo, onto their walled-garden subscribers only. If I write a Java applet for celfones, who is my audience? All Java-branded phones? I don't think so. As a phone user, the basically untainted DoCoMo brand becomes more important than the scattergun Java brand. As a developer, this is many times more annoying than the well-known incompatibilities between JREs in web browsers.

The mindblowing complexity of Java's ``standard library'' creates this confusing problem. First, there are scores of slightly different Java-ish languages out there, all calling themselves Java. The standard library, isn't. Second, it is trivial to add IJG's libjpeg to my C application written under vxworks, but if I want to add JPEG decompression to my Motorola-i85s-Java application, the tight integration of JREs means there is no meaningful guarantee that I am technically able to plagiarize this JPEG code from some other, larger JRE---much less that I can do this legally.

The Lore Java's innovative ``Just-in-time compiler'' solves ``the interpreter problem.'' Other interpreted languages lack just-in-time compilers, and thus are too slow to be attractive compared to C.
The Reality There is no ``interpreter problem.'' We generalize computing machinery by describing computations in programming languages.

However, machinery differs in its adeptness at solving various problems---for example, different machines have different kinds of floating point units, or none at all. Some machines are good at vector math like Cray's Y-MP CPUs, or running massively parallel instruction streams like clustered PeeCees or Cray's air-cooled Alphas. Symmetric shared-memory multiprocessing currently hits diminishing returns once you have about eight CPUs. SGI has multi-CPU towers where several chunky-desktop-size cases are connected by a high-speed, low-level bus called HSSI such that groups of eight CPUs inside each case have an affinity for their internal block of physical memory, but all CPUs can transparently access all memory---Irix supposedly has some rather impressive distributed computing features so that, if you write programs that match their internal behaviour to this architecture, Irix will properly align threads across the chunky cases which make up the tower. Most modern CPU instruction set architectures have branch-prediction for the indirect function calls that object-oriented languages use. Some, like the Symbolics Ivory CPU or the IBM AS/400, have special memories that can do hardware type-checking. Like I said, machinery differs in its adeptness at solving various problems.

Similarly, languages differ at their adeptness and convenience in varying kinds of description. XSLT or XSSSLT (or whatever it's called) is a functional language that is very good at describing how to translate a single proprietary XML description of content into several nonproprietary markup languages like the web's HTML, WAP's WML, and DoCoMo's cHTML. Perl is ``good with text files'', but XSSSLT is more adept at describing this particular translation.

It is up to us to figure out how to execute a program efficiently. We might choose a specific machine which is adept at the task. We also have a bag of well-known tricks. C uses one such trick: we can use a very expensive translator called ``the compiler'' to translate the program into a different language whenever it changes. Since the program changes very seldom, this is efficient. However, there are drawbacks to this trick: it makes some algorithms inconvenient to describe in C, and it makes some ``optimizations'' impossible.

It is said, ``those who do not study Lisp are doomed to reimplement it. Poorly.'' The popular C compiler 'gcc' uses a Lisp-ish language internally to represent the program being translated. The fact that the C compiler must do this, and must be itself written in C, is probably part of what makes the C compiler itself so slow.

The JIT's idea of translating one machine language into another at runtime in a way that takes advantage of linguistic constructs describing repetition like ``loops'' to save translation-time is not new. Apple translated m68k code into ppc code this way when they replaced their old Macintoshes with ``Power Macintoshes''. Digital did the same thing translating VAX insns to Alpha insns. The ``Executor'' product for running Macintosh programs on PeeCees translates m68k code into i386 code this way. These ``emulation'' tools are short, hand-optimized routines for translating from one relatively simple imperative language into another. Their ambition pales when compared to a modern Lisp runtime environment.

Lisp is an interesting language to bring into our Java discussion because it is one where investing huge amounts of work into the runtime environment can produce a huge execution speed increase. Popular C compilers and CPU emulators, by comparison, have progressed past the point of rapidly-diminishing returns. Modern Lisp tools, just as Java's JIT does, avoid translating code from Lisp into the target machine language multiple times when that code doesn't change. Note that when I use this particular language to describe how good Lisp environments work, the behaviour is identical to a C environment. Concepts like ``loading'' or ``running'' a program, or ``rebooting'' a machine, are fragile and subject to re-interpretation. If you are going to pontificate about ``The Interpreter Problem'', you cannot presume that .foo files are compiled into .o files and then linked into .app files. The Newton PDA doesn't even have ``files''.

Suppose my Lisp Machine lacks the notion of ``loading'' programs. All programs are always loaded. I merely decide which ones I want to run, and when I run them, they are translated into machine language whenever necessary. When I want to power down or ``reboot'' the machine, the state of all these loaded programs is saved to disk. Suppose this ``state'' includes the machine language versions of anything that got translated. Now, this may not be a particularly desirable way to build a Lisp Machine, but performance-wise it seems comparable to the way popular machines accommodate C programs. The only difference is that you have to write ``Makefiles,'' and I don't. When I say that ``programs are compiled whenever they change'' boils down to the same thing in any language, I'm serious.

Programming environments also perform optimization. Optimization is the idea that a language's runtime environment should understand the meaning of things written in the language enough to draw simple logical conclusions about equivalent programs, written in the same language, which will run faster. In C, optimization happens only in the translator. C compilers perform ``static'' optimization, meaning they must operate without ``running'' the program. This, in itself, is a fragile statement because even a human who looks at source code and figures out what it will do is ``running'' the code inside his or her brain, and in that sense static optimizers may ``run'' parts of the program they're optimizing: stated less ambiguously, a C compiler is not allowed to change its optimization decisions unless you change the program. A simple static optimization might be to realize that (x * 128) is equivalent to (x << 7).

While Fortran is usually considered categorically inferior to C, the language permits more static optimizations. Math routines will often run faster if written in Fortran than if written in C, given a good compiler---particularly on SIMD CPUs like the Cray Y-MP or PowerPC AltiVec.

Lisp runtime environments are allowed to change their optimization decisions while a program is running. This is dynamic optimization, and can give Lisp code speed advantages over Fortran and C. It is not Lisp which makes dynamic optimization possible, but the traditional architecture of Lisp runtime environments: Lisp programs are not irrevocably translated into machine code once like a C compiler does, but rather translated as needed while they run. We could run C programs this way, or using a tool like 'stalin' we can obliterate the dynamic optimization opportunity by irrevocably translating a Lisp-ish program. The Lisp runtime architecture has the freedom to change its optimization decisions after profiling the running program. Running the program on the CPU can reveal its nature in ways that sort-of-running it on the static optimizer cannot.

The idea of ``just-in-time compiling'' reflects a fundamental ignorance of quality Lisp runtime environments. It is a C-centric attitude: naturally, thinks the C bigot, everything must be translated into machine code by a ``compiler''. In Java, we will place the ``compiler'' in a slightly different spot, but it will still perform only static optimizations like its Cish inspiration. Lisp runtime environments include JIT functionality as a matter of course. Any well-designed runtime environment will conserve its translation effort. Pushing the envelope in a Lisp runtime environment is thus about more complicated issues than the fragile and ambiguous word ``compiler'', issues like designing more aggressive dynamic optimizations. Java has given a name, ``JIT,'' to a practice that, before Java's invention, was considered mandatory.

It gets worse. Java's intermediate ``bytecode'' representation obliterates much of the linguistic structure that exists in Java programs before compiling them into ``bytecode''. The intelligence of the predicate-based-AI used for optimization is very limited compared to the intelligence of the humans that created the linguistic structures (in Java) which are obliterated by 'javac' translating Java into bytecode. Destroying this linguistic structure is harmful to optimization. To see why, consider the static optimization I described earlier of (x * 128) into (x << 7). The 6502 CPU in the Apple //e lacked a multiplication instruction, so integer multiplication in C might be translated by ``the compiler'' into an additive loop in 6502 assembly. Which of the following patterns is easier to recognize and optimize into (x << 7)?

C language
   answer = (x * 128);
6502 assembly pseudocode
   accumreg = 0; ireg = 128;
   for (; ireg != 0; ireg--)
       accumreg += x;
   answer = accumreg;
I assert that it is easier to write correct static optimizer patterns that will recognize the first case as a shift-left than the second case. If it's unclear how a computer would do static optimization, try writing down some AI rules in English for a static optimizer that runs on Grad Students. Which optimization is easier to program the Grad Student to perform?

That's why translating from Java into Java bytecode makes optimizers less effective. C does not have this problem, because the optimizer has both the high-level language and the target CPU's characteristics available. OpenPROM F-code, compiled NewtonScript, and Oberon slim-binaries also do not have this problem, because their bytecode preserves the high-level language's linguistic structure better than Java's bytecode.

It is hard to make any clear statements about ``optimizing'' without getting bitten by a practical exception, especially with my limited knowledge, but the points are: the language in which a program is written affects how it can be optimized, and optimization is something that works best with high-level languages.

I understand that the ridiculous compiled vs. interpreted argument is no doubt popular among former GWBASIC programmers who learned that C programs are faster than GWBASIC programs because C programs are ``compiled ahead of time'' into ``the very same instructions that run on the physical CPU.''

Hopefully it is now clear why this is infuriatingly narrow reasoning. All programs that run on a CPU are eventually compiled into ``the very same'' machine code. Otherwise, they couldn't run. Victims of Microsoft ROM BASIC can use whatever words they want to interfere with their understanding of this fact, but there is no formal distinction between interpreting and compiling. There are all sorts of wacky schemes for executing programs written in programming languages, and some of them are more clever than others. I assume the internal design of GWBASIC is astonishingly unclever, but we will probably never know. The strategy most C environments use to execute programs may be more clever than GWBASIC, but it's still a very simple strategy. I believe the performance reputation C enjoys is not a consequence of C compilers being the fastest way to execute any given algorithm. Rather, I think the speed reputation is primarily attributable to our collective inability, so far, to implement complicated language tools and make them generally available to everyone for free. I have a copy of gcc for my NEC MobilePro, but a copy of Allegro CL, I do not have.

Contrary to what marketers would have us believe, the Java architecture actually snubs the lessons of state-of-the-art notC runtime environments that preceded it. Not only is the JIT neither new nor impressive, but Java's bytecode architecture precludes the use of well-known optimization techniques for designing fast runtime environments.

The Lore Thanks to Sun's benevolence, Java runtime environments are available for free on almost all architectures.
The Reality NetBSD 1.5.2 runs on 21 architectures. 2 of them have JDKs. Two. Sun's licensing is a big part of the problem. Both of the two NetBSD JDKs (i386 and ppc) use Linux binaries. Neither comes with source code. Neither permits redistribution, even of the binary version, meaning that it's only legal to fetch the JDK from the site publishing it, which can arbitrarily stop distributing it after you've already written an app that uses it, or force all new users to upgrade to a newer JDK which is incompatible with your app. Java licensing is a disaster.

I pity the poor suckers who contribute to porting Sun's JDKs, and then end up forbidden from redistributing their own work. NetBSD contributors are fed up with politically discriminatory licensing exceptions. Java-compatible environments fork like the low end of a broomhandle over licensing arguments. You are better off writing programs in Modula-3 than Java, in terms of available Unix runtime environments---cvsup seems to be working a lot more often than Freenet. Perhaps it has to do with the Modula-3 team's allowing contributors to freely redistribute their work, instead of claiming ownership and attempting to profit from the improvements that others contribute.

The Lore Java has amazing scalability. It's a viable language on everything from 8-bit <$20 smartcards to fullsize desktops costing thousands of dollars and running complicated Office Productivity Applications.
The Reality While C can run across this large spectrum of CPUs, no single Java-branded language can. A language named ``Java'' does run on everything from smartcards to desktop machines, but it is not the same language. There is, at least, smartcard ``JavaCard'', celfone ``J2ME'' Java, and original Java---all similar, but both source- and binary-incompatible with each other. Original Java requires several megabytes of RAM and even more stable storage. Celfone Java seems to be around 1MB for the JRE, and right now Japanese carriers have settled on a maximum of 30kB per application. Smartcard Java, in the case of the Dallas Semiconductor stuff of JavaRing fame, is a sealed JDK full of proprietary iButton-only classes for which the latest version is only available on MS Windows.

The difference between variants of Java is more rigid than between C development environments, because a Java developer is typically compelled to use a JDK that matches the JRE, implying a specific set of development tools and libraries. A C developer must also use certain tools to match the target environment, but since the C tools are not so tightly integrated at least he or she can reasonably expect to keep a favorite text editor and 'make' tool across platforms, and with a little luck might not be compelled to use an MS Windows host like J2ME and JavaCard developers targeting the ``smaller scale'' environments often find themselves compelled to do.

While neither C nor Java's situation is ideal, it's important to realize that Java's ``scalability'' is more a matter of market penetration than actual uniformity.

The Lore Java has a novel architecture, in that it has source code and portable bytecode, a development environment and a runtime environment, and that the latter runtime environment can be embedded into a larger application. Previous languages lacked these features---thus, Java contributes an architecture which is uniquely suited to the present age of Internetworking, closed-wall open-standards Intranets, Connected Computing, Thin Clients, and so forth.
The Reality It seems to me the ``new economy'' that started with the Netscape IPO and ended in April of 2001 was mostly about squeezing productive work out of very poor programmers. The most important languages were thus the ones with the least rigor, the lowest barrier to entry, and trial-and-error debugging methods: Visual Basic, Perl, and Javascript. Compared to these languages, Java failed to meet whatever ``unique business needs'' defined this era of atrociously prolific low-quality programming.

However, I wouldn't want that to count against Java! I only want to point out that this type of thinking led to diminishing standards of quality and eventual catastrophe in the Industry community. Maybe these herdlike criteria, which promote momentum at the expense of meaningful analysis, are not good ones for choosing a project's language.

Earlier I pointed out with irritation that Java is not novel, but perhaps it's more important to realize that novelty alone isn't a virtue.

Wouldn't it be nice if we could choose a language that is good for writing programs, rather than one which is ``good for thin clients''?

I'd also like it if designers of future elaborate database programs would look at ``the latency problem''. This click-[wait]-scroll stuff really is kind of irritating, and the Java sandbox and web integration might have addressed it, but ultimately didn't. For example, if I read Usenet news under emacs and Gnus, the portable Gnus program (written in elisp and distributable, in compiled form, across all emacs architectures) will pre-fetch the next ten or twenty news articles that it thinks me likely to read. If I read news with DejaNews a.k.a. Google Groups with an ordinary web browser, each news article is fetched only after I select its HTML link because there is no sane way to describe the prefetch logic to my web browser in HTML. Actually Google Groups is very nicely done, and its CGI programs combine multiple articles on a single web page, but the prefetch of Gnus is smarter and more transparent. If elisp had become ``the language of the web,'' Google could simply download a preconfigured ``Gnus'' to my computer, and give me the cleverest prefetch and lowest latency available.
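If Java, rather than elisp, were to deliver that kind of client-side cleverness, the prefetcher might look something like the minimal sketch below. The URL scheme, article numbering, and class name are all invented for illustration; the point is only that the ``next article'' logic runs on the client, where it can hide the latency behind the user's reading time.

    import java.io.ByteArrayOutputStream;
    import java.io.InputStream;
    import java.net.URL;
    import java.util.Hashtable;

    // Hypothetical Gnus-style prefetcher: while the user reads article N,
    // quietly fetch articles N+1 through N+10 so "next article" is instant.
    public class ArticlePrefetcher implements Runnable {
        private final String baseUrl;      // e.g. "http://news.example.com/article/"
        private final int firstArticle;
        private final Hashtable cache = new Hashtable();   // Integer -> byte[]

        public ArticlePrefetcher(String baseUrl, int firstArticle) {
            this.baseUrl = baseUrl;
            this.firstArticle = firstArticle;
        }

        public void run() {
            for (int n = firstArticle; n < firstArticle + 10; n++) {
                try {
                    InputStream in = new URL(baseUrl + n).openStream();
                    ByteArrayOutputStream body = new ByteArrayOutputStream();
                    byte[] buf = new byte[8192];
                    int len;
                    while ((len = in.read(buf)) > 0)
                        body.write(buf, 0, len);
                    in.close();
                    cache.put(new Integer(n), body.toByteArray());
                } catch (Exception e) {
                    // network trouble: skip this article and keep going
                }
            }
        }

        public byte[] getCached(int n) {
            return (byte[]) cache.get(new Integer(n));
        }
    }

    // Usage: the reader opens article 42, so fetch 43 onward in the background.
    // new Thread(new ArticlePrefetcher("http://news.example.com/article/", 43)).start();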

I think problems like this generalize well across many database applications---for example, anything that returns ``20 results per page.'' The particular angle on the Java hype that I'm refuting seems unusually greedy compared to solving the click-[wait]-scroll problem---behind the veil of all this fancy language is a desire to sign on gigantic numbers of sheeplike users instantly whether they like it or not, by force-loading Java environments with advertiser-friendly features onto their computers along with this web-browser program that they actually want, sort of like a trojan horse, and then embedding invisible Java programs into web pages that they will accidentally visit. Anyway, that accurately describes how I've run most actual Java programs in this lifetime. The browser says: Loading Java. ``Criminy.'' A window pops up: The applet globoNetDynamics.CommerceBasket2000PRO.userTrack.dbFrobometer is requesting the following privileges: complete control of your audiovisual experience and access to everything on your disk. Would you like to (a) Submit or (b) crash the browser? ``What happened to the so-called Sand Box?'' Thank you for submitting to the JavaControls dialog box. ``Why isn't anything happening? Where's the Java? I thought this was supposed to be cool, like running programs is cool. Oh, there it is, that rotating logo in the lower right corner. Watery crap! Thanks a lot, Java. Woo hoo. Raise the fucking roof.''

Is that the real advantage of all this language-of-the-web stuff? There ought to be some small place left for the idea that high-quality programs will attract users. Why bother when you can DRIVE hits to your site BY THE THOUSANDS with OPT-IN JAVA SPADVERTIZING!

The Lore The clever design of the Java language permits the existence of ``The Java Operating System.'' In exchange for the limitation that it can only run Java programs, the JavaOS achieves a superior return of functionality for complexity compared to traditional operating systems.
The Reality The Java Operating System is actually a neat idea. It disappoints me that almost all relevant operating systems these days are imitations of Unix, ``The C operating system''. Ideas like kernels that run in ``supervisor mode'', virtual memory designed to protect one program from another and the costly TLB-flushes that this necessitates, and sbrk(...)-based memory allocation, are all costly consequences of supporting C programs. Other languages have other costly requirements, and I'm far from convinced that ``The C operating system'' is the best tradeoff. It is good in one sense: it can support practically any language. Even if not at imaginably-optimal efficiency, at least it can do so without catastrophic crashes. And while sbrk(...) and Unix VM may not be the best way of allocating memory for notC languages with garbage collection, still it is at least a reasonably inexpensive (in hardware and software) way of giving any language the memory it requires.

But, what about when we're through inventing new languages and want to pick one of them in which to write useful programs and sell to consumers? Or what about when there is no hard disk for VM backing-store? Is ``The C Operating System'' still the most convenient solution? I doubt it.

It gets worse. Modern CPUs are ``The C microprocessor'', because they include MMUs designed to accommodate ``The C Operating System''-'s memory allocation style, supervisor bits to accommodate its notion of a ``kernel'', branch prediction to accommodate its function-calling conventions, and so on. C has developed a harmful stranglehold on our unnaturally limited concept of what a ``computer'' is. If we are ever to design the Robot Masters that will ultimately wrest the planet from human control, we must first wrest the computing community from C's control.

There are a few problems with the JavaOS as a conceptual result of this rebellion against ``The C Operating System''. First, Java is too C-like. Java programs are basically C++ programs with Purify-style code instrumentation. In one sense, it's amazing that such a trivial change as Purify-ing everything can allow us to re-evaluate all sorts of fundamental OS decisions, throw huge chunks of code out the window, and come up with a final architecture that isn't totally absurd. In another sense, the end result doesn't perform too well, and we can do better. We have already done better.
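A small sketch of what that instrumentation buys, with an invented class name: the canonical off-by-one loop. In C++ the same mistake silently scribbles past the end of the array (Purify, if you run it, flags the stray write at runtime); in Java the mandatory bounds check turns it into an exception, at the cost of paying for that check on every access.

    public class OffByOne {
        public static void main(String[] args) {
            int[] buf = new int[10];
            for (int i = 0; i <= 10; i++) {   // <= should be <
                buf[i] = i;   // Java throws ArrayIndexOutOfBoundsException at i == 10;
                              // the equivalent C++ writes past the array and corrupts memory
            }
        }
    }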

We've already seen several variants of The Lisp Machine, where the entire operating system is written in Lisp. Symbolics and TI both offered useful Lisp machines, and most of them remain in the hands of happy owners. Apple's NewtonOS is The Forth Operating System, and attains levels of code-compactness that Java can't touch. The AS/400 is also a sort of ``Database Machine'', where the processor is a tweaked PowerPC with hardware support for whatever weird languages IBM wants people to use.

A student, in hopes of understanding the Lambda-nature, came to Greenblatt. As they spoke a Multics system hacker walked by. ``Is it true,'' asked the student, ``that PL-1 has many of the same data types as Lisp?'' Almost before the student had finished his question, Greenblatt shouted, ``FOO!'' and hit the student with a stick.

Also, there are operating systems like vxworks that give up on memory protection but remain ``The C Operating System''---for example, on vxworks if a program dies, you can restart it, but any memory it allocated ``leaks'': remains allocated until a reboot. In a celfone where the Phone Application starts at poweron and never dies, this makes a lot of sense, and it frees the OS from some of C's usual performance-sucking needs. For one thing, the kernel and application can malloc(...) out of the same giant heap, instead of doing ridiculous things like growing ``system'' memory from one end of address space and ``user'' memory from the other end, like other primitive operating systems (pre-Multifinder MacOS, PalmOS) do.

Anyway, I have to admit the JavaOS is my favorite of all Java's many flimsy marketing claims. It's important to remember, whenever someone proposes a ``single-language'' operating system, that we are already [ab]using a single-language OS: Unix.

In any case, we've been around this [language]OS block before. And, like so many other aspects of Java, the JavaOS is a half-baked attempt compared to the superior notC operating systems like Genera which preceded it.


Miles Nordin <carton@Ivy.NET>
Last update (UTC timezone): $Id: java_languageoftomorrow.html,v 1.4 2002/09/12 23:23:41 carton Exp $