Not all computers will ever be as powerful as personal computers. There's a huge market for computers with limited memory and processing power, because they're embedded in other devices and need to be cheap. Where do you suppose all those 8-bit microcontrollers, with processing power roughly equivalent to the C-64 I first learned to program on twenty-odd years ago, end up? In your cellphone, your watch, your air conditioner, your washing machine, your car. Even if memory prices fall to the point where everyone can afford a personal computer with 10 petabytes of storage (which, frankly, I don't think will ever happen, given that the entire Internet seems to be roughly that size), garbage collection will still be very useful, if only in the embedded systems that are pretty much everywhere, including in subsystems of personal computers.
I'll add one more thing: static binding. NewLisp is the only functional language/Lisp dialect I've seen developed after the 1970s (besides Emacs Lisp) that still does dynamic binding. I decided that this would be a lot of trouble for programming in the large, which is why I decided not to pursue it.
This NewLisp fragment (which is also valid Scheme) illustrates this:
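  (let ((x 1))
    (let ((f (lambda (y) (+ x y))))
      (let ((g (lambda (f y) (let ((x 3)) (f y)))))
        (g f 2))))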
This form evaluates to 5 in NewLisp but 3 in Scheme. The equivalent in Arc:
  (let x 1
    (let f (fn (y) (+ x y))
      (let g (fn (f y) (let x 3 (f y)))
        (g f 2))))
as expected evaluates to 3, so Arc also does static binding. Dynamic binding makes safe use of free variables in different contexts that much harder, and makes referential transparency all but impossible. I cannot for the life of me, especially after reading Steele and Sussman's "The Art of the Interpreter", imagine why the designers of NewLisp thought dynamic binding would be a good idea for their language. Static binding used to have a reputation for carrying a performance hit, which is why RMS didn't use it for Emacs Lisp, but research since then has shown that it can be implemented without sacrificing performance.
Continuations are more general than lightweight processes. They can also be used to implement exceptions, coroutines, Ruby-style generators, and many other interesting control abstractions.
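For instance, here is a minimal sketch of a Ruby-style generator built on call/cc in Scheme (make-generator and next are my names for illustration, not anything from Arc):

  (define (make-generator lst)
    (define return #f)      ; continuation back into the caller
    (define (resume)        ; restarts the traversal where it left off
      (for-each
       (lambda (x)
         (call/cc
          (lambda (k)
            (set! resume (lambda () (k #f)))  ; remember where to pick up
            (return x))))                     ; hand x back to the caller
       lst)
      (return 'done))
    (lambda ()
      (call/cc
       (lambda (k)
         (set! return k)
         (resume)))))

  (define next (make-generator '(1 2 3)))
  (next)  ; => 1
  (next)  ; => 2
  (next)  ; => 3
  (next)  ; => done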
I'm just curious why we'd be explicitly re-implementing lightweight processes (lwp) over and over for each unique use rather than having a more formal implementation.
It's great that the arc application server does this for us, but maybe it could be part of the language? It certainly gives a lot of flexibility, and it's a no-brainer for making programs shorter, since any sort of server would otherwise be rewriting it (and probably with many uniquely wrong choices).
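To make that concrete, here is a minimal sketch of cooperative lightweight processes on top of call/cc in Scheme; spawn, yield, and dispatch are hypothetical names, not the Arc server's actual machinery:

  (define process-queue '())

  (define (schedule thunk)
    (set! process-queue (append process-queue (list thunk))))

  (define (dispatch)               ; run the next ready process, if any
    (if (null? process-queue)
        'all-done
        (let ((next (car process-queue)))
          (set! process-queue (cdr process-queue))
          (next))))

  (define (spawn thunk)            ; run the process, then hand control back
    (schedule (lambda () (thunk) (dispatch))))

  (define (yield)                  ; suspend here and run the next process
    (call/cc
     (lambda (k)
       (schedule (lambda () (k #f)))
       (dispatch))))

  (spawn (lambda () (display "A1 ") (yield) (display "A2 ")))
  (spawn (lambda () (display "B1 ") (yield) (display "B2 ")))
  (dispatch)  ; prints A1 B1 A2 B2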
Obviously, () is not an arc-list (as defined above). Thus, with (+ '(a b c) ()) the test (all arc-list? args) fails, so it falls through to (apply + args), which is the numerical + of Scheme, hence the error message.
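In other words, something along these lines is going on (a simplified sketch of the dispatch, not the actual definitions in ac.scm):

  (define (all pred lst)
    (or (null? lst)
        (and (pred (car lst)) (all pred (cdr lst)))))

  (define (arc-list? x)
    (pair? x))  ; per the above, Scheme's () does not qualify

  (define (arc-+ . args)
    (cond ((all string? args) (apply string-append args))
          ((all arc-list? args) (apply append args))
          (else (apply + args))))  ; Scheme's numerical +

  (arc-+ '(a b c) '(d))  ; => (a b c d)
  (arc-+ '(a b c) '())   ; arc-list? fails on (), so Scheme's + errors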
I considered it - there was a point where I thought "Wouldn't it be cool if I implemented Cheney-on-the-MTA in JavaScript?" JavaScript doesn't have setjmp/longjmp, but it can be faked with exceptions. But I wasn't sure about the garbage-collection aspect, since you don't have the same fine-grained control over memory that you do in C, and I was afraid that just holding onto the continuation closure would accidentally capture the whole rest of the stack (because of arguments.caller), trading a stack overflow for a massive memory leak. And since I didn't want to spend too much time on the project, I decided to punt on the whole thing.
I think the setTimeout trampoline is better anyway - in addition to cleaning everything up, it also gives the browser's event loop a chance to run, so you don't risk locking up the browser.
Arc is, for the moment, written on top of MzScheme. That presents a bit of a problem, because writing an extension for Arc entails first writing an extension for MzScheme and then writing some glue to allow it to be used from within Arc. This is a bit of a mess. It would be nice if C extensions could be created for Arc directly, but that would probably entail developing a standalone Arc built on C somehow (either directly or via a technique similar to how Scheme48 was built). Ruby took this approach, and writing a C extension for Ruby is at present so easy, almost as easy as writing Ruby code itself, that it puts most other scripting languages' extension mechanisms to shame (cough... cough... Perl).
By the way, being able to call the best libraries written in other languages also means that you wind up thinking in those languages rather than in Arc. My experience comes from Ruby: in many cases a wrapper written with Ruby's idioms in mind, around an underlying library written in C, trumps a wrapper generated by an automated tool such as SWIG any day of the week. The same would probably be true of Arc, once this comes to pass.
I notice the Arc source contains a few references to Scheme 48, specifically to things that don't work in it. If those got fixed, you'd have access to C libraries, because IIRC Scheme 48 was built to interface well with C.
Going by the comments, the only thing that seems to affect is the conversion between characters and integers. If that's the only actual problem, there are only about ten places in the Arc code that need a replacement fix.
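For example, those ten places could be funneled through a pair of shims, so that only two definitions would change per Scheme implementation (hypothetical names; the real fix depends on exactly what Scheme 48 trips over):

  ; hypothetical portability shims for the character/integer conversions;
  ; char->integer and integer->char are standard R5RS
  (define (ar-char->int c) (char->integer c))
  (define (ar-int->char n) (integer->char n))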