Perhaps it should be a different phase entirely, so instead of a REPL, we have an RDEPL (read-desugar-eval-print loop). Then macros would avoid not just E, but DE. And perhaps we could have another sort of macro that does desugar, merely skipping the E, but that might make things more complicated. And if we just have the no-DE kind, we could layer the no-E kind on top of it. This allows things like what almkglor mentions with w/html, but is still consistent and Arc-in-Arc.
Currently macros are part of the desugar phase - macroexpansion and intrasymbol syntax expansion are done in the same step. This allows a macro to return symbol syntax, which might be expanded into a macro call too.
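For example (a made-up sketch; grab-port and cfg are just illustrative names): the macro below returns the bare symbol cfg!port, which the same desugar pass then expands into (cfg 'port).

(mac grab-port () 'cfg!port) ; returns a symbol containing ssyntax

(= cfg (obj port 8080))
(grab-port) ; expands to cfg!port, then to (cfg 'port) => 8080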
An interesting article is "The Origins of the Turing Thesis Myth", which explains why it's a myth that a Turing machine can do anything a computer can.
The quick summary is that Turing machines can compute any algorithmic function. However, real computers do more than compute functions, and today's applications cannot be modeled by Turing machines.
And this is what comes of trying to be flippant around people who know more than I do :)
That's a very interesting paper. I hadn't really thought about the theoretical ramifications of everything we let computers do these days... On the "I agree" front, there really isn't much to say.
Though I'm not convinced that it's strictly untrue that "all computable problems are function-based." Take the robotic car: you could model it as a sequence of functions being called with new inputs, where each call represents the motion of the car over one "tick," which could be as short or as long as you like. And if there's a hole in that argument (which there may well be), then if worst comes to worst, we can use quantum mechanics to model the section of the universe containing the computer running the program and obtain a mathematical description that way :)
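In Arc terms, the tick model is just folding a pure step function over the input stream (step and run-ticks are hypothetical names, and the state update is a stand-in):

(def step (state input)
  (cons input state)) ; stand-in: fold each tick's sensor input into the state

(def run-ticks (state inputs)
  (if (no inputs)
      state
      (run-ticks (step state (car inputs)) (cdr inputs))))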
Eh, but the problem is the assumption that the world itself cannot be modelled as part of the tape that the Turing Machine eats.
From a quick glance through the paper and the LtU comments it seems that its point is that interactive I/O cannot be modelled by the Turing Machine.
But as I've learned in Haskell, I/O can itself be treated mathematically, specifically with monads: the world-before-i/o-event is the value a function receives, and the world-after-i/o-event is the value it returns. And I'm pretty sure that monads themselves can be modeled by a TM: they can be represented by a part of the tape that the TM gets to and modifies, just like any function-to-TM mapping.
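A loose Arc rendering of that world-passing idea (print-w is a made-up name, and the world is just an ordinary value being threaded through):

(def print-w (world str)
  (prn str)                                    ; the visible effect
  (list nil (cons (list 'printed str) world))) ; result plus world-after

Each action consumes the world-before and returns the world-after, so the whole interaction is still plain function application.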
The problem is that the paper uses words like 'model' and 'function-based' rather vaguely. You can model I/O with a TM, but you can't actually do it, which is what they're getting at.
Welcome! It's good to see more people with an interest in Arc.
As for your questions:
1. The best way is to do (thread (asv)), which will launch the server in a separate thread. Then, to modify it, just (load "blog.arc") and refresh the pages (see the snippet after these answers).
2. No idea :/
3. As far as I know, it doesn't---but there's been remarkably little spam here. I'm not 100% sure of this, though; I could well be wrong.
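For answer 1, the whole edit cycle at the REPL looks like this (with blog.arc standing in for whatever file you're working on):

arc> (thread (asv))    ; server runs in its own thread, so the REPL stays free
arc> (load "blog.arc") ; after editing, reload and refresh the browser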
Also, I recommend using Anarki instead of arc2.tar. It's much more actively developed and has a lot of really nice features. There's more information at http://arclanguage.org/item?id=4951 .
There's already one existing syntax for !, and I think it's unlikely that pg will add another. We're taking advantage of something which is, essentially, defined to be meaningless in arc2.tar but which we can make work on Anarki, and we're using it. And of course, pg can do anything he wants, but that could involve both new ssyntax and new functions. We have been warned that what we're doing is "unsafe," after all. If we do avoid alphanumerics and choose, say, djoin, what happens if pg chooses njoin for arc3.tar? Or uses the d prefix for something else? Using existing ssyntax characters is probably pretty safe.
As you might have guessed, by the way, I support the join! standard :)
> If we do avoid alphanumerics and choose, say, djoin, what happens if pg chooses njoin for arc3.tar? Or uses the d prefix for something else?
We say "I suggest running a poll on this, pg" ^^
But I still like module syntax T.T However, the appended-! convention is winning by a really large margin (waaa, hopeless!). So we need to modify the builtin ssyntax/ssexpand to ignore trailing "!" and/or standardize on my ssyntaxes.arc.
OK, another point. There is a virtue in having a standard that is backwards compatible with the base Arc release.
For example, I recently borrowed classifier.arc from anarki and ran it with the base release of arc with no troubles. If people start using the ! standard, then to use anarki libs one would have to run all of anarki. Which may be OK, it just has to be accepted that this is the case.
(As a matter of style I like ! too)
(somewhat related tangent:
[One of the best of these is a Gosperism. Once, when we were at a Chinese restaurant, Bill Gosper wanted to know whether someone would like to share with him a two-person-sized bowl of soup. His inquiry was: "Split-p soup?" -- GLS])
We're already not backwards compatible. Things like ssyntax.arc, for an extreme example, but also our additions of functions to places like arc.arc, e.g. butlast. That doesn't exist in arc2.tar, but we have it anyway. Anything that uses such functions requires Anarki. This is more extreme (you wouldn't be able to copy just those functions over), but not unique.
Well, it has to do with the ssyntaxes.arc precedence rules and how they work: basically, split according to the current ssyntax, then go on to the next ssyntax. Since #\. is listed before #\!, a symbol like foo!.x is first split by #\. into (foo! x), so it works properly.
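For instance, with ssyntaxes.arc loaded (inc! here is just a made-up name ending in !):

(def inc! (x) (+ x 1))
inc!.2 ; split on #\. first => (inc! 2) => 3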
It won't work with a type whose name ends in ! if you also use the ? ssyntax:
(def my-type! (x)
  (annotate 'my-type! x))
(my-type!? my-type!.1) ; my-type!? splits on #\! into (my-type '?)
That's what I would have thought, but it appeared to work. Though it may only have worked because of your second observation. And given that, I will repeat my desire for the destructive! custom. I like it because it doesn't interfere with any name (e.g. how alist could be association list or the "is the object a list?" predicate [though that's a bad example, you get the idea]), it has seen use in multiple languages, and it pretty clearly says what it means (assuming you want to encourage functional programming, as I think we do).
I suggest running a poll on this - of course, pg probably won't care either way, but we can integrate his code into Anarki next time he bothers to release an update, ne?
I think this convention is good; I'm just somewhat concerned with the fact that foo!bar is plenty overloaded.
edit: IIRC this has been suggested a few times before already, so I think there'll be good support for this - but it means we will then have to formally standardize the ssyntaxes too.
The problem is that when using join with zap you cons up a lot of memory, while a destructive join wouldn't cons at all; it would just traverse the list setting the right cdrs. Probably the expression "recycling operations" better describes what I wanted to say, because operations such as nconc reuse the memory of their arguments.
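A destructive join might look roughly like this sketch (using the proposed join! name; two arguments only, unlike the variadic join):

(def join! (xs ys)
  (if (no xs)
      ys
      (let tail xs
        (while (cdr tail)   ; walk to the last cons cell of xs
          (= tail (cdr tail)))
        (scdr tail ys)      ; splice ys in place, no new conses
        xs)))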