Sorry if I completely misunderstand things, but shouldn't this work? Let rebinds the name pat to the table, shadowing the old macro binding, and (pat 1) then refers to the element of the table indexed by 1 (which in this example doesn't exist, but I don't think that's the point of the example).
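That is, I'd expect something like this to work (a sketch, using an empty table):

(let pat (table)  ; rebind 'pat, shadowing the macro
  (pat 1))        ; => nil, since the table has no entry for 1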
If there's a bug in ac, let's fix that right now, rather than hacking around it and making the rest of arc that much less elegant and simple. (Of course, you might say you want people to be able to use pattern-matching and be able to have variables named pat. That's a good criticism, but it's not how I read your post.)
preventing the cars of applications from being automatically macroexpanded. I don't understand the other problem you brought up. Why would (fn (macroname ...) ...) not work?
Actually, that was the point where I was planning to do the hack: (let ...) and friends reduce down to (fn ...) forms, so logically the place to do the replacement of variable names should be in (fn ...) forms. ac-call is involved in function calling, which is closely related.
However, I've since changed my mind, because the problem then becomes:
(withs (do 1 re 2 mi 3)
  (+ do re mi))
Bonus points if you figure out why your solution will also fail on the above ^^
Because we know that macros are really just tagged procedures, we can apply them ourselves and let lexical scoping guarantee that they aren't shadowed!
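For instance (a sketch, assuming the usual setup where macros are procedures tagged 'mac and rep unwraps the tag):

; capture do's underlying function while 'do still names the macro
(let expand-do (rep do)
  (let do 1  ; 'do is now just a number...
    ; ...but we can still compute do's expansion by hand
    (eval (apply expand-do '((prn "first") (prn "second"))))))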
Granted, it's ugly, and it only solves part of the problem, but I think it's a reasonable place to start hacking from.
> Granted, it's ugly, and it only solves part of the problem, but I think it's a reasonable place to start hacking from.
Oh, no. Because it means you'll need to (apply (rep macro) body) whenever you need to use a macro within a different macro. That's just too much bug-prone work and is completely unreasonable.
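To make that concrete (a sketch; foo is a made-up macro, do is the standard one):

; what you'd like to write:
(mac foo (x)
  `(do (prn "entering") ,x))

; what you'd have to write instead, expanding 'do by hand:
(mac foo (x)
  (apply (rep do) `((prn "entering") ,x)))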
Also, on further reflection, I think perhaps the original hack is better - even given the weird behavior of (withs (do 1 re 2 mi 3) ...). If we're really making a language for quick hacking and prototyping, and giving the programmer all the tools they need, then maybe letting them redefine do is exactly the right thing to do. In your example it doesn't make sense, sure, but what if it was redefined as a function? Or a new macro that did something interesting with its body (inserted debug statements)? Maybe we should deliberately let the programmer redefine everything they want (as pg says) - and make sure not to write anything in a safer, more idiot-proof style. That's not the point of arc.
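For instance, here's a sketch of the debug-statement idea, redefining do (using the standard mac and mappend) so that it prints each form before evaluating it:

(mac do body
  `((fn () ,@(mappend (fn (e) `((prn ',e) ,e)) body))))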
(Namespace collision is another issue. But we need a way to deal with it that only affects things when you want it to, and never when you don't.)
Oh no, heck no squared. Because if you do, that means that every macro has to publish any symbols-in-functional-position it actually creates. And every programmer has to check every macro he or she uses to make sure that any local variables he or she has do not conflict with the symbols-in-functional-position published by the macros. Hades breaking loose and all that. Funny action at a distance.
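A sketch of the failure mode (w/log and log-line are made-up names):

(mac w/log body
  `(do (log-line "start") ,@body))  ; expansion calls 'log-line
(let log-line "just a string"       ; innocent local variable...
  (w/log (prn "hi")))               ; ...now the expansion tries to call the string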
As for redefining a macro: then let's implement 'macrolet; that way we don't have capture.
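Hypothetically, borrowing the name and semantics from Common Lisp's macrolet (the syntax here is made up):

(macrolet (do body `(list ,@body))  ; a local 'do, visible only in the body
  (do 1 2 3))                       ; uses the local macro; no global capture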
I disagree, actually, but not because it wouldn't work. It seems to me that the simplest way to think about data storage is to give the programmer two options: store data in noncontiguous memory with pointers, with O(n) access but O(1) insertion/reordering/whatever, or in contiguous memory in an array, with O(1) access but potentially O(n) insertion. It's an appropriate arc concept because it gets at the core of what is going on, and lets the programmer do whatever they want, while keeping the language simple and elegant (only two options!). I believe everything else can be built out of that, if you really want to do it (not that I think hash tables should go - they're far too useful for that - but arrays and pointers give you what you need to build whatever you want).
But surely what's important is the freedom and power to express ideas, not the power to dictate their implementation to the compiler/runtime?
If the two systems behave exactly the same (mapping keys to values), and the only penalty is execution time (which, depending on how the automatic switching works, shouldn't be much), why should they be separate concepts in the language itself?
I see your point, and you may be right. But I could respond to that: if all we're abstracting over is key/value mappings, then why do we have lists?
I think we're looking at this from different points of view. I'm thinking of a basis for the space of all data layouts in memory (so to speak), and you're thinking of having a generic interface to mappings, if I understand you right.
I have a thought that might shed some light on this, and I'd like to hear what you and everyone else think about it. It seems like we're talking about two types of maps - maps whose keys are arbitrary values (hash tables), and maps whose keys are nonnegative integers (arrays, but also some uses of lists).
The difference between them is the way you think about them. Hash tables are pure associative maps, which you use in the application server to store user data for cookies, but which you could also use as record types, because they're really the same thing. Maps indexed by integers, on the other hand, hold data in a sequence, like the temperatures in Durham, NC for the past month; they tend to be variable-sized and ordered, because the integers have a natural order, and because the most natural set of integers is all of them.
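Concretely (a sketch; the names are made up):

(= user (table))         ; value-keyed: a record/associative map
(= (user 'name) "bob")
(= temps (table))        ; integer-keyed: a sequence in disguise
(= (temps 0) 41)
(= (temps 1) 43)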
Are we really looking for ways to represent these two concepts?
But do we think about arrays differently because there's a legitimate reason to, or because that's just the way it has worked in other languages?
I mean, the fact that a simple:
(def array (elements)
  (let a (table) (on e elements (= (a index) e)) a))
would bridge that conceptual gap probably means that, although it will feel weird using a table as an array, including arrays solely for familiarity is probably the kind of backwards compatibility that Arc is looking to avoid.
You're right, we are definitely approaching this from different directions. I do think that a generic interface to mappings is the way to go, so long as the abstraction doesn't leak too much (i.e. reasonably small performance loss).
Hmm. The obvious implementation to me seems to be this:
(def slice (seq start (o end (len seq)))
  (let n (len seq)  ; wrap out-of-range indices, but keep the default end of n intact
    (subseq seq (mod start n) (if (is end n) n (mod end n)))))
However, this would give slice meaning for indices outside the length of the sequence, by modding them back into the range. Is this ever actually useful? (Perhaps in some sort of loop that doesn't know the range of the sequence? Maybe you want something that looks like an infinitely long sequence but is actually a cycle of some length?)
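For instance, with the definition above:

(slice '(a b c d e) 1 3)  ; => (b c)
(slice '(a b c d e) 6 8)  ; => (b c), 6 and 8 wrap mod 5
(slice '(a b c d e) 2)    ; => (c d e), default end is (len seq)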