I agree with some of your first point, but not so much with the second. I think pg released Arc as a first cut, with one application baked in to act as a marker for a stable version. Obviously I can only guess, but I also think he expected a larger community of interest, one that could take it to the next level. That never happened, and with only a handful of keeners ten years later, the things you mention in your first point are non-existent or half-baked experiments. It's possible Arc may get there, and I hope it does, but at this rate it may take a hundred years. :)
More interestingly, though, shawn is bringing some interest back (at least for me) and making substantial changes that could breathe new life into Arc. I don't agree with the empty-list/nil change, but the table changes and reader changes are good. I do think the more seamless the Racket interop is and the more Racket can be leveraged, the better. Clojure has good interop with Java and that's what made Clojure explosive. If we can do that with Arc/Racket then we are better off for it.
"Clojure has good interop with java and that's what made Clojure explosive. If we can do that with Arc/Racket then we are better off for it."
Do we ever expect Anarki values to be somehow better than Racket values are? If so, then they shouldn't be the same values. (The occasional "interop headaches" are a symptom of Anarki values being more interchangeable with Racket values than they should be, giving people false hope that they'll be interchangeable all the time.)
I think this is why Arc originally tossed out Racket's macro system, its structure type system, and its module system. Arc macros, values, and libraries could potentially be better than Racket's, somehow, someday. If they didn't already have a better module system in mind, then maybe they were just optimistic that experimentation would get them there.
Maybe that's a failed experiment, especially in Anarki where we've had years to form consensus on better systems than Racket's, and aligning the language with Racket is for the best.
But I have a related but different experience with Cene; I have more concrete reasons to break interop there.
I'm building Cene largely because no other language has the kind of extensibility I want, even the languages I'm implementing it in. So it's not a surprise that Cene's modules aren't going to be able to interoperate with Racket's modules (much less JavaScript's modules) as peers. And since the design of user-defined macros and user-defined types ties into the design of the modules they're defined in, Cene can't really reuse Racket's macro system or first-class values either.
My comment is only a remark on "what holds back widespread Arc adoption".
If your goal is for Arc to have widespread adoption, then being able to leverage Racket in a meaningful way will help get you there.
Currently the ability to drop into Racket is not getting people to use Arc; it still seems people would rather just use Racket. http://arclanguage.org/item?id=20781
IMO, it would be better if Arc had implicit methods that provide access to Racket capabilities. In Clojure, having libraries, namespaces, and a seamless interface to Java translated into a plethora of libraries for Clojurians to utilize. Can we not do the same? Well, if the goal is "widespread adoption", then we need to.
I agree; an empty list is a value. And when you consider interop with other langs, they will infer it to be some object too, where predicates will see it as a value, not the lack of one.
Order definitely matters, and I'm envisioning functionality like repeating tasks that, once moved to the "done" stack/marked done, wrap around to `pipe.0` (imagine something that must be done once a month or once a week).
I suppose I could tag items as monthly or weekly, in which case they'd get appended to pipe!month or pipe!week as soon as they're done.
Another example is the `(createItem)` function, which should take an item and an optional "stack" parameter. If no stack parameter is given, items should be appended to the zeroth stack, `pipe.0`, identified by index because it will be named by different keys in different pipelines.
Another example is a `(push)` function which pushes an item from its current stage to the "next" stage. With order this is simple.
EDIT: Order even matters within a stack. In the current kanban system I have a bunch of things I need to do today, and the top priorities are actually at the top. I wouldn't want them getting all disheveled.
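To sketch the `push` idea in code, assuming a pipeline is just an association list of (stage-name . items) pairs so stage order is list order (`push-item` and everything inside it is hypothetical, not part of the app yet):

(def push-item (pipe item)
  ; find the stage currently holding the item
  (let i (pos [mem item (cdr _)] pipe)
    (when (and i (< i (- len.pipe 1)))
      (pull [is _ item] (cdr (pipe i)))  ; drop from the current stage
      (push item (cdr (pipe (+ i 1)))))  ; add to the next stage
    pipe))

`push` here conses onto the front of the next stage; appending to the end instead would only be a small change.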
"I can't believe this never came up working with json"
I think order-preserving tables are pretty useful for JSON -- particularly to help ensure implementation details aren't leaking even in a distributed system where JSON is being passed through multiple parsers and serializers -- but it's worth noting that even JSON objects are specified to be unordered and are commonly treated as such. (RFC 8259: "An object is an unordered collection ...")
Even JavaScript's standard JSON.parse(...) doesn't preserve order the way you might like it to. Here's what I get in Firefox right now:
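Something along these lines (reconstructing the output from memory):

JSON.parse('{"b": 1, "a": 2, "1": 3}')
// Object { 1: 3, b: 1, a: 2 }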
This is because JavaScript objects historically didn't preserve any particular iteration order, and although they have more consistent cross-browser behavior these days, they seem to have settled on some rules like iterating through numeric properties like "1" before other properties like "a". (From what I've found, I see people saying this behavior is a standard as of ES2015, but I don't see it in the ES2018 spec.)
JavaScript Maps are a different story; they're a newer addition to the language, and they're strictly specified to preserve insertion order. Other languages use dictionaries which preserve order, including Groovy (where [a: 1, b: 2] is a LinkedHashMap literal), Ruby, and... you've already found details on Python.
Even in Groovy, JavaScript (Maps), and Ruby, the only way to observe the order of their collections is to iterate from the beginning, and the only way to influence it is to insert entries at the end. This means any substantial interactions with the order of these collections will be just about as inefficient and inconvenient as if we had converted the whole collection to an association list, processed it that way, and converted it back. Essentially, it seems the order is preserved only to prevent programs from having language-implementation-dependent bugs, and maybe to help with recognizing values in debug output, not because it's useful for programming.
Every one of these languages makes it easy to communicate the intent of an ordered collection by using lists or arrays of some sort. I'm pretty sure I've seen this lead to association lists even in JSON, particularly for things like database query results:
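Something like this (a made-up sketch of the shape), where each row is an array of [key, value] pairs so the column order survives every parser on the way through:

[[["id", 3], ["name", "Ada"]],
 [["id", 7], ["name", "Brendan"]]]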
It could be me, but I have trouble imagining any situation where I would need this syntax.
- If I'm not doing both kinds of lookup several times in the same part of the code, the lookup styles don't both need to be concise at the same time. Different parts of the code can convert to different representations that are concise to use in that context.
- If it's the root layer of a data structure, then I can simply use two different local variables to refer to that data, one of which always treats it as a list and one of which always treats it as a table.
- Unless I'm processing some specific indices, I'll usually want to iterate through the whole data structure, so I usually wouldn't be using any indexing operations in the first place.
- If I know what specific indexes I want at development time, then I would usually use an unordered table (for easy indexing) or a fixed-length list (for easy destructuring).
- If I'm performing any two lookups for the same reason, they can usually use the same expression in the code.
So I would have to be writing some code where I'm performing several different lookups of specific ordered indexes and specific chosen keyed indexes, all of which are for distinct reasons I know at development time but few of which are under indexes I know at development time. Moreover, each lookup must be two or more layers deep into a nested data structure.
This seems to me like a very specific situation. I suppose maybe it might come up more often in data-driven applications that have many scripted extensions which are specialized to certain configurations of data, but it seems to me even those would rarely care about specific ordered indexes.
Even what you've described about your application makes it sound like you only need to look things up by ordered indexes when the user wants to move an item from one stack to the next (or to the first). If that one part of your code is verbose, I recommend not worrying about it. If the rest of your code is verbose, how about converting your alists to tables whenever you enter that code?
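For instance, something like this sketch, where `handle-update` and the keys are made up, and the alist is the usual Arc list of two-element (key val) lists:

(def handle-update (al)
  (let tb (listtab al)
    (prn "name: " tb!name)  ; concise keyed lookups on the table view
    (each (k v) al          ; anything order-sensitive on the alist
      (prn k " -> " v))))

The alist (which preserves your order) stays the source of truth; the table is just a keyed view for the duration of the call.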
I'm sorry, you have a very clear idea in mind of the data structure you want, and it's very reasonable to want to know how to build it in a way that gives you convenient syntax for it.
I think I'm just trying to encourage you to get to know other techniques because I think that's the easiest way to start. Building a custom data structure in Arc is not something many people go to the trouble to do, so it's a bit hard to come up with a good example.
I don't think you need any new ssyntaxes for this new data structure.
Say you have an ordered dictionary value `db` and you want convenient syntaxes to look up items by index (where 0 should give you the first item) and by key (where 0 should give you the item with key 0). One thing you might be able to do is this:
db!by-key.0
db!by-pos.0
To get these lookups working, you only need to use Anarki's `defcall`. Here's a stub that might get you started:
(defcall ordered-dict (self val)
  (case val
    by-key (fn (key) ...)
    by-pos (fn (pos) ...)
    (err "Expected by-key or by-pos")))
Unfortunately, this doesn't allow you to assign:
(= db!by-key.0 "val")
(= db!by-pos.0 "val")
To get these to work, you must know a little bit about how Arc implements assignment. The above two assignments essentially turn into this:
(sref db!by-key "val" 0)
(sref db!by-pos "val" 0)
This means you need to extend `sref` using Anarki's `defextend` to make these work.
But we've defined db!by-key and db!by-pos to be procedures that only know how to get the value, not how to set it. We need them to return something that knows how to do both.
So let's define another type, 'getset, which contains a getter function and a setter function.
(def make-getset (get set)
  (annotate 'getset (list get set)))

(def getset-get (gs . args)
  (apply rep.gs.0 args))

(def getset-set (gs val . args)
  (apply rep.gs.1 val args))
You should pick some representation to use (annotate 'ordered-dict ...) with (or whatever other name you like for this type) so you can implement all four of these operations: getting and setting, by key and by position.
Note that we don't need to do (defextend sref ...) for ordered-dict values here because the `sref` calls don't ever get passed the table directly. You would only need that if you wanted (= db!by-key "val") or (= db.0 "val") to do something, but those look like buggy code to me.
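None of these exact definitions exist in Anarki; here's a sketch of one way the remaining wiring could look, assuming the ordered-dict defcall above is changed to return make-getset values instead of plain functions, and assuming `defextend` takes the function name, an argument list, and a guard expression:

; Make getset values callable, so db!by-key.0 reaches the getter:
(defcall getset (self . args)
  (apply getset-get self args))

; Route (sref db!by-key "val" 0) -- i.e. (= db!by-key.0 "val") --
; through to the setter:
(defextend sref (gs val key) (isa gs 'getset)
  (getset-set gs val key))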
So far, so good, but you probably want to do more with ordered dictionaries than getting and setting.
There's at least one more utility you might like to `defextend` in Anarki for use with ordered dictionaries: `walk`. A few Anarki utilities like `each` internally call `walk` to iterate over collections, so if you extend `walk`, you can get several Anarki utilities to work with ordered dictionaries all at once.
You may also want to make various other utilities for coercing between ordered dictionaries and other Anarki values, such as tables and alists. That way you can easily use Anarki's existing table and alist utilities when they work for your situation.
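Here's a sketch of what those extensions might look like, assuming the ordered dictionary's rep is an alist of two-element (key val) lists (so it lines up with `alref`, `listtab`, and friends):

; each and several other utilities go through walk:
(defextend walk (seq f) (isa seq 'ordered-dict)
  (each pair rep.seq (f pair)))

; coercions to and from existing Anarki values:
(def ordered-dict-alist (od) (copy rep.od))
(def ordered-dict-table (od) (listtab rep.od))
(def alist-ordered-dict (al) (annotate 'ordered-dict (copy al)))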
Note that this approach doesn't make it possible to use the {...} curly brace syntax you and shawn have been talking about. Since there are potentially many kinds of tables, I'm not sure whether giving them all read syntaxes really makes sense; we might start getting into some obscure punctuation. :-p
I hope that's a little bit more helpful advice for your situation. I tried to write up something like this the other day and got carried away implementing it all myself, but then I realized it might not be what you really wanted to use here. It didn't even take an approach as convenient to use as this one. Maybe this can give you enough of a starting point to reach a point you're happy with.
I agree that order matters. It matters in pretty much every application that uses data. Even HN has data ordered by newest, rank (best) etc. And Arc has a tonne of functionality to support sorting and comparing data to let you do that very thing.
But I don't see the need to order data in your application equating to a need for tables that preserve insertion order.
I'm not going to say what you're doing is wrong, because it's not, but I am going to say it's non-standard, and I think there are other ways to manage your data that don't require ordered tables (which have downsides as data grows). It's just my opinion, but there's not much you can't do with the standard storing of table records and maintaining of indexes.
That said if your data load is always low with little to no growth it could very well be a good fit.
It's only more memory efficient than its previous incarnation.
It's an interesting implementation though. The use of sparse arrays to index the entries is compelling. Measuring this kind of stuff can be non-trivial though as you also have to account for gc (and/or compaction) at different times within your application. There are always trade-offs.
Personally I haven't compared the various implementations (Clojure vs. Racket vs. Python) closely enough to give you any real insight. I know Clojure's (or rather Java's) array-maps are costly when performing iterative lookups within large data sets, because I've benchmarked it.
The best bet is to see how an implementation works for your app with your data.
> "The microbenchmarks that we did show large improvements on large and very large dictionaries (particularly, building dictionaries of at least a couple 100s of items is now twice faster) and break-even on small ones (between 20% slower and 20% faster depending very much on the usage patterns and sizes of dictionaries)."
I do think the data load for this would always be low.
I feel like implementing tables with indices would basically be like implementing ordered dictionaries (new nomenclature I've arrived at per: http://arclanguage.org/item?id=21037 ), no?
> I now have it on my todo list to determine how much the standard library of Lumen matches the names Arc uses.
Lumen looks awesome. I was looking for the docs to determine how much of arc actually exists within lumen, but I couldn't find anything. So if you do this, please let me know.
Also, if I can get some time down the road, I'd like to implement some basic DOM manipulation functions. Personally I see Lumen as the best means to do mobile app development in Arc (which is probably one of the best things that could happen for Arc, IMHO). Arc on the server side, Lumen on the client side, and a code base that's usable by both would be really nice.
If you have a small number of key/value pairs and care about order then alists are both efficient and practical, but otherwise I would not give them too much weight.
So an example I can give is found in html.arc, where the start-tag function builds an alist to generate tag attributes.
Notice (pair (cdr spec)) is an alist. Now if you wanted to extend start-tag with conditional operations on the tag spec, you could bind (pair (cdr spec)) to, say, a var 'attrs' and use Arc's built-in functions to inspect and modify it as you see fit. As you can see, a table is probably not needed when there are only a half-dozen items at most, plus you would lose the order, while a flat (unpaired) list would keep you from doing anything meaningful like inspection or conditional modification.
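To make that concrete, here's what the expression produces for a typical tag spec (the spec itself is made up):

(let spec '(td bgcolor white align left)
  (pair (cdr spec)))
; => ((bgcolor white) (align left))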
It's worth noting that a serious downfall of alists is having no real means to detect their data type, because an alist doesn't have one (unlike a table). You could inspect the first item, attempting to detect the type that way, but the logic's soundness/efficiency breaks down pretty quickly in all but the simplest cases.
So, again, with a small number of pairs where you are confident in the shape of the data and have total control of the data usage (i.e. where and how it gets passed around), they are good, but otherwise you would need a pretty good or nuanced reason to use them.
Some of pg's other ideas [1] are: "having several that share the same tail, and preserve old values".
i.e. you could have some function that accumulates pairs, where some or many have the same key but a different value. This is for when you don't want the obvious behaviour a table provides (the last one added wins), and/or you need previous entries. Note that you can save an alist to a file and reload it easily while still having access to the history of values for a given key, so you're able to reconstruct your operations from historical data. You just can't do that with a table.
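A quick illustration with made-up data:

(let al nil
  (push '(x 1) al)
  (push '(x 2) al)                ; same key, newer value
  (prn (alref al 'x))             ; => 2, the newest wins on lookup
  (prn (keep [is car._ 'x] al)))  ; => ((x 2) (x 1)), the full history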
Yeah, in Clojure we have the standard hash-maps, but we also have array-maps (which maintain the insertion order) and sorted-maps (which are sorted by keys).
In the last ten years I've only needed ordered maps once or twice. In one case it was for a custom query language I wrote that generated queries from data alone.
eg. (query db :users (array-map :gender "female" :name "xena"))
so in this example adding the :gender clause first would restrict the query and improve performance.
I also remember, in my early Clojure days, I made the mistake of relying on standard hash-maps:
eg. (query db :users {:gender "female" :name "xena"})
Until I discovered a query where the performance died, and found out hash-maps are actually array-maps (under the hood) until 9 entries, at which point they auto-convert to real hash-maps and lose their order. (Clojure does this because maintaining order on bigger data sets is costly from an efficiency/memory perspective.)
Can you give an example of where the notation is better with tables than with alists? Maybe there's something you can do about it, e.g. writing a macro, using `defcall` or `defset`, or extending `sref`.
I recommend not expecting `quote` or `quasiquote` to be very useful outside the context of metaprogramming.
Quotation is helpful for metaprogramming in a tangible way, because we can easily refactor code in ways that move parts of it from being quoted to being unquoted or vice versa.
And quotation is limited to metaprogramming in a tangible way, because we can only quote data that's reasonably maintainable in the same format we're maintaining our other code in. For instance, an Arc `quote` or `quasiquote` operation is itself written inside an Arc source code file, which is plain text, so it isn't very useful for quoting graphics or audio data.
We can of course use other functions or macros to construct those kinds of data. That's essentially Arc's relationship with tables. When we've constructed tables, we've just used (obj ...) and (listtab ...) and such.
Adding tables to the language syntax is doable, but it could have some quirks.
; Should this cause an error, or should it result in the same thing as
; '(let i 0 `{,++.i "foo"}) or '(let i 0 `{,++.i "foo"})? Each option
; is a little surprising, since any slight edit to the code, like
; replacing one ++.i with (++ i 1), would give us valid code to
; construct a two-entry table, and this code very well might have
; arisen from a slight edit in the opposite direction.
'(let i 0
   `{,++.i "foo" ,++.i "bar"})
; Should this always result in 2, or should it result in 1 if "foo"
; comes last in the table's iteration order?
(let x 0
  `{"foo" ,(= x 1) "bar" ,(= x 2)}
  x)
Personally, I tend to go the other way: I prefer to have as few kinds of data as possible in a language's quotable syntax.
A macroexpander needs to extract two things from the code at any given time: The name of the next macro to expand, and the region of syntax the macro should operate on. Symbols help encode macro names. Lists and symbols together help encode regions of plain text. I think it's for these reasons that symbols and lists are so essential to Arc's syntax.
Arc has other kinds of data that commonly occur in its quotable syntax, like strings and numbers, but it doesn't need them nearly as much as symbols and lists. Arc could expand symbols like |"Hello, world!"| and |123| as ssyntax, or it could simply let everyone put up with writing things like (string '|Hello, world!|) and (int:string '|123|) each time.
Tables would fit particularly well into a language's quotable syntax if they somehow helped encode regions of syntax. For instance, if a macro body consisted of all the files in a directory, then a table could be an appropriate representation of that file collection.
> I recommend not expecting `quote` or `quasiquote` to be very useful outside the context of metaprogramming.
My immediate reaction is to disagree. A lot of the reason Lisp is so great is that quasiquotation is orthogonal to macros/metaprogramming.
> ; Should this cause an error, or should it result in the same thing as
> ; '(let i 0 `{,++.i "foo"}) or '(let i 0 `{,++.i "foo"})?
Those two fragments are the same?
In general it feels unnecessarily confusing to include long doc comments in code fragments here. We're already using prose to describe the code before and after.
Code comments make sense when sharing a utility that you expect readers to copy/paste directly into a file to keep around on their disks. But I don't think that's what you intend here?
Finally, both your examples seem to be more about side effects in literals? That is a bad idea whether it's a table literal or not, and whether it uses quasiquoting or not. Do you have a different example to show the issue without relying on side-effects?
I've replied separately about why I would say quasiquotation is only useful for code generation. In this reply I'll focus on the topic of the quirks we might have to deal with if we have Arc tables as quasiquotable syntax.
I think they're mostly unrelated topics, but I was using the quirks of tables in `quasiquote` to motivate keeping the number of quasiquotable syntaxes small and focused. Since I believe quotation is essentially only good for code generation (as I explain in more detail in the other reply), my preference is generally to focus the quasiquotable syntaxes on that purpose alone.
---
"In general it feels unnecessarily confusing to include long doc comments in code fragments here. We're already using prose to describe the code before and after."
Sorry, and thanks for the feedback on this.
There's a deeper problem here where my posts can get a bit long, with a lot of asides. :) I thought of those code examples as an aside or a subsection. If you were going to skim over the code, I wanted it to be syntactically easy to skim over the related prose at the same time.
This was something I felt was particularly worth skipping over. Ultimately, the quirks of using tables as syntax are mostly just as easy to put up with as the quirks of using tables for anything else. (I've gone to the trouble to make what I think of as non-quirky tables for Cene, but it's a very elaborate design, and I wouldn't actually expect to see non-quirky tables in Arc.)
Since I was only using these quirks to motivate why `quasiquote` would tend to be focused on code generation, I probably didn't invest enough space to fully explain what the quirks were. I'll try to explain them now....
---
"Those two fragments are the same?"
Whoops, those two fragments were supposed to be '(let i 0 `{,++.i "foo"}) and '(let i 0 `{,++.i "bar"}).
---
"Finally, both your examples seem to be more about side effects in literals? That is a bad idea whether it's a table literal or not, and whether it uses quasiquoting or not. Do you have a different example to show the issue without relying on side-effects?"
I don't know if I'd say the unquoted-key example depends on side effects, but the unquoted-value example very much does. Here it is again:
(let x 0
  `{"foo" ,(= x 1) "bar" ,(= x 2)}
  x)
The quirk here is that the usual left-to-right evaluation order of Arc can't necessarily be guaranteed for table-based syntax, and if the evaluation order matters for any reason, it must be because of some kind of side effect.
Removing side effects from the language is a great remedy for this, but typically that kind of effort can only go so far. In an untyped language, we usually have to deal with the side effects of run time type errors and nontermination, even if we eliminate everything else:
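; Reconstructed illustration -- (forever) is a made-up
; nonterminating call. Under left-to-right evaluation this raises
; a type error immediately; under another order it could spin
; forever without reporting anything:
`{"a" ,(car "not a cons") "b" ,(forever)}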
Even if we commit to programming without any run time errors or nontermination (perhaps enforcing termination with the help of a type system like that of Coq or Agda), we still have some cases like this where the order matters:
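; Reconstructed illustration -- the operation names are made up.
; Both entries terminate and neither errors, but the first one
; allocates an enormous structure:
`{"index" ,(build-64tb-lookup-table)
  "stats" ,(week-long-statistics-run)}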
A programmer in Arc or Racket might expect this program to reach a space limit relatively soon on machines with less than 64TB of space available, since Arc and Racket guarantee left-to-right evaluation order.
If the programmer actively intends for this program to fail fast, you and I will probably agree they would be better off sequencing the operations a little more explicitly, maybe like this:
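; e.g. a let binding that pins down which entry happens first:
(let index (build-64tb-lookup-table)
  `{"index" ,index
    "stats" ,(week-long-statistics-run)})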
But suppose the programmer doesn't initially realize the program will fail at all. It only crosses their mind when they come back to diagnose bugs in their code, at which point they expect these expressions to evaluate from left to right because that's what Arc and Racket normally guarantee.
That's when they have to realize that the tables in their syntax have gotten in the way of this guarantee.
Simple solution: We clearly document this so people don't expect left-to-right evaluation order in this situation.
Alternative simple solution: We make tables order-preserving so they can be evaluated as expected.
That covers the unquoted-value example.
Now let's consider the unquoted-key example:
'(let i 0
   `{,++.i "foo" ,++.i "bar"})
In this one, the quirk is that the two occurrences of ,++.i are expressed with the same syntax, so at read time the table would have two identical keys, even though the programmer may expect them to express different behavior.
While it looks like this example depends on side effects (in this case mutation), I'm not so sure it does. Here's an alternative example which shows the same issue without necessarily using side effects:
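; (current-location) is a hypothetical macro, described below:
`{,(current-location) "foo" ,(current-location) "bar"}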
This involves a hypothetical macro (current-location) which would expand to a string literal describing the filename, line, and column where it was expanded.
Is it a side effect? Maybe not; a file of code that used (current-location) would usually be semantically equivalent to a file that spelled out the same string literal by hand. In a language with separately compiled modules, both files might compile to the same result, which would make that semantic equivalence precise. In such a language, we typically wouldn't have any reason to mind if a module used (current-location) in its source code, even if we preferred to avoid it for some reason in our own code. This makes it into some kind of "safe" side effect, if it's even a side effect at all.
Nevertheless, within a single file, the expression (current-location) could look the same in two places but give different results.
That's where using `unquote` in table keys becomes quirky: The source code of two table keys may look identical (and hence cause a duplicate key conflict at the source code level) even if the programmer thinks of them as being different because they eventually generate different results.
Because of this quirk, the programmer may have to use some kind of workaround, like putting slightly different useless code into each key:
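; e.g. wrapping each key in distinct but useless extra code:
`{,(do 1 (current-location)) "foo"
  ,(do 2 (current-location)) "bar"}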
Simple solution: We clearly document this so programmers can use that workaround with confidence. To help make sure programmers are aware of this documentation, we report descriptive errors at read time or at "quasiquotation construction time" if a table would be made with duplicate keys.
Alternative simple solution: We decide never to allow table keys to be unquoted. If a table key appears to be unquoted, the table key actually consists of a list of the form (unquote ...). We still report errors at construction time or read time so programmers don't mistakenly believe `{same-key ,(foo) same-key ,(bar)} will evaluate both expressions (foo) and (bar).
Relying on the order arguments are evaluated in is always going to result in grief. Regardless of programming language. It's one of those noob mistakes that we've all made and learned from. I think we shouldn't be trying to protect people from such mistakes. I'd rather think about how we can get people to make such mistakes faster, so they can more rapidly build up the requisite scar tissue :)
So yes, we should document this, but not just in this particular case of tables. It feels more like something to bring up in the tutorial.
Edit: to be clear, I'm not (yet) supporting Kinnard's original proposal. I haven't fully digested it yet. I'm just responding to your comment in isolation ^_^
"My immediate reaction is to disagree. A lot of the reason Lisp is so great is that quasiquotation is orthogonal to macros/metaprogramming."
Do you have particular reasons in mind? It sounds like you're reserving those until you understand what I'm saying with my quasiquoted table examples, but I think those examples are mostly incidental to the point I'm making. (I'll clarify them in a separate reply.)
Maybe I can express this again in a different way.
I bet we can at least agree, on a definitional level, that quotation is good for constructing data out of data that's written directly in the code.
I contend quotation is only very useful when it comes to code generation.
If there were ever some kind of data we could quote that we couldn't use as program syntax, then we could just remove the quotation boundary and we'd have a fresh new design for a program syntax, which would bring us back up to parity between quotation and code generation.
In a Lispy language like Arc, usually it's possible to write a macro that acts as a synonym of `quote` itself. That means the set of things that can be passed to macros must be a superset of the things that can be passed to `quote`. Conversely, since all code should be quotable, the set of things that can be passed to `quote` must be a superset of the things passed to macros, so they're precisely the same set.
This time I've made it sound like some abstract property of macro system design, but it doesn't just come up in the design of an axiomatic language core; it comes up in the day-to-day use of the language, too. Quoted lists that don't begin with prefix operators are indented oddly compared to practically all the other lists in a Lispy program. I expect similar issues arise with syntax highlighting. In general, the habits and tooling we use with the language syntax don't treat quasiquoted non-code as a seamless part of the language. So, reserving quasiquotation for actual code generation purposes tends to let it help out in the places it really helps while keeping it out of the places where it causes awkward and distracting editor interactions.
> I bet we can at least agree, on a definitional level, that quotation is good for constructing data out of data that's written directly in the code.
No, I think I disagree there, assuming I'm understanding you correctly.
One common case where I used to use quasiquote was in data migrations, and there was never a macro in sight. I don't precisely remember a real use case involving RSS feeds and user data back in the day, but here's a made-up example.
Say you're running a MMORPG that started out in 2D, but you're now adding a third dimension, starting all players off at an elevation of 0m above sea level. Initially your user data is 2-tuples that look like this:
(lat long)
Now you want it to look like this:
(x y z)
..where x is the old latitude and z is the old longitude.
Here are two ways to perform this transform. Using quasiquote:
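; reconstructed sketch; destructuring the old 2-tuple directly:
(def migrate ((lat long))
  `(,lat 0 ,long))

..and the same thing with explicit constructors:

(def migrate ((lat long))
  (list lat 0 long))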
Hopefully that conveys the idea. Maybe the difference doesn't seem large, but imagine the schema gets more complex and more deeply nested. Having lots of `list` and `cons` tokens around is a drag.
I've always thought there's a deep duality between quasiquote and destructuring. Totally independent of macros.
"No, I think I disagree there, assuming I'm understanding you correctly."
That's interesting.... How would you describe what quotation does, then, if you wouldn't say it lets you write certain data directly in the code?
---
In your data migration example, I notice you're reading and writing the data. You're even putting newlines in it, which suggests you might sometimes view the contents of that written data directly. If you're viewing it directly, it makes sense to want the code that generates it to look similar to what it looks like in that representation.
It's not always feasible for code to resemble data, but since that file is plain text with s-expressions, and since the code that generates it is plain text with s-expressions, it is very possible: First you can pretend they're the exact same language, and then you can use `quasiquote` for code generation.
You might not have thought of it in that order, but I think the cases where `quasiquote` fails to be useful are exactly the cases where it's hard to pretend the generated data is in the same language as the code generating it.
---
"I've always thought there's a deep duality between quasiquote and destructuring."
I've always thought it would be more flexible if the first element of the list were a prefix operation, letting us destructure other things like tables and tagged values.
One of the few things I implemented in patmac.arc was a `quasiquote` pattern that resembles Arc destructuring just like you're talking about.
Racket doesn't need a library like patmac.arc because it already comes with a pattern-matching DSL with user-definable match expanders. One of Racket's built-in match syntaxes is `quasiquote`.
The main issue with alists is that the special syntax doesn't work and the notation is so verbose . . . I don't know if the efficiency issues would even come to bear for me.
EDIT: nesting is not behaving as I expect but that may be a product of my own misunderstanding.
I would be careful not to structure data/logic to accommodate a special syntax.
i.e. while using:
pipe!todo.1!id
is certainly fancy, writing a function is just as effective and most likely more performant, since it doesn't require read destructuring:
(fetch pipe todo first id)
So I'm suggesting you shape your syntax usage around your data, not your data around your syntax. You can always write a macro to obtain your desired level of brevity.
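For example, a `fetch` along these lines (a sketch; unlike the pseudocode above it takes quoted keys and plain indices, and just recurses through the structure):

(def fetch (data . path)
  (if no.path
      data
      (apply fetch (data car.path) cdr.path)))

(fetch pipe 'todo 1 'id)  ; instead of pipe!todo.1!id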
The general idea behind the fix is that quoted literals need to be treated as data. Arc now has two new functions for this purpose: quoted and unquoted.
The fact that (quote {a 1}) now becomes a hash table is a little strange. I'm not entirely sure that's correct behavior. It depends on whether (car '({a 1})) should yield a hash table. It seems like it should, which is reified in the code now.
EDIT: Ok, I've force-pushed the fixed commit. (Sorry for the force-push.)
If you `git reset --hard HEAD~1 && git pull` it should work now.
Personally, I found I prefer Racket's pretty-printing, horrible hash tables and all, to something like {pipe "water" a 1 b 2 c ...}, because if you try to evaluate `items` or `profs` you won't have a clue what the data is without pretty-printing.
And it turns out I suck at writing pretty-printers. Someone else do it!
I'm pretty sure Clojure is geared towards being practical over innovative. Or at least that's how I see it. When I think of Clojure I think in terms of its stability, reach, accessibility, correctness, and the thousands of small, seemingly insignificant design decisions that it got right. All of this adds up to Clojure being a great user experience that has garnered a really good community. It truly is the only Lisp dialect that large organizations outside of academia should take seriously.
edit - just to bring home the point: I spent the last several years on a project developing a significant Clojure code base with many libraries. For quite a while (the last 2 years) I had been avoiding upgrading my version of Clojure, but when Clojure 1.10.0 came out a few weeks back I decided to do it. I was worried, but I only hit 2 warnings, which took 15 minutes to fix. I'm telling you, that's pretty impressive from a stability point of view.
"Looking at it differently: to "get" Clojure one has to understand that there is no silver bullet: no single revolutionary "feature". Instead, the gains come from tiny improvements all over the place: immutable data structures, good built-in library for dealing with them, core.async, transducers, spec, Clojurescript, not having to convert to JSON or XML on the way to the client, re-using the same code on server-side and client-side, and more."