TurtleWare — Dynamic Let - The Empire Strikes Back
@2024-10-28 00:00
Table of Contents
- Thread Local storage exhausted
- The layer of indirection
- I can fix her
- Let's write some tests!
- Summary
Thread Local storage exhausted
In the last post I described a technique to use dynamic variables by value instead of by name, by utilizing the operator PROGV. Apparently it works fine on all Common Lisp implementations I've tried except for SBCL, where the number of thread-local variables is by default limited to something below 4000. To add insult to injury, these variables are not garbage collected. Try the following code to crash into LDB:
(defun foo ()
(loop for i from 0 below 4096 do
(when (zerop (mod i 100))
(print i))
(progv (list (gensym)) (list 42)
(values))))
(foo)
This renders our new technique not very practical, given SBCL's popularity. We need to either abandon the idea or come up with a workaround.
The layer of indirection
Luckily for us, we've already introduced a layer of indirection. The operators to access dynamic variables are called DLET, DSET and DREF. This means that it is enough to provide a kludge implementation for SBCL with minimal changes to the remaining code.
The old code works the same as previously, except that instead of SYMBOL-VALUE we use the accessor DYNAMIC-VARIABLE-VALUE, and the old call to PROGV is now DYNAMIC-VARIABLE-PROGV. Moreover, DYNAMIC-EFFECTIVE-SLOT used the functions BOUNDP and MAKUNBOUND, so we replace these with DYNAMIC-VARIABLE-BOUND-P and DYNAMIC-VARIABLE-MAKUNBOUND. To abstract things away further, we also introduce the constructor MAKE-DYNAMIC-VARIABLE.
(defpackage "EU.TURTLEWARE.BLOG/DLET"
(:local-nicknames ("MOP" #+closer-mop "C2MOP"
#+(and (not closer-mop) ecl) "MOP"
#+(and (not closer-mop) ccl) "CCL"
#+(and (not closer-mop) sbcl) "SB-MOP"))
(:use "CL"))
(in-package "EU.TURTLEWARE.BLOG/DLET")
(eval-when (:compile-toplevel :execute :load-toplevel)
(unless (member :bordeaux-threads *features*)
(error "Please load BORDEAUX-THREADS."))
(when (member :sbcl *features*)
(unless (member :fake-progv-kludge *features*)
(format t "~&;; Using FAKE-PROGV-KLUDGE for SBCL.~%")
(push :fake-progv-kludge *features*))))
(defmacro dlet (bindings &body body)
(flet ((pred (binding)
(and (listp binding) (= 2 (length binding)))))
(unless (every #'pred bindings)
(error "DLET: bindings must be lists of two values.~%~
Invalid bindings:~%~{ ~s~%~}" (remove-if #'pred bindings))))
(loop for (var val) in bindings
collect var into vars
collect val into vals
finally (return `(dynamic-variable-progv (list ,@vars) (list ,@vals)
,@body))))
(defmacro dset (&rest pairs)
`(setf ,@(loop for (var val) on pairs by #'cddr
collect `(dref ,var)
collect val)))
(defmacro dref (variable)
`(dynamic-variable-value ,variable))
;;; ...
(defmethod mop:slot-boundp-using-class
((class standard-class)
object
(slotd dynamic-effective-slot))
(dynamic-variable-bound-p (slot-dvar object slotd)))
(defmethod mop:slot-makunbound-using-class
((class standard-class)
object
(slotd dynamic-effective-slot))
(dynamic-variable-makunbound (slot-dvar object slotd)))
With these in place we can change the portable implementation to conform.
#-fake-progv-kludge
(progn
(defun make-dynamic-variable ()
(gensym))
(defun dynamic-variable-value (variable)
(symbol-value variable))
(defun (setf dynamic-variable-value) (value variable)
(setf (symbol-value variable) value))
(defun dynamic-variable-bound-p (variable)
(boundp variable))
(defun dynamic-variable-makunbound (variable)
(makunbound variable))
(defmacro dynamic-variable-progv (vars vals &body body)
`(progv ,vars ,vals ,@body)))
I can fix her
The implementation for SBCL will mediate access to the dynamic variable value with a synchronized hash table with weak keys and values. The current process is the hash table key and the list of bindings is the hash table value. For compatibility between implementations the top level value of the symbol will be shared.
The variable +FAKE-UNBOUND+ is the marker that signifies that the variable has no value. When the list of bindings is EQ to +CELL-UNBOUND+, it means that we should use the global value. We add new bindings by pushing to the list.
#+fake-progv-kludge
(progn
(defvar +fake-unbound+ 'unbound)
(defvar +cell-unbound+ '(no-binding))
(defclass dynamic-variable ()
((tls-table
:initform (make-hash-table :synchronized t :weakness :key-and-value)
:reader dynamic-variable-tls-table)
(top-value
:initform +fake-unbound+
:accessor dynamic-variable-top-value)))
(defun make-dynamic-variable ()
(make-instance 'dynamic-variable))
(defun dynamic-variable-bindings (dvar)
(let ((process (bt:current-thread))
(tls-table (dynamic-variable-tls-table dvar)))
(gethash process tls-table +cell-unbound+)))
(defun (setf dynamic-variable-bindings) (value dvar)
(let ((process (bt:current-thread))
(tls-table (dynamic-variable-tls-table dvar)))
(setf (gethash process tls-table +cell-unbound+) value))))
We define two readers for the variable value: one that simply reads the value, and another that signals an error if the variable is unbound. The writer for the value either replaces the current binding, or, if the value cell is unbound, modifies the top-level symbol value. We use the value +FAKE-UNBOUND+ to check whether the variable is bound and to make it unbound.
#+fake-progv-kludge
(progn
(defun %dynamic-variable-value (dvar)
(let ((tls-binds (dynamic-variable-bindings dvar)))
(if (eq tls-binds +cell-unbound+)
(dynamic-variable-top-value dvar)
(car tls-binds))))
(defun dynamic-variable-value (dvar)
(let ((tls-value (%dynamic-variable-value dvar)))
(when (eq tls-value +fake-unbound+)
(error 'unbound-variable :name "(unnamed)"))
tls-value))
(defun (setf dynamic-variable-value) (value dvar)
(let ((tls-binds (dynamic-variable-bindings dvar)))
(if (eq tls-binds +cell-unbound+)
(setf (dynamic-variable-top-value dvar) value)
(setf (car tls-binds) value))))
(defun dynamic-variable-bound-p (dvar)
(not (eq +fake-unbound+ (%dynamic-variable-value dvar))))
(defun dynamic-variable-makunbound (dvar)
(setf (dynamic-variable-value dvar) +fake-unbound+)))
Finally we define the operator to dynamically bind variables that behaves similarly to PROGV. Note that we PUSH and POP only the current thread's entry in the hash table, via the accessor DYNAMIC-VARIABLE-BINDINGS, so no synchronization is necessary.
#+fake-progv-kludge
(defmacro dynamic-variable-progv (vars vals &body body)
(let ((svars (gensym))
(svals (gensym))
(var (gensym))
(val (gensym)))
`(let ((,svars ,vars))
(loop for ,svals = ,vals then (rest ,svals)
for ,var in ,svars
for ,val = (if ,svals (car ,svals) +fake-unbound+)
do (push ,val (dynamic-variable-bindings ,var)))
(unwind-protect (progn ,@body)
(loop for ,var in ,svars
do (pop (dynamic-variable-bindings ,var)))))))
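With both implementations behind the same operators, basic usage looks like this (a hypothetical session; it assumes the definitions above are loaded, and the printed results reflect their intended semantics):

```lisp
;; Assumes MAKE-DYNAMIC-VARIABLE, DSET, DREF and DLET from above.
(defvar *dv* (make-dynamic-variable))

(dset *dv* 42)       ; set the top-level value
(dref *dv*)          ; => 42

(dlet ((*dv* 15))    ; establish a binding local to the body
  (dref *dv*))       ; => 15

(dref *dv*)          ; => 42, the binding was unwound
```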
Let's write some tests!
But of course, we are also going to write a test framework. It's short, I promise. As a bonus, the API is compatible with fiveam, so it is possible to drop the tests as-is into the appropriate test suite.
(defvar *all-tests* '())
(defun run-tests ()
(dolist (test (reverse *all-tests*))
(format *debug-io* "Test ~a... " test)
(handler-case (funcall test)
(serious-condition (c)
(format *debug-io* "Failed: ~a~%" c))
(:no-error (&rest args)
(declare (ignore args))
(format *debug-io* "Passed.~%")))))
(defmacro test (name &body body)
`(progn
(pushnew ',name *all-tests*)
(defun ,name () ,@body)))
(defmacro is (form)
`(assert ,form))
(defmacro pass ())
(defmacro signals (condition form)
`(is (block nil
(handler-case ,form
(,condition () (return t)))
nil)))
(defmacro finishes (form)
`(is (handler-case ,form
(serious-condition (c)
(declare (ignore c))
nil)
(:no-error (&rest args)
(declare (ignore args))
t))))
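For illustration, here is a hypothetical test written against this miniature framework (EXAMPLE.1 is not part of the article's suite):

```lisp
;; Assumes the TEST, IS, SIGNALS and FINISHES macros defined above.
(test example.1
  (is (= 4 (+ 2 2)))                  ; plain assertion
  (signals division-by-zero (/ 1 0))  ; passes because the error is signaled
  (finishes (list 1 2 3)))            ; passes because no error is signaled
;; (run-tests) then reports "Test EXAMPLE.1... Passed." among the results.
```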
Now let's get to tests. First we'll test our metaclass:
(defclass dynamic-let.test-class ()
((slot1 :initarg :slot1 :dynamic nil :accessor slot1)
(slot2 :initarg :slot2 :dynamic t :accessor slot2)
(slot3 :initarg :slot3 :accessor slot3))
(:metaclass class-with-dynamic-slots))
(defparameter *dynamic-let.test-instance-1*
(make-instance 'dynamic-let.test-class
:slot1 :a :slot2 :b :slot3 :c))
(defparameter *dynamic-let.test-instance-2*
(make-instance 'dynamic-let.test-class
:slot1 :x :slot2 :y :slot3 :z))
(test dynamic-let.1
(let ((o1 *dynamic-let.test-instance-1*)
(o2 *dynamic-let.test-instance-2*))
(with-slots (slot1 slot2 slot3) o1
(is (eq :a slot1))
(is (eq :b slot2))
(is (eq :c slot3)))
(with-slots (slot1 slot2 slot3) o2
(is (eq :x slot1))
(is (eq :y slot2))
(is (eq :z slot3)))))
(test dynamic-let.2
(let ((o1 *dynamic-let.test-instance-1*)
(o2 *dynamic-let.test-instance-2*))
(signals error (slot-dlet (((o1 'slot1) 1)) nil))
(slot-dlet (((o1 'slot2) :k))
(is (eq :k (slot-value o1 'slot2)))
(is (eq :y (slot-value o2 'slot2))))))
(test dynamic-let.3
(let ((o1 *dynamic-let.test-instance-1*)
(exit nil)
(fail nil))
(flet ((make-runner (values)
(lambda ()
(slot-dlet (((o1 'slot2) :start))
(let ((value (slot2 o1)))
(unless (eq value :start)
(setf fail value)))
(loop until (eq exit t) do
(setf (slot2 o1) (elt values (random (length values))))
(let ((value (slot2 o1)))
(unless (member value values)
(setf fail value)
(setf exit t))))))))
(let ((r1 (bt:make-thread (make-runner '(:k1 :k2))))
(r2 (bt:make-thread (make-runner '(:k3 :k4))))
(r3 (bt:make-thread (make-runner '(:k5 :k6)))))
(sleep .1)
(setf exit t)
(map nil #'bt:join-thread (list r1 r2 r3))
(is (eq (slot2 o1) :b))
(is (null fail))))))
Then let's test the dynamic variable itself:
(test dynamic-let.4
"Test basic dvar operators."
(let ((dvar (make-dynamic-variable)))
(is (eql 42 (dset dvar 42)))
(is (eql 42 (dref dvar)))
(ignore-errors
(dlet ((dvar :x))
(is (eql :x (dref dvar)))
(error "foo")))
(is (eql 42 (dref dvar)))))
(test dynamic-let.5
"Test bound-p operator."
(let ((dvar (make-dynamic-variable)))
(is (not (dynamic-variable-bound-p dvar)))
(dset dvar 15)
(is (dynamic-variable-bound-p dvar))
(dynamic-variable-makunbound dvar)
(is (not (dynamic-variable-bound-p dvar)))))
(test dynamic-let.6
"Test makunbound operator."
(let ((dvar (make-dynamic-variable)))
(dset dvar t)
(is (dynamic-variable-bound-p dvar))
(finishes (dynamic-variable-makunbound dvar))
(is (not (dynamic-variable-bound-p dvar)))))
(test dynamic-let.7
"Test locally bound-p operator."
(let ((dvar (make-dynamic-variable)))
(is (not (dynamic-variable-bound-p dvar)))
(dlet ((dvar 15))
(is (dynamic-variable-bound-p dvar)))
(is (not (dynamic-variable-bound-p dvar)))))
(test dynamic-let.8
"Test locally unbound-p operator."
(let ((dvar (make-dynamic-variable)))
(dset dvar t)
(is (dynamic-variable-bound-p dvar))
(dlet ((dvar nil))
(is (dynamic-variable-bound-p dvar))
(finishes (dynamic-variable-makunbound dvar))
(is (not (dynamic-variable-bound-p dvar))))
(is (dynamic-variable-bound-p dvar))))
(test dynamic-let.9
"Stress test the implementation (see :FAKE-PROGV-KLUDGE)."
(finishes ; at the same time
(let ((dvars (loop repeat 4096 collect (make-dynamic-variable))))
;; ensure tls variable
(loop for v in dvars do
(dlet ((v 1))))
(loop for i from 0 below 4096
for r = (random 4096)
for v1 in dvars
for v2 = (elt dvars r) do
(when (zerop (mod i 64))
(pass))
(dlet ((v1 42)
(v2 43))
(values))))))
(test dynamic-let.0
"Stress test the implementation (see :FAKE-PROGV-KLUDGE)."
(finishes ; can be gc-ed
(loop for i from 0 below 4096 do
(when (zerop (mod i 64))
(pass))
(dlet (((make-dynamic-variable) 42))
(values)))))
All that is left is to test both dynamic variable implementations:
BLOG/DLET> (lisp-implementation-type)
"ECL"
BLOG/DLET> (run-tests)
Test DYNAMIC-LET.1... Passed.
Test DYNAMIC-LET.2... Passed.
Test DYNAMIC-LET.3... Passed.
Test DYNAMIC-LET.4... Passed.
Test DYNAMIC-LET.5... Passed.
Test DYNAMIC-LET.6... Passed.
Test DYNAMIC-LET.7... Passed.
Test DYNAMIC-LET.8... Passed.
Test DYNAMIC-LET.9... Passed.
Test DYNAMIC-LET.0... Passed.
NIL
And with the kludge:
BLOG/DLET> (lisp-implementation-type)
"SBCL"
BLOG/DLET> (run-tests)
Test DYNAMIC-LET.1... Passed.
Test DYNAMIC-LET.2... Passed.
Test DYNAMIC-LET.3... Passed.
Test DYNAMIC-LET.4... Passed.
Test DYNAMIC-LET.5... Passed.
Test DYNAMIC-LET.6... Passed.
Test DYNAMIC-LET.7... Passed.
Test DYNAMIC-LET.8... Passed.
Test DYNAMIC-LET.9... Passed.
Test DYNAMIC-LET.0... Passed.
NIL
Summary
In this post we've made our implementation work on SBCL even when there are more than a few thousand dynamic variables. We've also added a simple test suite that checks the basic behavior.
As it often happens, after achieving some goal we get greedy and want to achieve more. That's the case here as well. In the next (and last) post in this series I'll explore the idea of adding truly thread-local variables without a shared global value. This will be useful for lazily creating context on threads that are outside of our control. We'll also generalize the implementation so that it is possible to subclass it and implement one's own flavor of a dynamic variable.
vindarel — Running my 4th Common Lisp script in production© - you can do it too
@2024-10-22 17:19
Last week I finished a new service written in Common Lisp. It now runs in production© every morning, and it expands the set of services I offer to clients.
It’s the 4th service of this kind that I developed:
- they are not big, but have to be done nonetheless, and the quicker the better (they each amount to 1k to 2k lines of Lisp code),
- they are not part of a super advanced domain that requires Common Lisp superpowers; I am the one who benefits from CL during development,
- I could have written them in Python, and conversely nothing prevented me from writing them in Common Lisp.
So here lies the goal of this post: to illustrate that you don’t need a super difficult problem to use Common Lisp. This has been asked many times, directly to me or on social media :)
At the same time, I want to encourage you to write a little something about how you use Common Lisp in the real world. Sharing creates emulation. Do it! If you don’t have a blog you can simply write in a new GitHub repository or in a Gist and come share on /r/lisp. We don’t care. Thanks <3
We’ll briefly see what my scripts do, what libraries I use, how I deploy them, what I did along the way.
Needless to say that I dogfooded my CIEL (beta) meta-library and scripting tool for all those projects.
Table of Contents
- Scripts n°4 and 2 - shaping and sending data - when you can write Lisp on the side
- Scripts n°3 and 1 - complementary web apps
- Lasting words
- Links
Scripts n°4 and 2 - shaping and sending data - when you can write Lisp on the side
My latest script needs to read data from a DB, format what’s necessary according to specifications, and send the result by SFTP.
In this case I read a DB that I own, created by a software that I develop and host. So I could have developed this script in the software itself, right? I could have, but I would have been tied to the main project’s versioning scheme, quirks, and deployment. I rather had to write this script on the side. And since it can be done on the side, it can be done in Common Lisp.
I have to extract products and their data (price, VAT...), aggregate the numbers for each day, write this to a file, according to a specification.
To read the DB, I used cl-dbi. I didn’t format the SQL with SxQL this time like in my web apps (where I use the Mito light ORM), but wrote SQL directly. I’m spoiled by the Django ORM (which has its idiosyncrasies and shortcomings), so I double-checked the different kinds of JOINs, and all went well.
I had to group rows by some properties, so it was a great time to use serapeum:assort. I left you an example here: https://dev.to/vindarel/common-lisps-group-by-is-serapeumassort-32ma
Dates have to be handled in different formats. I used local-time of course, and I still greatly appreciate its lispy formatter syntax:
(defun date-yymmddhhnnss (&optional date stream)
(local-time:format-timestring stream
(or date (local-time:now))
:format
'((:year 4)
(:month 2)
(:day 2)
(:hour 2)
(:min 2)
(:sec 2)
)))
The 2 in (:month 2) ensures the month is written with 2 digits.
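For instance, formatting a fixed timestamp built with local-time:encode-timestamp (a hypothetical call; the expected string follows from the directives above):

```lisp
;; Assumes the date-yymmddhhnnss function above and the local-time library.
;; encode-timestamp takes: nsec sec minute hour day month year.
(date-yymmddhhnnss
 (local-time:encode-timestamp 0 5 4 13 22 10 2024))
;; => "20241022130405"
```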
Once the file is written, I have to send it to a SFTP server, with the client’s codes.
I wrote a profile class to encapsulate the client’s data, as well as some functions to read the credentials from either environment variables, the file system, or a Lisp variable. I had a top-level profile object for ease of testing, but I made sure that my functions formatting or sending data required a profile parameter.
(defun send-stock (profile &key date) ...)
(defun write-stock (profile filename) ...)
Still nothing surprising, but it’s tempting to only use global parameters for a one-off script. Except the program grows and you pay the mess later.
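Such a profile class might be sketched like this (slot names and the environment-variable reader are illustrative assumptions, not the author's actual code):

```lisp
;; A hypothetical profile class holding the client's SFTP credentials.
(defclass profile ()
  ((host :initarg :host :reader profile-host)
   (user :initarg :user :reader profile-user)
   (password :initarg :password :reader profile-password)))

;; One possible credentials reader; the environment variable names are made up.
(defun make-profile-from-env ()
  (make-instance 'profile
                 :host (uiop:getenv "SFTP_HOST")
                 :user (uiop:getenv "SFTP_USER")
                 :password (uiop:getenv "SFTP_PASSWORD")))
```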
SFTP
To send the result through SFTP, I had to make a choice. The sftp command line doesn’t make it possible to give a password as an argument (or via an environment variable, etc.), so I use lftp (in Debian repositories), which allows that. In the end, we format a command like this:
lftp sftp://user:****@host -e "CD I/; put local-file.name; bye"
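A minimal way to build and run such a command from Lisp might look like this (a sketch with placeholder arguments; it assumes lftp is installed and that uiop is loaded):

```lisp
(defun send-file-sftp (host user password remote-dir local-file)
  "Upload LOCAL-FILE to REMOTE-DIR on HOST over SFTP by shelling out to lftp."
  (let ((command (format nil "lftp sftp://~a:~a@~a -e ~s"
                         user password host
                         ;; the -e script: change directory, upload, quit
                         (format nil "cd ~a; put ~a; bye"
                                 remote-dir local-file))))
    ;; A string command is passed to the shell by uiop:run-program.
    (uiop:run-program command :output t :error-output t)))
```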
You can format the command string and run it with uiop:run-program without a problem, but I took the opportunity to release another utility:
First, you create a profile
object. This one-liner reads the
credentials from a lispy file:
(defvar profile (make-profile-from-plist (uiop:read-file-form "CREDS.lisp-expr")))
then you define the commands you’ll want to run:
(defvar command (put :cd "I/" :local-filename "data.csv"))
;; #<PUT cd: "I/", filename: "data.csv" {1007153883}>
and finally you call the run
method on a profile and a command. Tada.
Deploying
Build a binary the classic way (it’s all on the Cookbook), send it to your server, run it.
(during a testing phase I have deployed “as a script”, from sources, which is a bit quicker to pull changes and try again on the server)
Set up a CRON job.
No Python virtual env to activate in the CRON environment...
Add command line arguments the easy way or with the library of your choice (I like Clingon).
Script n°2 and simple FTP
My script #2 at the time was similar and simpler. I extract the same products but only take their quantities, and I assemble lines like
EXTRACTION STOCK DU 11/04/2008
....978202019116600010000001387
....978270730656200040000000991
For this service, we have to send the file to a simple FTP server.
We have a pure Lisp library for FTP (and not SFTP) which works very well: cl-ftp.
It’s a typical example of an old library that hasn't received any updates in years and so looks abandoned, that has scant documentation but whose usage is easy to infer, and that does its job as requested.
For example we do this to send a file:
(ftp:with-ftp-connection (conn :hostname hostname
:username username
:password password
:passive-ftp-p t)
(ftp:store-file conn local-filename filename))
I left you notes about cl-ftp and my SFTP wrapper here:
Scripts n°3 and n°1 - specialized web apps
A recent web app that I’m testing with a couple clients extends an existing stock management system.
This one also was done in order to avoid a Python monolith. I still needed additions in the main Python software, but this little app can be independent and grow on its own. The app maintains its state and communicates it via a REST API.
It gives a web interface to their clients (so my clients’ clients, but not all of them, only the institutional ones) so that they can:
- search for products
- add them in shopping carts
- validate the cart, which sends the data to the main software and notifies the owner, who will work on them.
The peculiarities of this app are that:
- there is no user login, we use unique URLs with UUIDs in the form:
http://command.client.com/admin-E9DFOO82-R2D2-007/list?id=1
- I need a bit of file persistence but I didn’t want the rigidity of a database, so I am using the clache library. Here also, not a lot of activity, but it works©. I persist lists and hash-tables. Now that the needs grow and the original scope doesn’t cut it any more, I wonder how long I’ll survive without a DB, if only for its short SQL queries vs. Lisp code to filter data.
I deploy a self-contained binary: code + html templates in the same binary (+ the implementation, the web server, the debugger...), with Systemd.
I wrote more on how to ship a standalone binary with templates and static assets with Djula templates here:
I can connect to the running app with a Swank server to check and set parameters, which is super helpful and harmless.
It is possible to reload the whole app from within itself, and I did it with no hiccups for a couple of years, but it isn’t necessarily the most reliable, easiest to set up, or fastest method. You can do it, but nobody forces you to do this just because you are running CL in production. You can use the industry’s boring best practices too. Common Lisp doesn’t enforce a “big ball of mud” approach. Develop locally, use Git, use a CI, deploy a binary...
Everything I learned, I documented along the way in the Cookbook ;)
Another app that I’ll mention but about which I also wrote earlier is my first web app. This one is open-source. It still runs :)
In this project I had my friend and colleague contribute five lines of Lisp code to add a theme switcher in the backend that would help him do the frontend. He had never written a line of Lisp before. Of course, he did so by looking at my existing code to learn the existing functions at hand, and he could do it because the project was easy to install and run.
(defun get-template (template &optional (theme *theme*))
"Loads template from the base templates directory or from the given theme templates directory if it exists."
(if (and (str:non-blank-string-p theme)
(probe-file (asdf:system-relative-pathname "abstock" (str:concat "src/templates/themes/" theme "/" template))))
;; then
(str:concat "themes/" theme "/" template)
;; else :D
template))
He had to annotate the if
branches :] This passed the code review.
Lasting words
The 5th script/app is already on the way, and the next ones are waiting for me to open their .docx specification files. This one was a bit harder, but the Lisp side was done successfully with the efficient collaboration of another freelance lisper (Kevin, to not name him).
All those tasks (read a DB, transform data...) are very mundane.
They are everywhere. They don’t always need a supercharged web framework or integrations.
You have plenty of opportunities to do yourself a favor and use Common Lisp in the wild. Not counting the super-advanced domains where Lisp excels ;)
Links
- https://lispcookbook.github.io/cl-cookbook/
- awesome-cl
- companies using Common Lisp in production (at least the ones we know)
- Common Lisp course in videos – it helps me, and you ;) I added 9 videos about CLOS last month, and more are coming. It’s 86 minutes of an efficient code-first approach, out of 7+ hours of total content in the course. After this chapter you know enough to read the sources of the Hunchentoot web server or of the Kandria game.
I have done some preliminary Common Lisp exploration prior to this course but had a lot of questions regarding practical use and development workflows. This course was amazing for this! I learned a lot of useful techniques for actually writing the code in Emacs, as well as conversational explanations of concepts that had previously confused me in text-heavy resources. Please keep up the good work and continue with this line of topics, it is well worth the price! [Preston, October of 2024]
TurtleWare — Dynamic Let
@2024-10-22 00:00
Dynamic Bindings
Common Lisp has an important language feature called dynamic binding. It is possible to rebind a dynamic variable somewhere on the call stack so that downstream functions see the new value, and when the stack is unwound, the old value is brought back.
While Common Lisp does not specify multi-threading, it seems to be a consensus among various implementations that dynamic bindings are thread-local, allowing for controlling the computing context in a safe way.
Before we start experiments, let's define a package to isolate our namespace:
(defpackage "EU.TURTLEWARE.BLOG/DLET"
(:local-nicknames ("MOP" #+closer-mop "C2MOP"
#+(and (not closer-mop) ecl) "MOP"
#+(and (not closer-mop) ccl) "CCL"
#+(and (not closer-mop) sbcl) "SB-MOP"))
(:use "CL"))
(in-package "EU.TURTLEWARE.BLOG/DLET")
Dynamic binding of variables is transparent to the programmer, because the operator LET is used for both lexical and dynamic bindings. For example:
(defvar *dynamic-variable* 42)
(defun test ()
(let ((*dynamic-variable* 15)
(lexical-variable 12))
(lambda ()
(print (cons *dynamic-variable* lexical-variable)))))
(funcall (test))
;;; (42 . 12)
(let ((*dynamic-variable* 'xx))
(funcall (test)))
;;; (xx . 12)
Additionally, the language specifies a special operator PROGV that gives the programmer control over the dynamic binding mechanism by allowing the dynamic variable to be passed by value instead of by name. Dynamic variables are represented by symbols:
(progv (list '*dynamic-variable*) (list 'zz)
(funcall (test)))
;;; (zz . 12)
The problem
Nowadays it is common to encapsulate state in an instance of a class. Sometimes that state is dynamic. It would be nice if we could use dynamic binding to control it. That said, slots are not variables, and if there are many objects of the same class with different states, then using dynamic variables defined with DEFVAR is not feasible.
Consider the following classes which we want to be thread-safe:
(defgeneric call-with-ink (cont window ink))
(defclass window-1 ()
((ink :initform 'red :accessor ink)))
(defmethod call-with-ink (cont (win window-1) ink)
(let ((old-ink (ink win)))
(setf (ink win) ink)
(unwind-protect (funcall cont)
(setf (ink win) old-ink))))
(defclass window-2 ()
())
(defvar *ink* 'blue)
(defmethod ink ((window window-2)) *ink*)
(defmethod call-with-ink (cont (win window-2) ink)
(let ((*ink* ink))
(funcall cont)))
The first example is clearly not thread-safe. If we access the WINDOW-1 instance from multiple threads, they will overwrite the value of the slot INK.
The second example is not good either, because when we have many instances of WINDOW-2, they share the binding. Nesting CALL-WITH-INK will overwrite the binding of another window.
The solution
The solution is to use PROGV:
(defclass window-3 ()
((ink :initform (gensym))))
(defmethod initialize-instance :after ((win window-3) &key)
(setf (symbol-value (slot-value win 'ink)) 'red))
(defmethod call-with-ink (cont (win window-3) ink)
(progv (list (slot-value win 'ink)) (list ink)
(funcall cont)))
This way each instance has its own dynamic variable that may be rebound with a designated operator, CALL-WITH-INK. It is thread-safe and private. We may add some syntactic sugar so it is more similar to LET:
(defmacro dlet (bindings &body body)
(loop for (var val) in bindings
collect var into vars
collect val into vals
finally (return `(progv (list ,@vars) (list ,@vals)
,@body))))
(defmacro dset (&rest pairs)
`(setf ,@(loop for (var val) on pairs by #'cddr
collect `(symbol-value ,var)
collect val)))
(defmacro dref (variable)
`(symbol-value ,variable))
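These macros can be exercised directly with a fresh symbol standing in for the dynamic variable (a hypothetical session; results reflect the PROGV semantics described above):

```lisp
;; Assumes the DLET, DSET and DREF macros defined above.
(defvar *dvar* (gensym))   ; the variable itself is a value we pass around

(dset *dvar* :global)
(dref *dvar*)              ; => :GLOBAL

(dlet ((*dvar* :local))    ; rebinds the gensym via PROGV
  (dref *dvar*))           ; => :LOCAL

(dref *dvar*)              ; => :GLOBAL again, after unwinding
```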
Dynamic slots
While metaclasses are not easily composable, it is worth noting that we can mold this approach better into the language by specifying that the slot itself has a dynamic value. This way CLOS aficionados will have a new tool in their arsenal.
The approach we'll take is that a fresh symbol is stored as the value of each instance-allocated slot, and then accessors for the slot value will use these symbols as a dynamic variable. Here are low-level accessors:
;;; Accessing and binding symbols behind the slot. We don't use SLOT-VALUE,
;;; because it will return the _value_ of the dynamic variable, and not the
;;; variable itself.
(defun slot-dvar (object slotd)
(mop:standard-instance-access
object (mop:slot-definition-location slotd)))
(defun slot-dvar* (object slot-name)
(let* ((class (class-of object))
(slotd (find slot-name (mop:class-slots class)
:key #'mop:slot-definition-name)))
(slot-dvar object slotd)))
(defmacro slot-dlet (bindings &body body)
`(dlet ,(loop for ((object slot-name) val) in bindings
collect `((slot-dvar* ,object ,slot-name) ,val))
,@body))
Now we'll define the metaclass. We need it to specialize the functions responsible for processing slot definitions and instance allocation. Notice that we make use of a kludge to communicate between COMPUTE-EFFECTIVE-SLOT-DEFINITION and EFFECTIVE-SLOT-DEFINITION-CLASS; this is because the latter has no access to the direct slot definitions.
;;; The metaclass CLASS-WITH-DYNAMIC-SLOTS specifies alternative effective slot
;;; definitions for slots with an initarg :dynamic.
(defclass class-with-dynamic-slots (standard-class) ())
;;; Class with dynamic slots may be subclasses of the standard class.
(defmethod mop:validate-superclass ((class class-with-dynamic-slots)
(super standard-class))
t)
;;; When allocating the instance we initialize all slots to a fresh symbol that
;;; represents the dynamic variable.
(defmethod allocate-instance ((class class-with-dynamic-slots) &rest initargs)
(declare (ignore initargs))
(let ((object (call-next-method)))
(loop for slotd in (mop:class-slots class)
when (typep slotd 'dynamic-effective-slot) do
(setf (mop:standard-instance-access
object
(mop:slot-definition-location slotd))
(gensym (string (mop:slot-definition-name slotd)))))
object))
;;; To improve potential composability of CLASS-WITH-DYNAMIC-SLOTS with other
;;; metaclasses, we treat specially only slots that have :DYNAMIC in their
;;; initargs; otherwise we call the next method.
(defmethod mop:direct-slot-definition-class
((class class-with-dynamic-slots) &rest initargs)
(loop for (key val) on initargs by #'cddr
when (eq key :dynamic)
do (return-from mop:direct-slot-definition-class
(find-class 'dynamic-direct-slot)))
(call-next-method))
;;; The metaobject protocol did not specify an elegant way to communicate
;;; between the direct slot definition and the effective slot definition.
;;; Luckily we have dynamic bindings! :-)
(defvar *kludge/mop-deficiency/dynamic-slot-p* nil)
(defmethod mop:compute-effective-slot-definition
((class class-with-dynamic-slots)
name
direct-slotds)
(if (typep (first direct-slotds) 'dynamic-direct-slot)
(let* ((*kludge/mop-deficiency/dynamic-slot-p* t))
(call-next-method))
(call-next-method)))
(defmethod mop:effective-slot-definition-class
((class class-with-dynamic-slots) &rest initargs)
(declare (ignore initargs))
(if *kludge/mop-deficiency/dynamic-slot-p*
(find-class 'dynamic-effective-slot)
(call-next-method)))
Finally we define direct and effective slot classes, and specialize the slot accessors that are invoked by the instance accessors.
;;; There is a considerable boilerplate involving customizing slots.
;;;
;;; - direct slot definition: local to a single defclass form
;;;
;;; - effective slot definition: combination of all direct slots with the same
;;; name in the class and its superclasses
;;;
(defclass dynamic-direct-slot (mop:standard-direct-slot-definition)
((dynamic :initform nil :initarg :dynamic :reader dynamic-slot-p)))
;;; DYNAMIC-EFFECTIVE-SLOT is implemented to return as slot-value values of the
;;; dynamic variable that is stored with the instance.
;;;
;;; It would be nice if we could specify :ALLOCATION :DYNAMIC for the slot, but
;;; then STANDARD-INSTANCE-ACCESS would go belly up. We could make a clever
;;; workaround, but who cares?
(defclass dynamic-effective-slot (mop:standard-effective-slot-definition)
())
(defmethod mop:slot-value-using-class
((class class-with-dynamic-slots)
object
(slotd dynamic-effective-slot))
(dref (slot-dvar object slotd)))
(defmethod (setf mop:slot-value-using-class)
(new-value
(class class-with-dynamic-slots)
object
(slotd dynamic-effective-slot))
(dset (slot-dvar object slotd) new-value))
(defmethod mop:slot-boundp-using-class
((class class-with-dynamic-slots)
object
(slotd dynamic-effective-slot))
(boundp (slot-dvar object slotd)))
(defmethod mop:slot-makunbound-using-class
((class class-with-dynamic-slots)
object
(slotd dynamic-effective-slot))
(makunbound (slot-dvar object slotd)))
With this, we can finally define a class with slots that have dynamic values. What's more, we may bind them like dynamic variables.
;;; Let there be light.
(defclass window-4 ()
((ink :initform 'red :dynamic t :accessor ink)
(normal :initform 'normal :accessor normal))
(:metaclass class-with-dynamic-slots))
(let ((object (make-instance 'window-4)))
(slot-dlet (((object 'ink) 15))
(print (ink object)))
(print (ink object)))
ContextL provides a similar solution with dynamic slots, although it provides much more, like layered classes. This example is much more self-contained.
The context
Lately I'm working on the repaint queue for McCLIM. While doing so I've decided to make stream operations thread-safe, so it is possible to draw on the stream and write to it from arbitrary thread asynchronously. The access to the output record history needs to be clearly locked, so that may be solved by the mutex. Graphics state is another story, consider the following functions running from separate threads:
(defun team-red ()
(with-drawing-options (stream :ink +dark-red+)
(loop for i from 0 below 50000 do
(write-string (format nil "XXX: ~5d~%" i) stream))))
(defun team-blue ()
(with-drawing-options (stream :ink +dark-blue+)
(loop for i from 0 below 50000 do
(write-string (format nil "YYY: ~5d~%" i) stream))))
(defun team-pink ()
(with-drawing-options (stream :ink +deep-pink+)
(loop for i from 0 below 25000 do
(case (random 2)
(0 (draw-rectangle* stream 200 (* i 100) 250 (+ (* i 100) 50)))
(1 (draw-circle* stream 225 (+ (* i 100) 25) 25))))))
(defun gonow (stream)
(window-clear stream)
(time (let ((a (clim-sys:make-process #'team-red))
(b (clim-sys:make-process #'team-blue))
(c (clim-sys:make-process #'team-grue)))
(bt:join-thread a)
(bt:join-thread b)
(bt:join-thread c)
(format stream "done!~%"))) )
Operations like WRITE-STRING
and DRAW-RECTANGLE
can be implemented by
holding a lock over the shared resource without much disruption. The drawing
color on the other hand is set outside of the loop, so if we had locked the
graphics state with a lock, then these functions would be serialized despite
being called from different processes. The solution to this problem is to make
graphics context a dynamic slot that is accessed with WITH-DRAWING-OPTIONS
.
Summary
I hope that I've convinced you that dynamic variables are cool (I'm sure that majority of readers here are already convinced), and that dynamic slots are even cooler :-). Watch forward to the upcoming McCLIM release!
If you like technical writeups like this, please consider supporting me on Patreon.
TurtleWare — Dynamic Let - A New Hope
@2024-10-22 00:00 · 9 days ago
Table of Contents
- Dynamic Bindings
- The problem
- The solution
- Dynamic slots
- The context
- Summary
Common Lisp has an important language feature called dynamic binding. It is
possible to rebind a dynamic variable somewhere on the call stack and downstream
functions will see that new value; when the stack is unwound, the old value
is brought back.
While Common Lisp does not specify multi-threading, it seems to be a consensus among various implementations that dynamic bindings are thread-local, allowing for controlling the computing context in a safe way.
Before we start experiments, let's define a package to isolate our namespace:
(defpackage "EU.TURTLEWARE.BLOG/DLET"
(:local-nicknames ("MOP" #+closer-mop "C2MOP"
#+(and (not closer-mop) ecl) "MOP"
#+(and (not closer-mop) ccl) "CCL"
#+(and (not closer-mop) sbcl) "SB-MOP"))
(:use "CL"))
(in-package "EU.TURTLEWARE.BLOG/DLET")
Dynamic binding of variables is transparent to the programmer, because the
operator LET is used for both lexical and dynamic bindings. For example:
(defvar *dynamic-variable* 42)
(defun test ()
(let ((*dynamic-variable* 15)
(lexical-variable 12))
(lambda ()
(print (cons *dynamic-variable* lexical-variable)))))
(funcall (test))
;;; (42 . 12)
(let ((*dynamic-variable* 'xx))
(funcall (test)))
;;; (xx . 12)
Additionally the language specifies a special operator PROGV that gives the
programmer control over the dynamic binding mechanism, by allowing the dynamic
variable to be passed by value instead of by name. Dynamic variables are
represented by symbols:
(progv (list '*dynamic-variable*) (list 'zz)
(funcall (test)))
;;; (zz . 12)
The problem
Nowadays it is common to encapsulate state in an instance of a class.
Sometimes that state is dynamic, and it would be nice if we could use dynamic
binding to control it. That said, slots are not variables, and if there are many
objects of the same class with different states, then using dynamic variables
defined with DEFVAR is not feasible.
Consider the following classes which we want to be thread-safe:
(defgeneric call-with-ink (cont window ink))
(defclass window-1 ()
((ink :initform 'red :accessor ink)))
(defmethod call-with-ink (cont (win window-1) ink)
(let ((old-ink (ink win)))
(setf (ink win) ink)
(unwind-protect (funcall cont)
(setf (ink win) old-ink))))
(defclass window-2 ()
())
(defvar *ink* 'blue)
(defmethod ink ((window window-2)) *ink*)
(defmethod call-with-ink (cont (win window-2) ink)
(let ((*ink* ink))
(funcall cont)))
The first example is clearly not thread-safe. If we access a WINDOW-1
instance from multiple threads, then they will overwrite the value of the slot
INK.
The second example is not good either, because when we have many instances of
WINDOW-2 they all share the binding: nesting CALL-WITH-INK will overwrite
the binding of another window.
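To see the sharing concretely, here is a hypothetical REPL session with two
WINDOW-2 instances; the inner CALL-WITH-INK clobbers the ink of both windows,
because both read the same *INK*:

```lisp
(defvar *w1* (make-instance 'window-2))
(defvar *w2* (make-instance 'window-2))

(call-with-ink
 (lambda ()
   (call-with-ink
    (lambda ()
      ;; Only *W2* was "rebound", yet both windows see GREEN.
      (list (ink *w1*) (ink *w2*)))
    *w2* 'green))
 *w1* 'yellow)
;; => (GREEN GREEN)
```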
The solution
The solution is to use PROGV:
(defclass window-3 ()
((ink :initform (gensym))))
(defmethod initialize-instance :after ((win window-3) &key)
(setf (symbol-value (slot-value win 'ink)) 'red))
(defmethod call-with-ink (cont (win window-3) ink)
(progv (list (slot-value win 'ink)) (list ink)
(funcall cont)))
This way each instance has its own dynamic variable that may be rebound with a
designated operator CALL-WITH-INK. It is thread-safe and private. We may add
some syntactic sugar so it is more similar to LET:
(defmacro dlet (bindings &body body)
(loop for (var val) in bindings
collect var into vars
collect val into vals
finally (return `(progv (list ,@vars) (list ,@vals)
,@body))))
(defmacro dset (&rest pairs)
`(setf ,@(loop for (var val) on pairs by #'cddr
collect `(symbol-value ,var)
collect val)))
(defmacro dref (variable)
`(symbol-value ,variable))
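A hypothetical REPL session with these macros; note that the variable is passed
by value, so it may just as well live in a slot or a closure:

```lisp
(defvar *var* (gensym))       ; a fresh, unbound dynamic variable

(dlet ((*var* 42))
  (print (dref *var*))        ; prints 42
  (dset *var* 15)
  (print (dref *var*)))       ; prints 15

;; PROGV unwinds the binding, so the variable is unbound again.
(boundp *var*)                ; => NIL
```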
Dynamic slots
While metaclasses are not easily composable, it is worth noting that we can mold this technique better into the language by specifying that the slot itself has a dynamic value. This way CLOS aficionados will have a new tool in their arsenal.
The approach we'll take is that a fresh symbol is stored as the value of each instance-allocated slot, and then accessors for the slot value will use these symbols as a dynamic variable. Here are low-level accessors:
;;; Accessing and binding symbols behind the slot. We don't use SLOT-VALUE,
;;; because it will return the _value_ of the dynamic variable, and not the
;;; variable itself.
(defun slot-dvar (object slotd)
(mop:standard-instance-access
object (mop:slot-definition-location slotd)))
(defun slot-dvar* (object slot-name)
(let* ((class (class-of object))
(slotd (find slot-name (mop:class-slots class)
:key #'mop:slot-definition-name)))
(slot-dvar object slotd)))
(defmacro slot-dlet (bindings &body body)
`(dlet ,(loop for ((object slot-name) val) in bindings
collect `((slot-dvar* ,object ,slot-name) ,val))
,@body))
Now we'll define the metaclass. We need it to specialize the functions responsible
for processing slot definitions and instance allocation. Notice that we
make use of a kludge to communicate between COMPUTE-EFFECTIVE-SLOT-DEFINITION
and EFFECTIVE-SLOT-DEFINITION-CLASS; this is because the latter has no
access to the direct slot definitions.
;;; The metaclass CLASS-WITH-DYNAMIC-SLOTS specifies alternative effective slot
;;; definitions for slots with an initarg :dynamic.
(defclass class-with-dynamic-slots (standard-class) ())
;;; Class with dynamic slots may be subclasses of the standard class.
(defmethod mop:validate-superclass ((class class-with-dynamic-slots)
(super standard-class))
t)
;;; When allocating the instance we initialize all slots to a fresh symbol that
;;; represents the dynamic variable.
(defmethod allocate-instance ((class class-with-dynamic-slots) &rest initargs)
(declare (ignore initargs))
(let ((object (call-next-method)))
(loop for slotd in (mop:class-slots class)
when (typep slotd 'dynamic-effective-slot) do
(setf (mop:standard-instance-access
object
(mop:slot-definition-location slotd))
(gensym (string (mop:slot-definition-name slotd)))))
object))
;;; To improve potential composability of CLASS-WITH-DYNAMIC-SLOTS with other
;;; metaclasses we treat specially only slots that have :DYNAMIC in their
;;; initargs; otherwise we call the next method.
(defmethod mop:direct-slot-definition-class
((class class-with-dynamic-slots) &rest initargs)
(loop for (key val) on initargs by #'cddr
when (eq key :dynamic)
do (return-from mop:direct-slot-definition-class
(find-class 'dynamic-direct-slot)))
(call-next-method))
;;; The metaobject protocol did not specify an elegant way to communicate
;;; between the direct slot definition and the effective slot definition.
;;; Luckily we have dynamic bindings! :-)
(defvar *kludge/mop-deficiency/dynamic-slot-p* nil)
(defmethod mop:compute-effective-slot-definition
((class class-with-dynamic-slots)
name
direct-slotds)
(if (typep (first direct-slotds) 'dynamic-direct-slot)
(let* ((*kludge/mop-deficiency/dynamic-slot-p* t))
(call-next-method))
(call-next-method)))
(defmethod mop:effective-slot-definition-class
((class class-with-dynamic-slots) &rest initargs)
(declare (ignore initargs))
(if *kludge/mop-deficiency/dynamic-slot-p*
(find-class 'dynamic-effective-slot)
(call-next-method)))
Finally we define a direct and an effective slot classes, and specialize slot accessors that are invoked by the instance accessors.
;;; There is a considerable boilerplate involving customizing slots.
;;;
;;; - direct slot definition: local to a single defclass form
;;;
;;; - effective slot definition: combination of all direct slots with the same
;;; name in the class and its superclasses
;;;
(defclass dynamic-direct-slot (mop:standard-direct-slot-definition)
((dynamic :initform nil :initarg :dynamic :reader dynamic-slot-p)))
;;; DYNAMIC-EFFECTIVE-SLOT is implemented to return as slot-value values of the
;;; dynamic variable that is stored with the instance.
;;;
;;; It would be nice if we could specify :ALLOCATION :DYNAMIC for the slot, but
;;; then STANDARD-INSTANCE-ACCESS would go belly up. We could make a clever
;;; workaround, but who cares?
(defclass dynamic-effective-slot (mop:standard-effective-slot-definition)
())
(defmethod mop:slot-value-using-class
((class class-with-dynamic-slots)
object
(slotd dynamic-effective-slot))
(dref (slot-dvar object slotd)))
(defmethod (setf mop:slot-value-using-class)
(new-value
(class class-with-dynamic-slots)
object
(slotd dynamic-effective-slot))
(dset (slot-dvar object slotd) new-value))
(defmethod mop:slot-boundp-using-class
((class class-with-dynamic-slots)
object
(slotd dynamic-effective-slot))
(boundp (slot-dvar object slotd)))
(defmethod mop:slot-makunbound-using-class
((class class-with-dynamic-slots)
object
(slotd dynamic-effective-slot))
(makunbound (slot-dvar object slotd)))
With this, we can finally define a class with slots that have dynamic values. What's more, we may bind them like dynamic variables.
;;; Let there be light.
(defclass window-4 ()
((ink :initform 'red :dynamic t :accessor ink)
(normal :initform 'normal :accessor normal))
(:metaclass class-with-dynamic-slots))
(let ((object (make-instance 'window-4)))
(slot-dlet (((object 'ink) 15))
(print (ink object)))
(print (ink object)))
ContextL provides a similar solution with dynamic slots, although it offers much more, like layered classes. This example, on the other hand, is much more self-contained.
The context
Lately I'm working on the repaint queue for McCLIM. While doing so I've decided to make stream operations thread-safe, so it is possible to draw on the stream and write to it from arbitrary thread asynchronously. The access to the output record history needs to be clearly locked, so that may be solved by the mutex. Graphics state is another story, consider the following functions running from separate threads:
(defun team-red ()
(with-drawing-options (stream :ink +dark-red+)
(loop for i from 0 below 50000 do
(write-string (format nil "XXX: ~5d~%" i) stream))))
(defun team-blue ()
(with-drawing-options (stream :ink +dark-blue+)
(loop for i from 0 below 50000 do
(write-string (format nil "YYY: ~5d~%" i) stream))))
(defun team-pink ()
(with-drawing-options (stream :ink +deep-pink+)
(loop for i from 0 below 25000 do
(case (random 2)
(0 (draw-rectangle* stream 200 (* i 100) 250 (+ (* i 100) 50)))
(1 (draw-circle* stream 225 (+ (* i 100) 25) 25))))))
(defun gonow (stream)
(window-clear stream)
(time (let ((a (clim-sys:make-process #'team-red))
(b (clim-sys:make-process #'team-blue))
(c (clim-sys:make-process #'team-pink)))
(bt:join-thread a)
(bt:join-thread b)
(bt:join-thread c)
(format stream "done!~%"))))
Operations like WRITE-STRING and DRAW-RECTANGLE* can be implemented by
holding a lock over the shared resource without much disruption. The drawing
color, on the other hand, is set outside of the loop, so if we had guarded the
graphics state with a lock, then these functions would be serialized despite
being called from different processes. The solution to this problem is to make
the graphics context a dynamic slot that is accessed with WITH-DRAWING-OPTIONS.
Summary
I hope that I've convinced you that dynamic variables are cool (I'm sure that the majority of readers here are already convinced), and that dynamic slots are even cooler :-). Look forward to the upcoming McCLIM release!
If you like technical writeups like this, please consider supporting me on Patreon.
Scott L. Burson — Comparison: FSet vs. Sycamore
@2024-10-18 05:35 · 13 days ago
[BULLETIN: Quicklisp now has the latest version of FSet.]
Sycamore, primarily by Neil Dantam, is a functional collections library that is built around the same weight-balanced binary tree data structure (with leaf vectors) that FSet uses. While the README on that page comments briefly on the differences between Sycamore and FSet, I don't feel that it does FSet justice. Here is my analysis.
Dantam claims that his library is 30% to 50% faster than FSet on common operations. While I haven't done comprehensive micro-benchmarking, a couple of quick tests indicate that this claim is plausible. A look through the internals of the implementation confirms that it is clean and tight, and I must commend him. There may be some techniques in here that I could usefully borrow.
Most of the performance difference is necessitated by two design choices that were made differently in the two libraries. One of these Dantam mentions in his comparison: FSet's use of a single, global ordering relation implemented as a CLOS generic function, vs. Sycamore's more standard choice of requiring a comparison function to be supplied when a collection is created. The other one he doesn't mention: the fact that FSet supports a notion of equivalent-but-unequal values, which are values that are incomparable — there's no way, or at least no obvious way, to say which is less than the other, and yet we want to treat them as unequal. The simplest example is the integer 1 and the single-float 1.0, which have equal numerical values (and cl:= returns true on them), but which are nonetheless not eql. (I have a previous blog post that goes into a lot more detail about equality and comparison.) Since Sycamore expects the user-supplied comparison function to return an integer that is negative, zero, or positive to indicate the ordering of its arguments, there's no encoding for the equivalent-but-unequal case, nor is there any of the code that would be required to handle that case.
Both of these decisions were driven by my goal for the FSet project. I didn't just want to provide a functional collections library that could be called occasionally when one had a specific need for such a data structure. My ambition was much grander: to make functional collections into a reasonable default choice for the vast majority of programming situations. I wanted FSet users (including, of course, myself) to be able to use functional collections freely, with very little extra effort or thought. While Lisp by itself reaches a little bit in this direction — lists can certainly be used functionally — lists used as functional collections run into severe time complexity problems as those collections get large. I wanted the FSet collections to be as convenient and well-supported as lists, but without the time complexity issues.
— Or rather, I wanted them to be even more convenient than lists. Before writing FSet, I had spent years working in a little-known proprietary language called Refine, which happened to be implemented on top of Common Lisp, so it was not unusual to switch between the two languages. And I had noticed something. In contrast to CL, with its several different predefined equality predicates and with its functions that take :test arguments to specify which one to use, Refine has a single notion of equality. The value space is cleanly divided between immutable types, which are compared by value — along with numbers, these include strings, sets, maps, and seqs — and mutable objects, which are always compared by identity. And it worked! I found I did not miss the ability to specify an equality predicate when performing an operation such as "union". It was just never needed. Get equality right at the language level, and the problem goes away.
Although FSet's compare generic function isn't just for equality — it also defines an ordering that is used by the binary trees — I thought it would probably turn out to be the case that a single global ordering, implemented as a generic function and therefore extensible, would be fine the vast majority of the time. I think experience has borne this out. And just as you can mix types in Lisp lists — say, numbers and symbols — without further thought, so you can have any combination of types in an FSet set, effortlessly. (A project I'm currently working on actually takes considerable advantage of this capability.)
As for supporting equivalent-but-unequal values, this desideratum flows directly from the principle of least astonishment. While it might not be too surprising for a set or map implementation to fail to distinguish the integer 1 from the float 1.0, it certainly would be very surprising, and almost certainly a source of bugs in a compiler that used it, for it to fail to distinguish two uninterned symbols with the same name. (I saw a macro expansion recently that contained two distinct symbols that both printed as #:NEW. It happens.) A compiler using Sycamore for a map on symbols would have to supply a comparison function that accounted for this; it couldn't just compare the package name and symbol name. (You'd have to do something like keep a weak hash table mapping symbols to integers, assigned in the order in which the comparison function encountered them. It's doable, but FSet protects you from this madness.)
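For illustration, here is a sketch of how the global ordering reports these cases at the REPL (assuming FSet is loaded; fset:compare returns one of the symbols :LESS, :GREATER, :EQUAL, or :UNEQUAL):

```lisp
(fset:compare 1 2)    ; => :LESS
(fset:compare 1 1)    ; => :EQUAL
;; 1 and 1.0 are numerically equal but not EQL: equivalent-but-unequal.
(fset:compare 1 1.0)  ; => :UNEQUAL
;; Consequently both can be members of the same set:
(fset:size (fset:set 1 1.0))  ; => 2
```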
Along with those deep semantic design choices, I've spent a lot of time on developing a wide and featureful API for FSet (an effort that's ongoing). FSet has many features that Sycamore lacks, including:
- seqs, a binary-tree sequence implementation that holds arbitrary Lisp objects (Sycamore ropes hold only characters, which is certainly an important special case, but why restrict ourselves?)
- default values for maps and seqs (the value to return when the key is outside the domain is associated with the collection, not supplied at the call site; this turns out to be a significant convenience)
- generic functions that operate on both lists and FSet collections, to shadow the CL builtins
- the powerful map-union and map-intersection operations (I'll blog about these in the future)
- more ways to iterate over the collections (the FSet tutorial has a good summary, about 3/4 of the way down)
- speaking of the tutorial, FSet has lots more documentation
Let me digress slightly to give an example of how FSet makes programming more elegant and convenient. Joe Marshall just put up a blog post comparing Go(lang) with Common Lisp, which is worth a read on its own; I'm just going to grab a code snippet from there to show a little bit of what programming with FSet is like. Here's Joe's code:
(defun collate (items &key (key #'identity) (test #'eql) (merger (merge-adjoin #'eql)) (default nil))
(let ((table (make-hash-table :test test)))
(dolist (item items table)
(let ((k (funcall key item)))
(setf (gethash k table) (funcall merger (gethash k table default) item))))))
(defun merge-adjoin (test)
(lambda (collection item)
(adjoin item collection :test test)))
And here's what I would write using FSet:
(let ((result (map :default (set))))
(dolist (item items result)
(includef (@ result (funcall key item)) item)))
(Well, I would probably move result outside the dolist form to make it clearer what the return value is, but let's go with Joe's stylistic choice here.)
For those who haven't used FSet: the form (map :default (set)) creates a map whose default is the empty set, meaning that lookups on that map will return the empty set if the key is not in the map. This saves the includef form from having to handle that possibility.
My version makes assumptions, it's true, about how you want to collect the items with a given key; it doesn't give you other choices. It could, but what would be the point? It's already using a general set with better time complexity than lists, and saving you from having to write anything like merge-adjoin. The extensible global equivalence relation means you're not going to need to supply a :test either.
I think the FSet-enhanced code is cleaner, more elegant, and therefore clearer than the plain-CL version. Don't you agree? Maybe you wouldn't say it's a huge improvement, okay, but it's a small example; in a larger codebase, I would argue, these small improvements add up.
* * * * *
To summarize: if you just want a library you can call in a few places for specific purposes, Sycamore might work better for you (but think hard if you're writing a comparator for symbols). FSet can certainly be used that way, but it can be much more. If you want to see one way in which Common Lisp can be made into a better language, without giving up anything that we love about it, I urge you to give FSet a try.
FSet has changed the way I write Lisp programs. — an FSet user
(UPDATE: the magnitude of the performance difference between FSet and Sycamore surprised me, and inspired me
to do some profiling of FSet. It turned out that I could get a 20% speedup on
one micro-benchmark simply by adding some inline
declarations. Mea culpa, mea culpa, mea maxima culpa; I should have
done this years ago. With that change, the generic function overhead
appears to be the only significant cause of the remaining ~20%
performance difference. I tried creating a Sycamore set using a thin wrapper around fset:compare, and the resulting performance was very similar to that of FSet with its new inlines.)
Joe Marshall — Lisp vs. golang
@2024-10-17 02:17 · 14 days ago
It's no secret that I'm an aficionado of Lisp. It's my go-to language, especially when I don't know what I'm doing. I call it research and prototyping, but it's really just playing around until something works.
We had a need for some auditing of some of our databases at work. They ought to agree with each other and with what GitHub and CircleCI think. It took a couple of weeks part time to prototype a solution in Common Lisp. It showed that the databases were in 99% agreement and found the few points of disagreement and anomalies that we ought to fix or look out for.
I want to integrate this information into a dashboard on one of our tools. I prototyped this by spinning up a Common Lisp microservice that returns the information in JSON format.
But management prefers that new services are written in golang. It would be easier for me to rewrite the service in golang than to try to persuade others to use Common Lisp. It also gives me the opportunity to compare the two languages head to head on a real world problem.
No, this is not a fair comparison. When I wrote the Lisp code I was exploring the problem space and prototyping. I'm much more experienced with Lisp than with golang. The golang version has the advantage that I know what I want to do and how to do it. In theory, I can just translate the Common Lisp code into golang. But then again, this is a “second system” which is not a prototype and has slightly larger scope and fuller requirements. So this cannot be a true head to head comparison.
The first point of comparison is macros (or lack thereof). I
generally don't use a lot of macros in Common Lisp, but they come in
handy when I do use them. One macro I wrote is called audit-step,
which you can wrap around any expression, and it prints out a message
before and after the expression is evaluated. The steps are numbered
in sequence, and nested steps get nested numbers (like step 2.3.1).
If you wrap the major function bodies with this macro, you get a nice
trace of the call sequence in the log.
Golang doesn't have macros, but it has first class functions. It's easy enough to write a function that takes a function as an argument and wraps it to output the trace messages. In fact, the macro version in Common Lisp just rewrites the form into such a function call. But the macro version hides a level of indentation and a lambda. In golang, my major functions all start with
func MajorFunction (args) int {
    return AuditStep("MajorFunction", "aux message", func() int {
        // body of MajorFunction
        // Actual code goes here.
    })
}
The bodies of all my major functions are indented by 16 spaces, which is a little much.
I like higher order functions. I can write one higher order
function and parameterize it with functions that handle the specific
cases. In my auditing code, one such workhorse function is called
collate. It takes a list of objects and creates a table that maps
values to all objects in the list that contain that value. To give an
example, imagine you have a list of objects that all have a field
called foo. The foo field is a string. The collate function can
return a table that maps strings to all objects that have that string
in the foo field.
collate is very general. It takes a list of objects and four keyword
arguments. The :key argument is a function that extracts the value to
collate on. The :test argument is a function that compares two keys
(it defaults to eql if not specified). The :merger argument is a
function to add the mapped object to its appropriate collection in the
table (it defaults to adjoin). The :default argument specifies the
initial value of a collection in the table (it defaults to nil).
The :merger function is the most interesting. It takes the current
value of the table at a key and the object, and it returns the new
value of the table at that key. The default merger function is
adjoin, which adds the object to the collection at the key if it is
not already there. But you can specify a different merger function.
For example, if you want to count the number of objects at each key,
you can specify a merger function that increments a counter.
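Such a counting merger might look like this (a sketch; merge-count and the foo accessor are names invented here, and collate is the function defined further down):

```lisp
;; The merger receives the current table value and the new object;
;; here we ignore the object and just bump the count.
(defun merge-count (count item)
  (declare (ignore item))
  (1+ count))

;; Hypothetical usage: count records by their FOO field.
;; (collate records :key #'foo :merger #'merge-count :default 0)
```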
The functional arguments to the collate function are often the
results of other higher order functions. For example, the :key
argument is often the result of composing selector functions. The
:merger argument is often the result of composing a binary merge
function with a unary transformer function. The transformer function
is often the result of composing a number of primitive selectors and
transformers.
In Common Lisp, it is quite easy to write these higher order
functions. We can compose two unary functions with the compose2
function:
(defun compose2 (f g)
  (lambda (x) (funcall f (funcall g x))))
and then compose as many functions as we like by fold-left of
compose2 starting with the identity function:
(defun compose (&rest fs)
  (fold-left #'compose2 #'identity fs))
We can compose a binary function with a unary function in three ways: we can pipe the output of the binary function into the unary function, or we can pipe the output of the unary function into one or the other of the inputs of the binary function.
(defun binary-compose-output (f g)
  (lambda (x y) (funcall f (funcall g x y))))

(defun binary-compose-left (f g)
  (lambda (x y) (funcall f (funcall g x) y)))

(defun binary-compose-right (f g)
  (lambda (x y) (funcall f x (funcall g y))))
The collate function can now assume that a lot of the work is done by
the :key and :merger functions that are passed in. It simply builds a
hash table and fills it:
(defun collate (items &key (key #'identity) (test #'eql)
                           (merger (merge-adjoin #'eql)) (default nil))
  (let ((table (make-hash-table :test test)))
    (dolist (item items table)
      (let ((k (funcall key item)))
        (setf (gethash k table)
              (funcall merger (gethash k table default) item))))))

(defun merge-adjoin (test)
  (lambda (collection item)
    (adjoin item collection :test test)))
So suppose, for example, that we have a list of records. Each record
is a three element list. The third element is a struct that contains
a string. We want a table mapping strings to the two element lists
you get when you strip out the struct. This is easily done with
collate:
(collate records
         :key (compose #'get-string #'third)
         :test #'equal  ; or #'string= if you prefer
         :merger (binary-compose-right (merge-adjoin #'equal) #'butlast))
The audit code reads lists of records from the database and from
GitHub and from CircleCI and uses collate to build hash tables we can
use to quickly walk and validate the data.
Translating this into golang isn't quite so easy. Golang has first
class functions, true, but golang is a statically typed language.
This causes two problems. First, the signature of the higher order
functions includes the types of the arguments and the return value.
This means you cannot just slap on the lambda symbol; you have to
annotate each argument and the return value. This is far more
verbose. Second, higher order functions map onto parameterized
(generic) types. Generic type systems come with their own little
constraint language so that the computer can figure out what concrete
types can correctly match the generic types. This makes higher order
functions fairly unwieldy.
Consider compose2. The functions f and g each have an input and
output type, but the output type of g is the input type of f, so only
three types are involved:
func Compose2[T any, U any, V any](f func(U) V, g func(T) U) func(T) V {
    return func(x T) V {
        return f(g(x))
    }
}
If we want to compose three functions, we can write this:
func Compose3[T any, U any, V any, W any](f func(V) W, g func(U) V, h func(T) U) func(T) W {
    return func(x T) W {
        return f(g(h(x)))
    }
}
The generic type specifiers take up as much space as the code itself.
I don't see a way to write an n-ary compose function. It would have to be dynamically parameterized by the intermediate types of all the functions it was composing.
For the collate function, we can write this:
func Collate[R any, K comparable, V any](
    list *Cons[R],
    keyfunc func(R) K,
    merger func(V, R) V,
    defaultValue V) map[K]V {
    answer := make(map[K]V)
    for list != nil {
        key := keyfunc(list.Car)
        probe, ok := answer[key]
        if !ok {
            probe = defaultValue
        }
        answer[key] = merger(probe, list.Car)
        list = list.Cdr
    }
    return answer
}
We have three types to parameterize over: the type of the list elements (i.e. the record type) R, the type of the key K, and the type of the value V. The key type needs to be constrained to be a valid key in a map, so we use the comparable constraint. Now that we have the types, we can annotate the arguments and return value. The list we are collating is a list of R elements. The key function takes an R and returns a K. The merger takes an existing value of type V and the record of type R and returns a new value of type V.
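To make the three type parameters concrete, here is a self-contained usage sketch. The Cons type and FromSlice helper are my own reconstructions (the post only shows Car and Cdr being used); the example collates words by first letter, so R=string, K=byte, V=int are all inferred:

```go
package main

import "fmt"

// Cons is a singly linked list: Car holds the element,
// Cdr points at the rest of the list (nil marks the end).
type Cons[R any] struct {
	Car R
	Cdr *Cons[R]
}

// FromSlice is a small helper (not from the post) to build a Cons list.
func FromSlice[R any](xs []R) *Cons[R] {
	var list *Cons[R]
	for i := len(xs) - 1; i >= 0; i-- {
		list = &Cons[R]{Car: xs[i], Cdr: list}
	}
	return list
}

// Collate as shown earlier in the post.
func Collate[R any, K comparable, V any](
	list *Cons[R],
	keyfunc func(R) K,
	merger func(V, R) V,
	defaultValue V,
) map[K]V {
	answer := make(map[K]V)
	for list != nil {
		key := keyfunc(list.Car)
		probe, ok := answer[key]
		if !ok {
			probe = defaultValue
		}
		answer[key] = merger(probe, list.Car)
		list = list.Cdr
	}
	return answer
}

func main() {
	words := FromSlice([]string{"apple", "avocado", "banana", "cherry"})
	counts := Collate(
		words,
		func(w string) byte { return w[0] },        // key: first letter
		func(n int, _ string) int { return n + 1 }, // merger: count records
		0,                                          // defaultValue for a fresh key
	)
	fmt.Println(counts['a'], counts['b'], counts['c']) // 2 1 1
}
```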
The magic of type inference means that I do not have to annotate all the variables in the body of the function, but the compiler cannot read my mind and infer the types of the arguments and return value. Golang forces you to think about the types of arguments and return values at every step of the way. Yes, one should be aware of what types are being passed around, but it is a burden to have to formally specify them at every step. I could write the Common Lisp code without worrying too much about types. Of course the types would have to be consistent at runtime, but I could write the code just by considering what was connected to what. In golang, the types are in your face at every function definition. You not only have to think about what is connected to what, you also have to think about what sort of thing is passed through the connection.
I'm sure that many would argue that type safety is worth the trouble of annotation. I don't want to argue that it isn't. But the type system is cumbersome, awkward, and unwieldy, especially when you are trying to write higher-order functions.
It is taking me longer to write the golang version of the audit service than it did to write the Common Lisp version. There are several reasons. First, I am more experienced with Common Lisp than golang, so the right Common Lisp idioms just come to mind, whereas I have to look up many of the golang idioms. Second, the golang code is trying to do more than the Common Lisp code. But third, golang itself introduces more friction than Common Lisp. Programs have to do more than express the algorithm; they have to satisfy the type system.
There are more points of comparison between the two languages. When I get frustrated enough, I'll probably write another post.
Quicklisp news — October 2024 Quicklisp dist update now available
@2024-10-15 20:16 · 15 days ago
New projects:
- adp-github — ADP extension to generate github markdown files. — MIT
- adp-plain — Add Documentation, Please... using plain text. An extension of ADP to generate files with barely additional features. — MIT
- allioli — Alliolification — MIT
- alternate-asdf-system-connections — Allows for ASDF system to be connected so that auto-loading may occur. This is a fork of asdf-system-connections and incorporates a load-system-driven mechanism for loading dependencies and also loads the dependencies of the connections. — MIT
- cbor — CBOR encoder/decoder — MIT
- charje.documentation — Documentation is an opinionated yet customizable docstring parsing library. — AGPL V3 or any later version
- chipi — House automation bus in Common Lisp — Apache-2
- cl-aseprite — Aseprite file format parser — GPLv3
- cl-astar — A heavily optimized yet flexible A* pathfinding algorithm implementation — MIT
- cl-ceigen-lite — A Common Lisp wrapper around CEIGEN-LITE - which is itself a C wrapper around the C++ Eigen library. — MIT
- cl-cf — Computations using continued fractions — GPL-3
- cl-concord — CONCORD implementation based on Common Lisp — LGPL
- cl-duckdb — CFFI wrapper around the DuckDB C API — MIT License
- cl-fastcgi — FastCGI wrapper for Common Lisp — BSD License
- cl-flx — Rewrite emacs-flx in Common Lisp — MIT
- cl-frugal-uuid — Common Lisp UUID library with zero dependencies — MIT License
- cl-gog-galaxy — A wrapper for the GOG Galaxy SDK — zlib
- cl-lc — List comprehensions — MIT
- cl-naive-ptrees — Functions to make it easier to work with plist(s) and plist trees. Works with plist(s) pairs as units and not as individual list items. — MIT
- cl-qoa — An implementation of the Quite Okay Audio format. — zlib
- cl-reddit — Reddit client api library — BSD
- cl-resvg — An up-to-date bindings library for the resvg SVG rendering library — zlib
- cl-trivial-clock — Common Lisp library to get accurate wall-clock times on multiple platforms — MIT License
- clack-cors — A Clack middleware to set CORS related HTTP headers. — Unlicense
- clack-prometheus — Clack middleware to serve stats in Prometheus format. — Unlicense
- clith — Common Lisp wITH macro. A general WITH macro. — MIT
- clj-arrows — Implements Clojure-styled threading/transformation macros. — MIT
- clos-encounters — A collection of OOP patterns benefiting from the CLOS MOP. — Unlicense
- coalton — An efficient, statically typed functional programming language that supercharges Common Lisp. — MIT
- cocoas — A toolkit library to help deal with CoreFoundation, Cocoa, and objc — zlib
- com.danielkeogh.graph — A fast and reliable graph library. — MIT
- fast-mpsc-queue — Multi-Producer Single-Consumer queue implementation. — MIT
- file-finder — File finder. Enable rapid file search, inspection and manipulation. — GPL3+
- golden-utils — A utility library. — MIT
- hiccl — HTML generator for Common Lisp — MIT
- hsx — Hypertext S-expression — MIT
- hunchentoot-stuck-connection-monitor — Monitors hunchentoot connections and logs the connections stuck in the same state for a long time (due to slow or inactive clients and network stream timeouts that hunchentoot tries to utilize not working properly). Offers an option to shutdown the stuck connections sockets manually or automatically, thus unblocking the connection threads and preventing thread and socket leak. See https://github.com/edicl/hunchentoot/issues/189 — BSD-2-Clause
- incless — A portable and extensible Common Lisp printer implementation (core) — BSD
- inravina — A portable and extensible Common Lisp pretty printer. — MIT
- invistra — A portable and extensible Common Lisp FORMAT implementation — BSD
- knx-conn — KNXnet/IP implementation in Common Lisp — GNU GPL, version 3
- machine-state — Retrieve machine state information about CPU time, memory usage, etc. — zlib
- myweb — simple web server written in common lisp for educational reasons — LGPLv3
- noisy — Perlin noise for arbitrary numbers of dimensions. — MIT
- nontrivial-gray-streams — A compatibility layer for Gray streams including extensions — MIT
- open-with — Open a file in a suitable external program — zlib
- openai-openapi-client — Openai API client — AGPLv3+
- openrpc — CI for Common Lisp OpenRPC library. — BSD
- parse-number-range — Parses LOOP's convenient "for-as-arithmetic" syntax into 5 simple values: from, to, limit-kind (:inclusive, :exclusive or nil if unbounded), by (step) and direction (+ or -)). Further related utilities are provided. Intended for easy implementation of analogous functionality in other constructs. — Public Domain
- precise-time — Precise time measurements — zlib
- pregexp — Portable regular expressions for Common Lisp — MIT-like
- progressons — Display a progress bar on one line. — MIT
- quaviver — A portable and extensible floating point string library — MIT
- quilc — A CLI front-end for the Quil compiler — Apache License 2.0 (See LICENSE.txt)
- qvm — An implementation of the Quantum Abstract Machine. — Apache License 2.0 (See LICENSE.txt)
- random-sampling — Functions to generate random samples with various distributions — zlib
- rs-dlx — Knuth's Algorithm X with dancing links. — Modified BSD License
- scrapycl — The web scraping framework for writing crawlers in Common Lisp. — Unlicense
- smoothers — Statistical methods to create approximating functions that attempt to capture important patterns in the data, while leaving out noise or other fine-scale structures/rapid phenomena. — MS-PL
- trivial-adjust-simple-array — A tiny utility to change array size ensuring it is simple. — MIT
- trivial-system-loader — A system installation/loading abstraction for Common Lisp — MIT
- trivial-toplevel-commands — Trivial Toplevel Commands allows to define toplevel commands available on most implementations in a portable fashion. — BSD-3 Clause
- trivial-toplevel-prompt — Portability library to customize REPL prompts. — BSD-3 Clause
- utf8-input-stream — A UTF-8 string input stream over a binary stream for Common Lisp — MIT
- whereiseveryone.command-line-args — Automatically create a command-line-argument parser for a given Common Lisp function definition. — AGPL v3 or any later version
Updated projects: 3b-bmfont, 3bgl-shader, 3bmd, 3d-math, 3d-spaces, 40ants-asdf-system, 40ants-slynk, access, acclimation, action-list, adhoc, adopt, adp, agnostic-lizard, alexandria, alexandria-plus, anatevka, anypool, april, arc-compat, architecture.builder-protocol, array-utils, arrow-macros, assoc-utils, async-process, atomics, auto-restart, aws-sdk-lisp, babel, bdef, bike, binary-structures, binding-arrows, birch, blackbird, bordeaux-threads, calm, carrier, caveman, ccldoc, cephes.cl, cepl, cerberus, cffi, cffi-object, cffi-ops, chanl, chunga, ci, ci-utils, ciao, cl-6502, cl-algebraic-data-type, cl-all, cl-ansi-term, cl-async, cl-atelier, cl-autowrap, cl-base32, cl-bmas, cl-bmp, cl-bnf, cl-brewer, cl-buchberger, cl-cmark, cl-collider, cl-colors2, cl-confidence, cl-containers, cl-cookie, cl-csv, cl-custom-hash-table, cl-cxx-jit, cl-data-structures, cl-dbi, cl-digraph, cl-dot, cl-enchant, cl-environments, cl-fast-ecs, cl-fbx, cl-fluent-logger, cl-form-types, cl-forms, cl-freetype2, cl-gamepad, cl-github-v3, cl-gltf, cl-gobject-introspection, cl-graph, cl-grip, cl-gserver, cl-hamcrest, cl-hash-util, cl-html-readme, cl-i18n, cl-info, cl-ini, cl-ipfs-api2, cl-kanren, cl-lib-helper, cl-liballegro, cl-liballegro-nuklear, cl-log, cl-markless, cl-marshal, cl-migratum, cl-mixed, cl-modio, cl-mount-info, cl-mpg123, cl-mssql, cl-mustache, cl-mysql, cl-neovim, cl-netpbm, cl-oju, cl-opengl, cl-opensearch-query-builder, cl-opus, cl-patterns, cl-plus-ssl-osx-fix, cl-ppcre, cl-project, cl-protobufs, cl-pslib, cl-pslib-barcode, cl-rashell, cl-readline, cl-sat.minisat, cl-sdl2-image, cl-sdl2-mixer, cl-sdl2-ttf, cl-sendgrid, cl-sentry-client, cl-skkserv, cl-smtp, cl-ssh-keys, cl-steamworks, cl-str, cl-svg, cl-telegram-bot, cl-threadpool, cl-tiled, cl-torrents, cl-tqdm, cl-transducers, cl-transit, cl-unicode, cl-unification, cl-unix-sockets, cl-utils, cl-vectors, cl-vorbis, cl-wavefront, cl-webdriver-client, cl-webkit, cl-webmachine, cl-who, clack, clack-pretend, clad, classimp, 
clast, clath, clavier, clazy, clerk, clgplot, climacs, clingon, clip, clj-con, clj-re, clobber, clog, clog-ace, clog-collection, clog-plotly, clog-terminal, clohost, closer-mop, clss, cluffer, clunit2, clx, cmd, codata-recommended-values, codex, coleslaw, collectors, colored, com-on, common-lisp-jupyter, commondoc-markdown, compiler-macro-notes, conduit-packages, consfigurator, contextl, croatoan, ctype, cytoscape-clj, damn-fast-priority-queue, dartscluuid, data-frame, data-lens, datafly, dbus, decompress, defenum, definer, definitions, deflate, defmain, deploy, depot, deptree, dexador, dissect, djula, dns-client, doc, docs-builder, dsm, dufy, easter-gauss, easy-audio, easy-macros, easy-routes, eclector, equals, erjoalgo-webutil, erudite, esrap, event-emitter, external-program, external-symbol-not-found, fare-csv, fare-scripts, fast-http, fast-websocket, file-attributes, file-notify, file-select, filesystem-utils, fiveam, fiveam-matchers, flexi-streams, float-features, flow, fn, fset, functional-trees, fuzzy-dates, gadgets, generic-cl, github-api-cl, glfw, glsl-toolkit, harmony, hashtrie, helambdap, http2, hunchentoot, imago, in-nomine, inferior-shell, introspect-environment, ironclad, jose, js, json-mop, jsonrpc, jzon, khazern, lack, lass, lemmy-api, letv, lichat-protocol, lichat-tcp-client, linear-programming, lisp-binary, lisp-chat, lisp-critic, lisp-pay, lisp-stat, lispcord, lla, local-time, log4cl-extras, logging, lru-cache, magicl, maiden, maidenhead, manifolds, math, mcclim, memory-regions, messagebox, method-combination-utilities, mgl-pax, misc-extensions, mito, mk-defsystem, mmap, mnas-package, mnas-string, moira, multiposter, mutility, mutils, named-closure, ndebug, neural-classifier, new-op, nibbles, nibbles-streams, ningle, nodgui, north, numerical-utilities, nytpu.lisp-utils, omglib, ook, open-location-code, openapi-generator, orizuru-orm, overlord, papyrus, parachute, parse-number, pathname-utils, petalisp, phos, picl, plot, plump, plump-sexp, 
pngload, policy-cond, polymorphic-functions, postmodern, ppath, prometheus-gc, psychiq, purgatory, py4cl, py4cl2, py4cl2-cffi, qlot, qoi, query-fs, quick-patch, quickhull, quri, random-state, reblocks, reblocks-auth, reblocks-file-server, reblocks-lass, reblocks-navigation-widget, reblocks-parenscript, reblocks-prometheus, reblocks-typeahead, reblocks-ui, reblocks-websocket, rove, s-dot2, sandalphon.lambda-list, sb-fastcgi, sc-extensions, sel, select, serapeum, shasht, shop3, si-kanren, sketch, slime, slite, sly, snooze, spinneret, staple, static-vectors, statistics, stepster, stmx, stripe, swank-crew, swank-protocol, sxql, symath, system-locale, taglib, teddy, ten, testiere, tfeb-lisp-hax, tfm, tiny-routes, tooter, trivia, trivial-arguments, trivial-clipboard, trivial-file-size, trivial-gray-streams, trivial-main-thread, trivial-octet-streams, trivial-package-locks, trivial-package-manager, trivial-sanitize, trivial-shell, type-templates, typo, uax-15, uiop, usocket, vellum, vellum-binary, vellum-csv, vellum-postmodern, verbose, vernacular, vom, websocket-driver, winhttp, with-branching, with-contexts, woo, xhtmlambda, xml-emitter, yason, zippy, zpb-ttf.
Removed projects: abstract-arrays, ahungry-fleece, cl-cheshire-cat, cl-darksky, cl-epoch, cl-naive-store, convolution-kernel, dense-arrays, extensible-compound-types, extensible-optimizing-coerce, fast-generic-functions, flac-metadata, freebsd-ffi, listoflist, luckless, one-more-re-nightmare, postmodern-localtime, stumpwm-dynamic-float, stumpwm-sndioctl, unicly.
To get this update, use:
(ql:update-dist "quicklisp")
Sorry this update took so long. My goal is to resume monthly releases.
Enjoy!
Patrick Stein — Ray Tracing In One Weekend (in Lisp, and n-dimensions)
@2024-09-27 02:37 · 34 days ago
Earlier this year, I started working through the online book Ray Tracing In One Weekend (Book 1). I have been following along with it in Common Lisp, and I have been extending it all from 3-dimensional to n-dimensional.
I reproduced 4-dimensional versions of all of the book images which you can see on my weekend-raytracer github page.
Here is the final image. This is a 250-samples-per-pixel, 640x360x10 image plane of three large hyperspheres (one mirrored, one diffuse, one glass) atop a very large, diffuse hypersphere. Also atop this very large hypersphere are a bunch of smaller hyperspheres of varying colors and materials. The image is rendered with some defocus-blur.
Final image of 4-dimensional scene
Caveat: This depends on a patched version of the policy-cond library that is not in the current Quicklisp distribution but should be in the next.
Eugene Zaikonnikov — Breaking Kernighan's Law
@2024-09-15 15:00 · 46 days ago
"Debugging is twice as hard as writing the code in the first place. Therefore, if you write the code as cleverly as possible, you are, by definition, not smart enough to debug it." — Brian W. Kernighan
I'm a sucker for sage advice as much as anyone else, and Kernighan is certainly right on the money in the epigraph. Alas, there comes a time in a programmer's career when you just end up there despite the warning. It could be that you were indeed too clever for your own good, or maybe the code isn't quite yours anymore after each of your colleagues' takes on it over the years. Or, just sometimes, the problem is indeed so hard that it strains your capacity as a coder.
It would usually start with a reasonable idea made into first-iteration code. The solution looks fundamentally sound, but then, as you explore the problem space further, it begins to seep nuance, either as a manifestation of some real-world complexity or of your own lack of foresight. When I run into this, my first instinct is to instrument the code. If the problem is formidable, you have got to respect it: flailing around blindly modifying things or, ugh, doing a rewrite at this stage is almost guaranteed to be a waste of time. It helps to find a promising spot, chisel at it, gain a foothold in the problem, and repeat until you crack it. Comfortable debugging tools can really help to erode the original Kernighan coefficient from 2 to maybe 1.6 or 1.4, where you still have a chance.
Lisp users are fortunate with the options for interactive debugging, and one facility I often reach for is the plain BREAK. It's easy enough to wrap it in a conditional for the particular matches you want to debug. However, sometimes you want it to trigger only after a particular sequence of events across different positions in the code has taken place. While still doable, this quickly becomes cumbersome, and the state machine starts to occupy too much mental space, which is already scarce. So one day, partly as a displacement activity from being intimidated by a Really Hard Problem, I wrote down my debugging patterns as a handful of macros.
Enter BRAKE. Its features reflect my personal preferences, so they are not necessarily your cup of tea, but it could be a starting point for exploring in this direction. Things it can do:
- act as a simple BREAK with no arguments (duh)
- wrap an s-expression, passing through its values upon continuing
- trigger sequentially based on the specified position for a common tag
- allow for marks that don't trigger the break but mark the position as reached
- provide conditional versions for the expressions above
- print traces of tagged breakpoints/marks
If you compile functions with debug on, you should hopefully be able to see the wrapped sexpr's result values.
(use-package '(brake))
(defun fizzbuzz ()
(loop for n from 100 downto 0
for fizz = (zerop (mod n 3))
for buzz = (zerop (mod n 5)) do
(format t "~a "
(if (not (or fizz buzz))
(format nil "~d" n)
(brake-when (= n 0)
(concatenate 'string
(if fizz "Fizz" "")
(if buzz "Buzz" "")))))))
These macros try to detect the common cases of tagged sequences being either aborted via break or completed to the last step, resetting them afterwards to the initial state. However, it is possible for a sequence to end up "abandoned", which can be cleaned up with a manual command.
Say, in the example below, we want to break when the first two branches have been triggered in a specific order. The sequence of 1, 3, 4 will reinitialize once state 4 is reached, allowing it to trigger continuously. At the same time, if we blow our stack, it should reset to the initial state when aborting.
(defun ack (m n)
(cond ((zerop m) (mark :ack 3 (1+ n)))
((zerop n) (mark :ack 1 (ack (1- m) 1)))
(t (brake :ack 4 (ack (1- m) (ack m (1- n)))))))
In addition there are a few utility functions to report on the state of brakepoints, enable or disable brakes based on tags, and turn tracing on or off. Tracing isn't meant to replace the semantics of TRACE but to provide a souped-up version of the debugging-by-print-statements everyone loves.
CL-USER> (report-brakes)
Tag :M is DISABLED, traced, with 3 defined steps, current state is initial
Tag :F is DISABLED with 2 defined steps, current state is 0
Tag :ACK is ENABLED with 3 defined steps, current state is initial
Disabling breakpoints without recompilation is really handy and something I find myself using all the time. The ability to wrap a sexpr was often sorely missed when using BREAK in constructs without an implicit body.
Sequencing across threads is sketchy, as the code isn't guarded, but in many cases it can work, and its appeal for debugging races is clear. One of these days I hope to make it more robust while avoiding potential deadlocks, but it isn't there yet. Where it already shines, though, is in debugging complex iterations, mutually recursive functions, and state machines.
Patrick Stein — I Broke It
@2024-09-13 15:46 · 48 days ago
Unfortunately, I managed to delete my WordPress database at a time when the most recent backup I had was from 11 years ago.
So… I will hopefully get some newer information uploaded again sometime.
But, most of my content is gone.
Scott Burson pointed out that The Wayback Machine has my content: http://web.archive.org/web/20240901104727/https://nklein.com/
And, Daniel Brooks pointed out that NewsBlur has the stuff from my RSS feeds: https://www.newsblur.com/site/101509/nklein-software
I will eventually get to moving some of that back here properly.
Yukari Hafner — Porting SBCL to the Nintendo Switch
@2024-09-13 09:03 · 48 days ago
For the past two years Charles Zhang and I have been working on getting my game engine, Trial, running on the Nintendo Switch. The primary challenge in doing this is porting the underlying Common Lisp runtime to work on this platform. We knew going into this that it was going to be hard, but it has proven to be quite a bit more tricky than expected. I'd like to outline some of the challenges of the platform here for posterity, though please also understand that due to Nintendo's NDA I can't go into too much detail.
Current Status
I want to start off with where we are at, at the time of writing this article. We managed to port the runtime and compiler to the point where we can compile and execute arbitrary lisp code directly on the Switch. We can also interface with shared libraries, and I've ported a variety of operating system portability libraries that Trial needs to work on the Switch as well.
The above photo shows Trial's REPL example running on the Switch devkit. Trial is setting up the OpenGL context, managing input, allocating shaders, all that good stuff, to get the text shown on screen; the Switch does not offer a terminal of its own.
Unfortunately it also crashes shortly after, as SBCL tries to engage its garbage collector. The Switch has some unique constraints in that regard that we haven't managed to work around quite yet. We also can't output any audio yet, since the C callback mechanism is also broken. And of course, there are potentially a lot of other issues yet to rear their heads, especially with regard to performance.
Whatever the case, we've gotten pretty far! This work hasn't been free, however. While I'm fine not paying myself a fair salary, I can't in good conscience have Charles invest so much of his valuable time into this for nothing. So I've been paying him on a monthly basis for all the work he's been doing on this port. Up until now that has cost me ~17'000 USD. As you may or may not know, I'm self-employed. All of my income stems from sales of Kandria and donations from generous supporters on Patreon, GitHub, and Ko-Fi. On a good month this totals about 1'200 USD. On a bad month this totals to about 600 USD. That would be hard to get by in a cheap country, and it's practically impossible in Zürich, Switzerland.
I manage to get by by living with my parents and being relatively frugal with my own personal expenses. Everything I actually earn and more goes back into hiring people like Charles to do cool stuff. Now, I'm ostensibly a game developer by trade, and I am working on a currently unannounced project. Games are very expensive to produce, and I do not have enough reserves to bankroll it anymore. As such, it has become very difficult to decide what to spend my limited resources on, and especially a project like this is much more likely to be axed given that I doubt Kandria sales on the Switch would even recoup the porting costs.
To get to the point: if you think this is a cool project and you would like to help us make the last few hurdles for it to be completed, please consider supporting me on Patreon, GitHub, or Ko-Fi. On Patreon you get news for every new library I release (usually at least one a month) and an exclusive monthly roundup of the current development progress of the unannounced game. Thanks!
An Overview
First, here's what's publicly known about the Switch's environment: user code runs on an ARM64 Cortex-A57 chip with four cores and 4 GB of RAM, on top of a proprietary microkernel operating system that was initially developed for the Nintendo 3DS.
SBCL already has an ARM64 Linux port, so the code generation side is already solved. Kandria also easily fits into 4 GB of RAM, so there are no issues there either. The difficulties in the port reside entirely in interfacing with the surrounding proprietary operating system of the Switch. The system has some constraints that usual PC operating systems do not have, which are especially problematic for something like Lisp, as you'll see in the next section.
Fortunately for us, and this is the reason I even considered a port in the first place, the Switch is also the only console to support the OpenGL graphics library for rendering, which Trial is based upon. Porting Trial itself to another graphics library would be a gigantic effort that I don't intend on undertaking any time soon. The Xbox only supports DirectX, though supposedly there's an OpenGL -> DirectX layer that Microsoft developed, so that might be possible. The Playstation on the other hand apparently still sports a completely proprietary graphics API, so I don't even want to think about porting to that platform.
Anyway, in order to get started developing I had to first get access. I was lucky enough that Nintendo of Europe is fairly accommodating to indies and did grant my request. I then had to buy a devkit, which costs somewhere around 400 USD. The devkit and its SDK only run on Windows, which isn't surprising, but will also be a relevant headache later.
Before we can get on to the difficulties in building SBCL for the Switch, let's first take a look at how SBCL is normally built on a PC.
Building SBCL
SBCL is primarily written in Lisp itself. There is a small C runtime as well, which is compiled with an ordinary C compiler, but before that can happen, there are some things it needs to know about the operating system environment it compiles for. The runtime doesn't have a compiler of its own, so it can't compile any Lisp code. In order to get the whole process kicked off, SBCL requires another Lisp implementation to bootstrap with, ideally another version of itself.
The build then proceeds in roughly five phases:
- build-config — This step just gathers whatever build configuration options you want for your target and spits them out in a readable format for the rest of the build process.
- make-host-1 — Now we build the cross-compiler with the host Lisp compiler, and at the same time emit C header files describing Lisp object layouts in memory as C structs for the next step.
- make-target-1 — Next we run the target C compiler to create the C runtime. As mentioned, this uses a standard C compiler, which can itself be a cross-compiler. The C runtime includes the garbage collector and other glue to the operating system environment. This step also produces some constants the target Lisp compiler and runtime need to know about, by using the C compiler to read out relevant operating system headers.
- make-host-2 — With the target runtime built, we build the target Lisp system (compiler and standard library) using the Lisp cross-compiler built by the host Lisp compiler in make-host-1. This step produces a "cold core" that the runtime can jump into, and can be done purely on the host machine. This cold core is not complete: it needs to be executed on the target machine with the target runtime to finish bootstrapping, notably to initialize the object system, which requires runtime compilation. This is done in make-target-2.
- make-target-2 — The cold core produced in the last step is loaded into the target runtime, which finishes the bootstrapping procedure to compile and load the rest of the Lisp system. After the Lisp system is loaded into memory, the memory is dumped out into a "warm core", which can be loaded back into memory in a new process with the target runtime. From this point on, you can load new code and dump new images at will.
Notable here is the need to run Lisp code on the target machine itself. We can't cross-compile "purely" on the host, not in the least because user Lisp code cannot be compiled without also being run like batch-compiled C code can, and when it is run it assumes that it is in the target environment. So we really don't have much of a choice in the matter.
In order to deploy an application, we proceed similarly to make-target-2: we compile in Lisp code incrementally, and when we have everything we need, we dump out a core with the runtime attached to it. This results in a single binary with a data blob attached.
When the SBCL runtime starts up it looks for a core blob, maps it into memory, marks pages with code in them as executable, and then jumps to the entry function the user designated. This all is a problem for the Switch.
Building for the Switch
The Switch is not a PC environment. It doesn't have a shell, command line, or compiler suite on it to run the build as we usually do. Worse still, its operating system does not allow you to create executable pages, so even if we could run the compilation steps on there we couldn't incrementally compile anything on it like we usually do for Lisp code.
But all is not lost. Most of the code is not platform dependent and can simply be compiled for ARM64 as usual. All we need to do is make sure that anything that touches the surrounding environment in some way knows that we're actually trying to compile for the Switch, then we can use another ARM64 environment like Linux to create our implementation.
With that in mind, here's what our steps look like:
- build-config — We run this on some host system, using a special flag to indicate that we're building for the Switch. We also enable the fasteval contrib. We need fasteval to step in for any place where we would usually invoke the compiler at runtime, since we absolutely cannot do that on the Switch.
- make-host-1 — This step doesn't change. We just get different headers that prep for the Switch platform.
- make-target-1 — Now we use the C compiler the Nintendo SDK provides for us, which can cross-compile for the Switch. Unfortunately the OS is not POSIX compliant, so we had to create a custom runtime target in SBCL that stubs out and papers over the operating system environment differences that we care about, like dynamic linking, mapping pages, and so on.

Here is where things get a bit weird. We are now moving on to compiling Lisp code, and we want to do so on a Linux host system. So we have to run:

- build-config (2) — We now create a normal ARM64 Linux system with the same feature set as for the Switch. This involves the usual steps as before, though with a special flag to inform some parts of the Lisp process that we're going to ultimately target the Switch.
- make-host-1 (2)
- make-target-1 (2)
- make-host-2
- make-target-2
With all of this done we now have a slightly special SBCL build for Linux ARM64. We can now move on to compiling user code.
For user code we now perform some tricks to make it think it's running on the Switch, rather than on Linux. In particular we modify *features* to include :nx (the Switch code name) and not :linux, :unix, or :posix. Once that is set up and ASDF has been neutered, we can compile our program (like Trial) "as usual" and at the end dump out a new core.
We've solved the problem of actually compiling the code, but we still need to figure out how to get the code started on the Switch, since it does not allow us to do the usual core-mapping strategy. As such, attaching the new core to the runtime we made for the Switch won't work.
To make this work, we make use of two relatively unknown features of SBCL: immobile-code, and elfination. Usually when SBCL compiles code at runtime, it sticks it into a page somewhere, and marks that page executable. The code itself however could become unneeded at some point, at which point we'd like to garbage collect it. We can then reclaim the space it took up, and to do so compact the rest of the code around it. The immobile-code feature allows SBCL to take up a different strategy, where code is put into special reserved code pages and remains there. This means it can't be garbage collected, but it instead can take advantage of more traditional operating system support. Typically executables have pre-marked sections that the operating system knows to contain code, so it can take care of the mapping when the program is started, rather than the program doing it on its own like SBCL usually does.
OK, so we can generate code and prevent it from being moved. But we still have a core at the end of our build that we now need to transform into the separate code and data sections needed for a typical executable. This is done with the elfination step.
The elfinator looks at a core and performs assembly rewriting to make the code position-independent (a requirement for Address Space Layout Randomisation), and then tears it out into two separate files, a pure code assembly file, and a pure data payload file.
We can now take those two files and link them together with the runtime that the C compiler produced and get a completed SBCL that runs like any other executable would. So here are the last steps of the build process:
- Run the elfinator to generate the assembly files
- Link the final binary
- Run the Nintendo SDK's authoring tools to bundle metadata, shared libraries, assets, and the application binary into one final package
That's quite an involved build setup. Not to mention that we need at least an ARM64 Linux machine to run most of the build on, as well as either an AMD64 Windows machine (or an AMD64 Linux machine with Wine) to run the Nintendo SDK compiler and authoring tools.
I usually use an AMD64 Linux machine, so there's a total of three machines involved: The AMD64 "driver," the ARM64 build host, and a Windows VM to talk to the devkit with.
I wrote a special build system with all sorts of messed up caching and cross-machine synchronisation logic to automate all of this, which was quite a bit of work to get going, especially since the build should also be drivable from an MSYS2/Windows setup. Lots of fun with path mangling!
So now we have a full Lisp system, including user code, compiling for and being able to run on the Switch. Wow! I've skipped over a lot of the nitty-gritty of getting the build properly aware of which target it's building for, making the elfinator and immobile-code work on ARM64, and porting all of the support libraries like pathname-utils, libmixed, cl-gamepad, etc. Again, most of the details we can't openly talk about due to the NDA. However, we have upstreamed what work we could, and none of the Lisp libraries require a private fork.
It's worth noting though that elfination wasn't initially designed to produce position-independent executable Lisp code, which is usually full of absolute pointers. So we needed to do a lot of work in the SBCL compiler and runtime to support load-time relocation of absolute pointers, and to make sure code objects (which usually contain code constants) no longer embed absolute pointers, since the GC can't modify executable sections. Not even the OS loader is allowed to modify executable sections to relocate absolute pointers. We solved this by moving absolute pointers such as code constants out of the text space into a nearby read-writable space, and rewriting constant references in code to load from this r/w space instead; the loader and the moving GC can then fix up pointers there.
Instead of interfacing directly with the Nintendo SDK, I've opted to create my own C libraries that have a custom interface the Lisp libraries interface with in order to access the operating system functionality it needs. That way I can at least publish the Lisp bits openly, and only keep the small C library private. Anyway, now that we can run stuff we're not done yet. Our system actually needs to keep running, too, and that brings us to
The Garbage Collector
Garbage collection is a huge topic in itself and there's a ton of different techniques to make it work efficiently. The standard GC for SBCL is called "gencgc", a Generational Garbage Collector. Generational meaning it keeps separate "generations" of objects and scans the generations in different frequencies, copying them over to another generation's location to compact the space. None of this is inherently an issue for the Switch, if it weren't for multithreading.
When multiple threads are involved, we can't just move objects around, as another thread could be accessing it at any time. The easiest way to resolve this conflict is to park all threads before engaging garbage collection. So the question becomes: when a thread wants to start garbage collection, how does it get the other threads to park?
On Unix systems a pretty handy trick is used: we can use the signalling mechanism to send a signal to the other threads, which then take that hint to park.
On the Switch we don't have any signal mechanism. In fact, we can't interrupt threads at all. So we instead need to somehow get each thread to figure out that it should park on its own. The typical strategy for this is called "safepoints".
Essentially we modify the compiler a little bit to inject some extra code that checks whether the thread should park or not. This strategy has some issues, namely:
- Adding a check isn't free, so we want to check as little as possible
- If we don't check frequently enough, we are going to stall all the other threads, because GC can't begin until they're all parked
- If we have to inject a lot of instructions for a check, it is going to disrupt CPU cache lines and pipelining optimisations
The current safepoint system in SBCL was written for Windows, which similarly does not have inter-process signal handlers. However, unlike the Switch, it does still have signal handling for the current thread. So the current safepoint implementation was written with this strategy:
Each thread keeps a page around that a safepoint just writes a word to. When GC is engaged, those pages are marked as read-only, so that when the safepoint is hit and the other thread tries to write to the page, a segmentation fault is triggered and the thread can park. This is efficient, since we only need a single instruction to write into the page.
On the Switch we can't use this trick either, so we have to actually insert a more complex check, which can be tricky to get working as intended, as all parallel algorithms tend to be.
Since safepoints aren't necessary on any other platform than Windows, it also hasn't been tested anywhere else, so aside from modifying it for this new platform it's also just unstable. It is apparently quite a big mess in the code base and would ideally be redone from scratch, but hopefully we don't have to go quite that far.
I'd also like to give special mention to the issue that CLOS presents. Usually SBCL defers compilation of the "discriminating function" that is needed to dispatch to methods until the first call of the generic function. This is done because CLOS is highly dynamic and allows adding and removing methods pretty much at any time, and there's usually no good point in time at which the system knows it is complete. Of course, on the Switch we can't invoke the compiler, so we can't really do this. For now our strategy has been to instead rely on the fast evaluator. We stub out the compile function to create a lambda that executes the code via the evaluator instead. This has the advantage of working with any user code that relies on compile as well, though it is obviously much slower to execute than it would be if we could actually compile.
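The stub can be sketched roughly as follows. This is an illustrative approximation using SBCL's public sb-ext:*evaluator-mode* switch, not the port's actual code:

```lisp
;; Illustrative sketch: a COMPILE replacement that returns an
;; interpreted closure instead of invoking the native compiler.
(defun interpreting-compile (name &optional definition)
  (let* ((expr (or definition (fdefinition name)))
         (fn (if (functionp expr)
                 expr
                 (let ((sb-ext:*evaluator-mode* :interpret))
                   (eval expr)))))   ; lambda expression -> interpreted function
    (when name
      (setf (fdefinition name) fn))
    (values (or name fn) nil nil)))  ; same return convention as COMPILE
```

In the real port this kind of wrapper would stand in for cl:compile itself, so that CLOS's deferred discriminating-function generation transparently falls back to the evaluator.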
This neatly brings us to
Future Work
The fasteval trick is mostly a fallback. Ideally I'd like to explore options to freeze as much of CLOS in place as possible right before the final image is dumped and compile as much as possible ahead of time. I'd also like to investigate the block compilation mode that Charles restored some years back more closely.
It's very possible that the Switch's underpowered processor will also force us to implement further optimisations, especially on the side of my engine and the code in Kandria itself. Up until now I've been able to get away with comparatively little optimisation, since even computers of ten years ago are more than fast enough to run what I need for the game. However, I'm not so sure that the Switch could match up to that even if it didn't also introduce additional constraints on performance with its lack of operating system support.
First, though, we need to get the garbage collector running fully. It runs enough to boot up and get into Trial's main loop, but as soon as it hits multi-generation compaction, it falls flat on its face.
Next we need to get callbacks from C working again. Apparently this is a part of the SBCL codebase that can only be described as "a mess," involving lots of hand-rolled assembly routines, which probably need some adjustments to work correctly with immobile-code and elfination. Callbacks are fortunately relatively rare; Trial only needs them for sound playback via libmixed.
There's also been some other issues that we've kept in the back of our heads but don't require our immediate attention, as well as some extra portability features I know I'll have to work on in Trial before its selftest suite fully passes on the Switch.
Conclusion
I'll be sure to add an addendum here should the state of the port significantly change in the future. Some people have also asked me if the work could be made public, or if I'd be willing to share it.
The answer to that is that while I would desperately like to share it all publicly, the NDA prevents us from doing so. We still upstream and publicise whatever we can, but some bits that tie directly into the Nintendo SDK cannot be shared with anyone that hasn't also signed the NDA. So, in the very remote possibility that someone other than me is crazy enough to want to publish a Common Lisp game on the Nintendo Switch, they can reach out to me and I'll happily give them access to our porting work once the NDA has been signed.
Naturally, I'll also keep people updated more closely on what's going on in the monthly updates for Patrons. With that all said, I once again plead with you to consider supporting me on Patreon, GitHub, or Ko-Fi. All the income from these will, for the foreseeable future, be going towards funding the SBCL port to the Switch as well as the current game project.
Thank you as always for reading, and I hope to share more exciting news with you soon!
Scott L. Burson — Equality and Comparison in FSet
@2024-09-05 06:58 · 56 days ago
This post is somewhat prompted by a recent blog post by vindarel, about Common Lisp's various built-in equality predicates. It is also related to Marco Antoniotti's CDR 8, Generic Equality and Comparison for Common Lisp, implemented by Charles Zhang; Alex Gutev's GENERIC-CL; and Henry Baker's well-known 1992 paper on equality.
Let me start by summarizing those designs. CDR 8 proposes a generic equality function equals, and a comparison function compare. These are both CLOS generic functions intended to be user-extended, though they also have some predefined methods. equals has several keyword parameters controlling its exact behavior. One of these is case-sensitive, which controls string comparison. Another is recursive, which controls its behavior on conses; if recursive is false (the default), conses are compared by eq, but if it's true, a tree comparison is done. compare is specified to return one of the symbols <, >, =, or /= to indicate the relative order of its arguments; it also has keyword parameters such as case-sensitive and recursive.
GENERIC-CL replaces many CL operations with CLOS generic functions, and also adds new ones. It touches many parts of the language other than equality and comparison, but I'll leave those aside for now. It has two generic equality functions: equalp, which, notwithstanding the name, is case-sensitive for characters and strings, and likep, which is case-insensitive. It also has comparison predicates lessp etc., along with a compare function (implemented using lessp) that can return :less, :equal, or :greater.
Henry's paper makes some interesting arguments about how a Common Lisp equality predicate should behave; he makes these concrete by defining a novel predicate egal. His most salient point, for my purposes, is that mutable objects, including vectors and conses, should always be compared with eq. I will argue below that FSet adheres to the spirit of this desideratum even though not to its letter.
FSet advertises itself as a "set-theoretic" collections library, and as such, requires a well-defined notion of equality. Also, since it is implemented using balanced binary trees, it requires an ordering function. FSet defines a generic function compare with these properties:
- It returns one of the symbols :less, :equal, :greater, or :unequal (:unequal is used in certain rare cases of values which are not equal but cannot be consistently ordered)
- It implements a strict weak ordering, with an additional constraint: along with incomparability (indicated by either :equal or :unequal) being transitive, equality is also transitive by itself
- It can compare any two Lisp objects; this is an element of FSet's design philosophy
- Being a generic function, it is of course user-extensible
FSet's equality predicate is equal?, which simply calls compare and checks that the result is :equal. Thus, the only step required to add a user-defined type to the FSet universe is to define a compare method for it. FSet provides a few utilities to help with this, which I'll go into below.
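Conceptually (ignoring packages and whatever shortcuts the real implementation may take), equal? is just:

```lisp
;; equal? reduces to compare, as described above.
(defun equal? (a b)
  (eq (compare a b) ':equal))

;; e.g. lists are compared by content:
;; (fset:equal? (list 1 2) (list 1 2)) => T
```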
The cases in which compare returns :unequal to indicate unequal-but-incomparable arguments include:
- Numbers of equal value but different types; that is, = would return true on them, but eql would return false. Example: the integer 1 and the float 1.0.
- Distinct uninterned symbols (symbols whose symbol-package is nil) whose symbol-names are equal (by string=).
- Objects of a type for which no specific compare method has been defined, and which are distinct according to eql.
- If you create a package, rename it, then create a new package reusing the original name of the first package, the two packages compare :unequal. (FSet holds on to the original name, to protect itself from the effects of rename-package, which could otherwise be disastrous.) Similarly, two symbols with the same name, one from the old package and one from the new, compare :unequal.
- Aggregates which are being compared component-wise, in the case where none of the component-wise comparisons returns :less or :greater, and at least one of them returns :unequal.
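A couple of these cases, shown as the REPL interactions implied by the description above:

```lisp
(fset:compare 1 2)     ; => :LESS
(fset:compare 1 1.0)   ; => :UNEQUAL  (= but not EQL)
(fset:compare (make-symbol "X") (make-symbol "X"))
                       ; => :UNEQUAL  (distinct uninterned symbols, same name)
```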
If compare's default method is called with objects of different classes, it returns a result based solely on the classes; the contents of the objects are not examined. Again, it is part of FSet's design philosophy to give you as much freedom as reasonably possible; this includes allowing you to have sets containing more than one kind of object.
(In general, FSet's built-in ordering has been chosen for performance, not for its likely usefulness to clients. For example, compare on two strings of different lengths orders the shorter one first, ignoring the contents, because this requires only an O(1) operation.)
Comparison with equal
FSet's equal? on built-in CL types behaves almost identically to CL's equal, with the one difference that on vectors (other than bit-vectors), equal just calls eq, but equal? compares the contents. (I just noticed that this is not true for multidimensional arrays, and have filed an FSet bug.) (On bit-vectors, they both compare the contents.)
Comparison with CDR 8
There are noticeable similarities between FSet and the CDR 8 proposal; the latter not only includes a comparison function, but even provides for it to return /=, corresponding to FSet's :unequal, to indicate unequal but incomparable arguments. But the idea that the behavior of equality and comparison could be modified via keyword parameters does not seem appropriate for FSet. I think it would make FSet quite a bit harder to use, for little gain. For example, FSet comparison on lists walks the lists, but CDR 8, by default, just calls eq on their heads; users would have to remember to pass :recursive t to get the behavior they probably expect. FSet collections would have to remember which options they were created with, and if you tried, say, to take the union of two sets which used different options, you'd get an error.
Years of programming experience — not only with FSet but also with Refine, the little-known proprietary language that inspired FSet — have left me with the clear impression that having a single global equality predicate is a great simplification and very rarely limiting, provided it was defined properly to begin with.
I also note that FSet has more predefined methods for its comparison function (and therefore for its equality predicate) than are proposed in CDR 8. In particular, CDR 8's default compare methods return /= in more cases (e.g. distinct symbols), which is not terribly useful, in my view; FSet tries to minimize its use of :unequal because its data structure code, in that case, has to fall back to using alists, which have much poorer time complexity than binary trees. (OTOH, Marco seems to have overlooked the other cases listed above that arguably should be treated as unequal but incomparable.)
Comparison with GENERIC-CL
Again, there are noticeable similarities between FSet's and GENERIC-CL's equality predicates and comparison functions. GENERIC-CL does have two different equality predicates, equalp and likep, but these have no parameters other than the objects to be compared; it does not follow the CDR 8 suggestion of specifying keyword parameters that alter their behavior. Its equalp is very similar to FSet's equal?, but not quite identical; one difference is that it returns true when called on the integer 1 and the float 1.0, where both fset:equal? and cl:equal return false.
That normally-minor discrepancy is related to a larger deficiency: GENERIC-CL's comparison operator has no defined return value corresponding to :unequal, to indicate unequal-but-incomparable arguments. That is, FSet and CDR 8 both recognize that comparison can't implement a total ordering over all possible pairs of objects, but GENERIC-CL overlooks this point.
There are other overlaps between FSet and GENERIC-CL, but I'll save an analysis of those for another time.
Comparison with EGAL
Henry is proposing an extension to Common Lisp, not an operator that can be written in portable CL. This shows up in two ways: first, some of his sample code implementing egal requires access to implementation internals; second, he proposes a distinction between mutable and immutable vectors and strings that does not exist in CL. The text also suggests adding an immutable cons type to CL, though the sample code doesn't mention this case.
I agree with Henry in principle: a mutable cons (or string, or vector) is a very different beast from an immutable one; as he puts it, "eq is correct for mutable cons cells and equal is correct for immutable cons cells". CL would have been a better language, in principle, had conses been immutable, and immutable strings and vectors been available (if perhaps not the default). But here I must invoke one of my favorite quips: "The difference between theory and practice is never great in theory, but in practice it can be very great indeed." The key design goal of CL, to unify the Lisp community by providing a language into which existing programs in various Lisp dialects could easily be ported, demanded that conses remain mutable by default. Adding immutable versions of these types was not, to my knowledge, a priority.
And as Henry himself points out, in the overwhelmingly most common usage pattern for these types, they are treated as immutable once fully constructed. For example, a common idiom is for a function to build a list in reverse order, then pass it through nreverse before returning it; at that point, it is fully constructed, and it won't be modified thereafter. Obviously, this is a generalization over real-world Lisp programs and won't always be true, but since Lisp encourages sharing of structure, I think Lisp programmers learn pretty early that they have to be very careful when mutating a list or string or vector that they can't easily prove they're holding the only pointer to (normally by virtue of having just created it). Given that this is pretty close to being true in practice, and that comparing these aggregates by their contents is usually what people want to do when they use them as members of collections, it would seem odd for FSet to distinguish them by identity.
Also, there's the simple fact that for these built-in types, CL provides no portable way to order or hash them by identity. Such functionality must exist internally for use by eq and eql hash tables, but the language does not expose any portable interface to it.
So in this case, both programming convenience and the hard constraints of implementability force a choice that is contrary to theoretical purity: FSet must compare these types by their contents. The catch, of course, is that one must be careful, once having used a list or string or vector as an element of an FSet collection, never to modify it, lest one break the collection's ordering invariant. But in practice, this rule doesn't seem at all onerous: if you found the object in the heap somewhere — as opposed to having just created it — don't mutate it.
When it comes to user-defined types, however, the situation is quite different. It is easy for the programmer, defining a class intended for mutation, to arrange for FSet to distinguish objects of the class by their identity rather than their contents. The recommended way to do this is to include a serial-number slot that is initialized, at object-creation time, to the next value from an integer sequence; then write a compare method that uses this slot. (I'll show some examples shortly.)
So if the design of your program involves some pieces of mutable state that are placed in collections, my strong recommendation is that such state should never be implemented as a bare list or string or vector, but should always be wrapped in an instance of a user-defined class. I believe this to be a good design principle in general, even when FSet is not involved, but it becomes imperative for programs using FSet.
Adding Support for User-Defined Classes
When adding FSet support for a user-defined class, the first question is whether instances of the class represent mutable objects or mathematical values. If it's a mathematical value, it should be treated as immutable once constructed. (Alas, CL provides no way to enforce immutability.) In that case, it should be compared by content. FSet provides a convenient macro compare-slots for this purpose. Here's an example:
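A sketch of what this looks like, assuming a frob structure with position and color slots (the names are illustrative):

```lisp
(defstruct frob
  position
  color)

;; Compare frobs slot by slot, in the order given.
(defmethod fset:compare ((a frob) (b frob))
  (fset:compare-slots a b #'frob-position #'frob-color))
```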
This specifies that frobs shall be ordered first by position, then by color. compare-slots handles the details for you, including the complications that arise if one of the slot value comparisons returns :unequal.
For standard classes, best performance is obtained by supplying slot names as quoted symbols rather than function-quoted accessor names:
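For instance, with a standard class (again with illustrative names):

```lisp
(defclass frob2 ()
  ((position :initarg :position)
   (color :initarg :color)))

;; Quoted slot names let compare-slots access the slots directly,
;; instead of going through accessor functions.
(defmethod fset:compare ((a frob2) (b frob2))
  (fset:compare-slots a b 'position 'color))
```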
I am not sure whether to recommend the use of slot names for structure classes; the answer may depend on the implementation. At least on SBCL, you're probably better off using accessor functions for structs.
(Actually, the functions supplied don't have to be accessors; you could compare by some computed value instead, if you wanted. I haven't seen a use for this possibility in practice, though.)
Structure classes implementing mutable objects should do something like this:
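That is, something along these lines (a sketch of the serial-number pattern; the names are illustrative):

```lisp
(defvar *next-frob-serial-number* 0)

(defstruct mutable-frob
  (serial-number (incf *next-frob-serial-number*))
  position
  color)

;; Ordering (and hence equality) is by creation order, i.e. by
;; identity, so mutating position or color can't break collections.
(defmethod fset:compare ((a mutable-frob) (b mutable-frob))
  (fset:compare-slots a b #'mutable-frob-serial-number))
```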
For standard classes implementing mutable objects, FSet provides an especially convenient solution: just include identity-ordering-mixin as a superclass:
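For example (a minimal sketch; the class name is made up):

```lisp
(defclass mutable-thing (fset:identity-ordering-mixin)
  ((contents :initarg :contents :accessor contents)))
```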
That's it!
More on FSet's Single Global Ordering
I sometimes get pushback, albeit mostly from people who haven't actually used FSet, about my design decision to have a single global ordering implemented by compare, rather than allowing collections to accept an ordering function when they are created. Let me defend this decision a little bit.
Because the ordering is extensible by defining new methods on compare, a programmer can always force a non-default ordering by defining a wrapper type. For example, if you have a map whose keys are strings and which you want to be maintained in lexicographic order, you can easily write a structure class to wrap the strings, and give that class a compare method that forces the strings to be compared lexicographically. (FSet even helps you out by providing a generic function compare-lexicographically, which you can just call.)
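Such a wrapper might look like this (a sketch; the wrapper name is made up):

```lisp
(defstruct (lex-string (:constructor lex-string (value)))
  value)

;; Wrapped strings are ordered lexicographically rather than by
;; FSet's default (length-first) string ordering.
(defmethod fset:compare ((a lex-string) (b lex-string))
  (fset:compare-lexicographically (lex-string-value a)
                                  (lex-string-value b)))
```

A map keyed on lex-string wrappers is then maintained in lexicographic order of the underlying strings.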
That said, I believe the need to write wrapper classes arises very rarely. It's needed only when there is a reason that a set or map needs to be continually maintained in the non-default order. If the non-default ordering is needed only occasionally — say, when the collection is being printed — it's usually easier to convert it to a list at that point (see FSet's generic function convert, about which I should write another blog post) and then just call sort or stable-sort on it.
And there is a wonderful simplicity to having the ordering be global. Ease of use is a very important design goal for FSet; collection-specific orderings would give the user another wrinkle to think about. I just don't see that the benefits, which seem to me very small, would outweigh the cost in cognitive load.
Perhaps the best way to put it is that FSet is primarily intended for application programming, not systems programming. The distinction is fuzzy, but broadly, if programmer productivity is more important to you than squeezing out the last few percent of performance, you're doing application programming, not systems programming. This is not necessarily a distinction about the kind of program being written — there certainly are applications that have performance-sensitive parts — but rather, about the amount of knowledge, experience, and mental effort required to write it. FSet is designed for general productivity, not necessarily for someone who needs maximal control to achieve maximal performance.
Luís Oliveira — Interview about Lisp at SISCOG
@2024-09-03 09:52 · 58 days ago
My friend Rui was interviewed about Lisp and how we use it at SISCOG. The original interview is in Portuguese but you can read a translation via DeepL below:
SISCOG Engineering: get to know this cutting-edge Portuguese company
Find out how Lisp continues to drive innovation at SISCOG. Interview with Rui Patrocínio, Scheduling Team Leader at SISCOG
#1 How did you first get to know the Lisp programming language?
When I joined Técnico in 1999, the programming language taught in Introduction to Programming was Scheme, which is in the Lisp family. The curriculum closely followed the MIT 6.001 course (whose lectures from the 80s are on YouTube) and one of the best programming books ever for beginners and beyond: Structure and Interpretation of Computer Programs. The great advantage of learning to program with Scheme is that it is a very simple language, with a very simple syntax. All the effort goes into understanding the logic of what you're implementing, so there's no need to memorize the syntactical nuances of the language.
After that experience, Lisp reappeared as Common Lisp in the Artificial Intelligence course. Common Lisp is the current industrial version of the Lisp family of languages, which is what SISCOG uses on a daily basis. Common Lisp is a multi-paradigm language. Imperative, functional, object-oriented programming with very sophisticated meta-programming mechanisms available. As such, using the language as a whole implies some maturity and it's only natural that it should appear later in the curriculum of a computer engineering course.
I think it was the "power" of the Common Lisp language that made it particularly appealing to me and, despite the standard being from 1994, it is a very modern language. It should be noted that Guy Steele, one of the main figures behind the Common Lisp standard, was also heavily involved in the development of Scheme, C and Java, being hired by Sun at one point to improve Java. Here's a quote from him on a mailing list focused on the discussion of programming languages from the 1990s, talking about Java:
And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy? —Guy Steele
#2 How does SISCOG use Lisp in its products?
We use Common Lisp in the vast majority of our software. Both desktop applications and backend parts of web applications are implemented in Common Lisp. We also use C++ to develop specific modules, but Common Lisp still has the most lines of code in our repositories. It's a very expressive language that compiles to machine code reasonably efficiently without much effort on the part of the programmer. So it's an easy choice for most scenarios.
It remains to be said that our products have been in operation for more than 30 years in various national and international companies, such as the London and Lisbon Underground, or the railways of the Netherlands or Canada. They are decision support software for the optimized planning of these transport operators' resources, namely time and space, materialized in timetables, vehicles and personnel. These products help to plan and manage these operational resources as quickly and efficiently as possible, providing gains and savings in various areas, as well as, for example, greater satisfaction on the part of workers thanks to shifts and work schedules that better meet their preferences. And all with Lisp as a base!
#3 What are the main advantages of the Lisp syntax compared to other programming languages, such as C++?
The great advantage of the syntax is also the thing you'll find most strange at first: the parentheses. The fact that everything uses a prefixed syntax also makes everything quite uniform. This combination of parentheses and prefixed syntax is called s-expressions in Lisp. S-expressions are code and data at the same time, and this is what allows you to have macros (functions that take code as an argument and return code as a value) that extend the language transparently, as if they belonged to the standard language.
#4 Can you explain the concept of "S-expressions" and how they contribute to the clarity of LISP code?
Basically, "S-expressions" are one of two things:
- atomic expressions (e.g. the number 2024 or the string "hello world" or the symbol +)
- a list, typically in the format (operator arg1 arg2 ... argn)
It's this uniformity, as I said earlier, that makes everything simpler. It also helps a lot to be able to generate code programmatically because it's all about manipulating lists, for which Common Lisp has a very reasonable API. This is where the power of Lisp macros comes from.
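A toy illustration of the point: a macro receives its body as a list and returns a new list, built here with backquote, which becomes the code that is actually compiled:

```lisp
;; A small timing macro; the expansion is just a freshly built list.
(defmacro with-timing (&body body)
  `(let ((start (get-internal-real-time)))
     (multiple-value-prog1 (progn ,@body)
       (format t "~&Elapsed: ~D ticks~%"
               (- (get-internal-real-time) start)))))

;; (macroexpand-1 '(with-timing (f))) shows the generated LET form.
```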
At first you find it strange, then you understand it. [nice try, DeepL. Rui wrote "Primeiro estranha-se, depois entranha-se." which is quoting a famous Coca-Cola slogan written by Fernando Pessoa in the 1940s. Richard Zenith, in his biography Pessoa: An Experimental Life, attempts to translate it as "On the first day you drink it slow. On the fifth day you can't say no." —Luís]
#5 How does Lisp allow rapid prototyping using untyped variables?
Static typing in large codebases is widely advocated today because it allows more automatic tools to check for problems in the code. Common Lisp has dynamic typing, but allows optional type declaration. It's quite common to only declare types when it's necessary to "squeeze" more performance out of a code segment. The compiler, via type inference, provides some information about problems encountered when there are enough type declarations to extract relevant information. This allows us to "fight the compiler" only in segments of code where it is very relevant and not be making premature optimizations to the whole code.
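A generic illustration of that workflow (not SISCOG code):

```lisp
;; Prototype version: no declarations, fully dynamic.
(defun dot (a b)
  (reduce #'+ (map 'vector #'* a b)))

;; Hot-path version: type and optimize declarations let the compiler
;; open-code double-float arithmetic and flag type mismatches.
(defun dot-fast (a b)
  (declare (type (simple-array double-float (*)) a b)
           (optimize (speed 3)))
  (loop for x across a
        for y across b
        sum (* x y) of-type double-float))
```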
#6 How does Lisp facilitate metaprogramming?
There are two basic "tools" for metaprogramming in Common Lisp: the Meta Object Protocol and the macros mentioned earlier.
Meta Object Protocol is basically the mechanism behind the implementation of the Common Lisp object system (called CLOS, Common Lisp Object System). Although it doesn't belong to the standard, it is available in most Common Lisp implementations and is sometimes useful, particularly when implementing utilities to inspect the code or improve our development environment. For example, it's possible to implement something that shows us the relationships between classes, what methods exist, etc. This is not something you do on a day-to-day basis, but it can be done by you without depending on the guts of a specific IDE.
Another interesting mechanism is macros. When CLOS was introduced, it was basically a set of macros on top of the Lisp of the time (historically there were a few more iterations - CLOS wasn't the first object system developed). In other words, with macros it's possible to extend the language to introduce most of the mechanisms and paradigms that exist in other languages (such as object-oriented programming) in a way that feels natural to a Lisp programmer.
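As a tiny illustrative example (not SISCOG code), a macro can add a `while` loop, a construct standard Common Lisp does not provide, in a way that feels native to the language:

```lisp
;; WHILE is not part of standard Common Lisp; this macro adds it
;; by expanding into the built-in LOOP/RETURN machinery.
(defmacro while (test &body body)
  `(loop (unless ,test (return))
         ,@body))

;; Usage: prints 0, 1, 2.
(let ((i 0))
  (while (< i 3)
    (print i)
    (incf i)))
```

After the macro is defined, `while` is indistinguishable from a built-in special form at the call site.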
This type of mechanism allows SISCOG to implement undo and redo mechanisms in our software in a way that is practically transparent to the programmer. This is done by extending the class declaration. The programmer notes for which attributes of each class a history needs to be kept and the end user is able to undo their operations and see these data variations instantly (with the typical implementation using Command Pattern, multi-level undo is not instantaneous as it is necessary to re-execute operations, which may not be trivial; the tradeoff is, of course, the memory spent).
#7 What is your opinion of Lisp's learning curve compared to other programming languages?
Anyone with experience in a few languages of different paradigms can quickly program in Common Lisp without major problems. Most of the concepts are familiar. Our experience at SISCOG, particularly with more recent hires who had no contact with Lisp in college, is that people adapt quickly and within a week are programming and making minor corrections.
Obviously, there are parts of the language that are less common and you need more maturity to use them effectively. These include the Meta-object protocol and the macros mentioned earlier.
In this case, the need for greater maturity comes simply from the fact that we are extending the language. Designing a new language isn't easy and there are people whose career it is to do this (like Guy Steele mentioned earlier). Of course, this is also rare and the most common thing is to introduce macros for small functionalities that make the programmer more productive with Domain Specific Languages.
#8 Why is Lisp a popular choice for specialized areas such as artificial intelligence and natural language processing?
It's essentially for historical reasons. Lisp was born in that context, there have been several "classic" artificial intelligence systems implemented in Lisp and so it has continued to be used. The fact that it's very easy to start a project quickly, it's easy to iterate and progressively improve what's been done, also helps in a research context where we're more concerned with testing ideas than making the software "bulletproof", which you can do progressively (treating errors, adding static typing, etc.) in Common Lisp.
#9 Can you give some examples of projects at SISCOG where Lisp has proved particularly advantageous?
SISCOG has relied on Lisp from the start. It's a very stable language, with good compilers and good performance, it's compact and has allowed us to go through the history of computer science across various platforms (Lisp machines, Unix, Windows) and maintain a code base with immense domain knowledge over the years. A company with software in production for over 30 years is not very common in the world and Lisp is clearly our little secret weapon.
#10 How does the Lisp development community compare with other programming language communities in terms of support and resources?
The Lisp community isn't very large, but what we lack in programmers, we make up for in enthusiasm. Paul Graham (of Y Combinator) was a great driving force behind the language in the late 1990s and early 2000s, and Hacker News still talks about Common Lisp (and variants) on a regular basis. Many years ago there was a strong community on the old newsgroups, but that has largely moved on to Reddit and IRC. There are also a few annual conferences (e.g. the European Lisp Symposium) where some of the most important members of the community usually gather. We are far from having the largest number of libraries available, and sometimes we have to implement things in-house. The community does, however, have enough to make high-quality software that is sold all over the world, as SISCOG has demonstrated.
#11 What are your predictions for the future of Lisp in the technology industry?
It seems to me that the future of a language depends a lot on fashions and the investments made in it. Google uses Lisp via the purchase of a company a few years ago (ITA Software, for flight search) and, as long as these large companies continue to invest, languages will thrive. At the moment, fashions have moved on to other platforms, but let's see what the future brings. Perhaps our little secret weapon will become less and less secret.
vindarel — Common Lisp: equality functions explained (=, eq, equal, string= et all)
@2024-08-23 10:53 · 69 days ago
Common Lisp has various equality functions: `=`, `eq`, `eql`, `equal`, `equalp`, `string-equal`, `char-equal`... but what are the differences? We tell you everything, with examples.
As usual, this is best read on the Common Lisp Cookbook (a new page added on August, 2024). This is where it will get the updates.
In short:
- `=` is only for numbers and `equal` is the equality predicate that works on many things.
- you can't overload built-in operators such as `=` or `equal` for your own classes, unless you use a library.
- when you manipulate strings with functional built-ins (`remove-if`, `find`...) and you are surprised to get no results, you probably forgot the `:test` key argument: `(find "foo" '("hello" "foo") :test #'equal)`.
Table of Contents
- `=` is for numbers (beware of `NIL`)
- `eq` is low-level. Think pointers, position in memory.
- `eql` is a better `eq`, also for numbers of same types and characters.
- `equal` is also for strings (for objects whose printed representation is similar).
- `equalp` is case-insensitive for strings and for numerical value of numbers.
- Other comparison functions
- Credits
- See also
`=` is for numbers (beware of `NIL`)
The `=` function compares the value of two or more numbers:
(= 2 2) ;; => T
(= 2 2.0 2 2) ;; => T
(= 2 4/2) ;; => T
(= 2 42) ;; => NIL
but `=` is only for numbers. In the below example we get an error with the interactive debugger. We show the error message, the condition type, and the backtrace, from SBCL.
(= 2 NIL)
;; => ERROR:
;;    The value NIL is not of type NUMBER
;;    when binding SB-KERNEL::Y
;;    [Condition of type TYPE-ERROR]
;;
;; Restarts:
;;   ...
;;
;; Backtrace:
;;   0: (SB-KERNEL:TWO-ARG-= 2 NIL) [external]
;;   1: (SB-VM::GENERIC-=)
;;   2: (= 2 NIL)
Note how `SB-KERNEL::Y` refers to an internal variable of the compiler. No, you don't have a `Y` in your code.
As a consequence, if your equality check with numbers might contain NILs, you can use `equalp`, or encapsulate your variables with `(or ... 0)`, or do prior checks with `(null ...)`.
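Concretely, the three workarounds look like this (a small sketch):

```lisp
;; 1. equalp tolerates non-numbers (it just returns NIL):
(equalp 2 nil)        ;; => NIL, no error

;; 2. defaulting the possibly-NIL value with (or ... 0):
(let ((x nil))
  (= 2 (or x 0)))     ;; => NIL, no error

;; 3. checking for NIL first:
(let ((x nil))
  (unless (null x)
    (= 2 x)))         ;; => NIL, the comparison is skipped
```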
`eq` is low-level. Think pointers, position in memory.
(eq x y) is true if and only if x and y are the same identical object.
`eq` works for symbols and keywords.
Those are true:
(eq :a :a)
(eq 'a 'a)
If we compare an object with itself, it is `eq`:
(let ((x '(a . b)))
(eq x x))
;; => T
`eq` does not work to compare numbers, lists, strings and other compound objects. It looks like it can, but it isn't specified to be true for all implementations.

As such, `eq` works for numbers on my implementation, but it might not on yours:
(eq 2 2) ;; => T or NIL, this is not specified
An implementation might allocate the exact same position in memory for the same number, but it might not. This isn’t dictated by the standard.
Likewise, these might depend on the implementation:
(eq '(a . b) '(a . b)) ;; might be true or false.
(eq #\a #\a) ;; true or false
Comparing lists or strings returns false:
(eq (list 'a) (list 'a)) ;; => NIL
(eq "a" "a") ;; => NIL
Those strings (vectors of characters) are not equal by `eq` because the compiler might have created two different string objects in memory.
`eql` is a better `eq`, also for numbers of same types and characters.
The `eql` predicate is true if its arguments are `eq`, or if they are numbers of the same type with the same value, or if they are character objects that represent the same character.
In terms of usefulness, we could say that `eq` < `eql`.
Now this number comparison is true:
(eql 3 3) ;; => T
but beware, this one isn’t because 3 and 3.0 are not of the same type (integer and single float):
(eql 3 3.0) ;; => NIL
for complex numbers:
(eql #c(3 -4) #c(3 -4)) ;; is true.
(eql #c(3 -4.0) #c(3 -4)) ;; is false (because of -4.0 and -4)
Comparing two characters works:
(eql #\A #\A) ;; => T
And we still can’t compare lists or cons cells:
(eql (cons 'a 'b) (cons 'a 'b)) ;; => NIL
`equal` is also for strings (for objects whose printed representation is similar).
The `equal` predicate is true if its arguments are structurally similar (isomorphic) objects. A rough rule of thumb is that two objects are `equal` if and only if their printed representations are the same.
Again, conceptually, we could say that `eq` < `eql` < `equal`.
We can still not compare numbers of different types:
(equal 3 3.0) ;; => NIL
but we can now compare lists and cons cells. Indeed, their printed representation is the same. No matter this time if they are different objects in memory.
(equal (cons 'a 'b) (cons 'a 'b)) ;; => T
(equal (list 'a) (list 'a)) ;; => T
We can compare strings!
(equal "Foo" "Foo") ;; => T
No matter if they are different objects in memory:
(equal "Foo" (copy-seq "Foo")) ;; => T
Case is important. Indeed, “FOO” doesn’t print the same as “foo”:
(equal "FOO" "foo") ;; => NIL
`equalp` is case-insensitive for strings and for numerical value of numbers.
Two objects are `equalp` if they are `equal`; if they are characters and satisfy `char-equal`, which ignores alphabetic case and certain other attributes of characters; if they are numbers and have the same numerical value, even if they are of different types; or if they have components that are all `equalp`.
Continuing with our ordering, we could say that `eq` < `eql` < `equal` < `equalp`.
We can compare two numbers, looking at their value, even if they have different types:
(equalp 3 3.0) ;; => T
Now look at our string comparison:
(equalp "FOO" "foo") ;; => T
`equalp` is case-*in*sensitive for strings because a string is a sequence of characters: `equalp` compares all of its components, and it uses `char-equal` for characters, which ignores the characters' case.
Other comparison functions
null
The function `null` returns true if its one argument is NIL.
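For example:

```lisp
(null nil)    ;; => T
(null '())    ;; => T: '() and NIL are the same object
(null 0)      ;; => NIL: 0 is a number, not NIL
```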
`eql` is used by default by many CL built-ins
This is a common issue for newcomers who manipulate strings. Sometimes, you use a CL built-in function and you are puzzled why you get no result.
Look at this:
(find "foo" (list "test" "foo" "bar"))
;; NIL
we want to know if the string “foo” exists in the given list. We get NIL. What’s happening?
This CL built-in function, like all those that work on sequences, uses `eql` for testing each element. But `(eql "foo" "foo")` won't work for strings. We need to use another test function.
All of those functions accept a `:test` keyword parameter that allows you to change the test function:
(find "foo" (list "test" "foo" "bar") :test #'equal)
;; => "foo"
We can also use `equalp` to ignore the string case:
(find "FOO" (list "test" "foo" "bar") :test #'equalp)
;; => "foo"
You will find more examples about those built-in functions in data-structures.
char-equal
We have a special operator to compare characters: `char-equal` ignores alphabetic case and certain other attributes of characters.
strings and string-equal
`string-equal` has a specific function signature to compare strings and substrings (you can specify the start and end boundaries for the comparison), but be aware that it uses `char-equal`, so the comparison is case-*in*sensitive. And it works with symbols.
(string-equal :foo "foo") ;; => T
(string-equal :foo "FOO") ;; => T
This is its docstring:
STRING-EQUAL
This is a function in package COMMON-LISP
Signature
(string1 string2 &key (start1 0) end1 (start2 0) end2)
Given two strings (string1 and string2), and optional integers start1,
start2, end1 and end2, compares characters in string1 to characters in
string2 (using char-equal).
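For instance, the boundary arguments let you compare substrings in place, without building new strings (a small sketch):

```lisp
;; Compare the first three characters of string1 with the
;; characters of string2 starting at index 2, ignoring case:
(string-equal "FOObar" "xxfoo" :end1 3 :start2 2)   ;; => T
(string-equal "FOObar" "xxbar" :end1 3 :start2 2)   ;; => NIL
```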
See also our page strings.html.
Compare trees with tree-equal
Here you have it: `tree-equal` returns T if X and Y are isomorphic trees with identical leaves.
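A couple of examples (note that the default `:test` is `eql`, so string leaves usually need `:test #'equal`):

```lisp
(tree-equal '(1 (2 3)) '(1 (2 3)))   ;; => T: number leaves are EQL
;; Fresh string leaves are not EQL, so this is NIL:
(tree-equal (list (copy-seq "a")) (list (copy-seq "a")))
;; ...unless we supply a suitable test:
(tree-equal '("a" ("b")) '("a" ("b")) :test #'equal)   ;; => T
```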
Compare function table: to compare against (this), use (that) function:

| To compare against... | Use... |
|---|---|
| Objects/Structs | `EQ` |
| `NIL` | `EQ` (but the function `NULL` is more concise and probably cheaper) |
| `T` | `EQ` (or just the value, but then you don't care for the type) |
| Precise numbers | `EQL` |
| Floats | `=` |
| Characters | `EQL` or `CHAR-EQUAL` |
| Lists, conses, sequences | `EQ` (if you want the exact same object); `EQUAL` (if you just care about elements) |
| Strings | `EQUAL` (case-sensitive); `EQUALP` (case-insensitive); `STRING-EQUAL` (if you throw symbols into the mix) |
| Trees (lists of lists) | `TREE-EQUAL` (with appropriate `:TEST` argument) |
How to compare your own objects AKA built-in functions are not object-oriented
Use `eq` to check that two objects are identical, i.e. that they are the same object in memory.
If you want to compare your own objects with a logic of your own (for
example, two “person” objects will be considered equal if they have
the same name and surname), you can’t specialize a built-in function
for this. Use your own person=
or similar function, or use a library (see our links below).
While this can be seen as a limitation, not using generic functions has the advantage of being (much) faster.
As an example, let’s consider the person
class from the CLOS tutorial:
(defclass person ()
((name
:initarg :name
:accessor name)))
Let’s create two person objects, they have the same name but are two different objects:
(defparameter *p1* (make-instance 'person :name "me"))
(defparameter *p2-same-name* (make-instance 'person :name "me"))
Use eq
to compare two objects:
(eq *p1* *p1*) ;; => T
(eq *p1* *p2-same-name*) ;; => NIL
We use our own person=
method to compare different objects and decide when they are equal:
(defmethod person= (p1 p2)
(string= (name p1) (name p2)))
(person= *p1* *p2-same-name*) ;; => T
If you really want to use `=` or `equal`, use a library; see below.
Credits
- CLtL2: Equality Predicates
- the compare table: Leslie P. Polzer on Stack Overflow
See also
- equals - generic equality for Common Lisp.
- generic-cl - a generic function interface to CL built-ins. With it, we can use `=` or `<` on our own custom objects.
Tim Bradshaw — Wild pathnames in Common Lisp
@2024-08-18 17:00 · 74 days ago
Common Lisp's pathname system has many problems. Here is a proposal to make the situation a little better in one respect. This is not a general fix: it's just trying to solve one problem.
The problem
The underlying problem is that on many platforms pathnames which ‘look like’ they contain wildcards are perfectly legal pathnames to the filesystem. So, on Unix & related systems `[foo].*` is a legal filename. On these platforms wildcard handling is generally implemented in a library, or often in multiple semi-compatible libraries¹.
CL then has two problems:
- there is no portable way to construct pathnames which look wild but are not;
- there is no portable way to parse a string which looks like a wild pathname but in fact should not be interpreted as one, for instance a string coming from some other application or library, or a filename stored in some file, such as an archive.
(1) happens because 19.2.2.3 says, in part
When examining wildcard components of a wildcard pathname, conforming programs must be prepared to encounter any of the following additional values in any component or any element of a list that is the directory component: […] A string containing implementation-dependent special wildcard characters. […]
That means that implementations are allowed to represent wildcard components of pathnames as strings, and that means that you can’t portably construct a non-wildcard pathname.
(2) happens because there's no way to tell `parse-namestring` or `pathname` that the string you've handed to them is not wild, even though it looks like it is. That in turn means that to deal with this case you need to either write or find a pathname-parsing library which doesn't have this problem.
These problems arise in practice: for instance some programs create filenames which look like `[foo].xml`. SBCL at least parses strings like this as wild, as it is allowed to do. This then breaks programs which want to, for instance, process zip files, tar files or other archive formats.
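The behaviour is easy to check; the result of this snippet is implementation-dependent (true on SBCL, as described above, but possibly NIL on other Lisps):

```lisp
;; On SBCL the bracketed name parses as a wild pathname; on other
;; implementations WILD-PATHNAME-P may return NIL here.
(wild-pathname-p (pathname "[foo].xml"))
```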
A proposed solution
For (1) change 19.2.2.3 to say that wildcard components are never strings. Change the description of `make-pathname` to say that if the components given to it are strings (or suitably-constrained lists for the directory component) then the pathname is not wild, except if the default provides a component which is wild.
For (2) add an extra argument to both `parse-namestring` and `pathname` named `wild` with a default of true. If given as `nil` this will force string parsing to construct a non-wild pathname. If that is not possible, such as when `pathname` is handed a pathname which is already wild, then an error will be signalled.
Notes
This is the smallest change I can think of which will solve the problem. Some implementations, SBCL for instance, already solve (1) in the suggested way. None, I think, solve (2).
For added value, it might be useful to specify that wildcard components can be given either as symbols or as lists whose first element is a symbol, and encourage implementations to return them as such if possible. So, for instance (:sequence "foo-" (:alternation "bar" "zap"))
might represent a wild name which matches "foo-bar"
and "foo-zap"
. I am not suggesting this particular notation however.
1. Let me introduce you to the joys of Unix. ↩
John Jacobsen — To The Metal... Compiling Your Own Language(s)
@2024-08-06 00:00 · 86 days ago
Like many programmers, I have programming hobbies. One of these is implementing new languages. My most recent language project, l1, was a Lisp dialect whose primary data types are symbols and arbitrarily-large integers.
I've been happy with l1
, but it is interpreted; since I was actively
working on it last (2022), I've been wondering about the best way to
generate compiled standalone executable programs, written in l1
or
any other language.
The Problem In General
Execution models for programming languages take three basic approaches, listed in increasing order of speed:
- Tree-walking interpreter: Programs are read and parsed into ASTs in memory, then executed step-by-step by an interpreter. This is the approach `l1` uses.
- Machine code generation: The code is directly compiled into machine language and executed on the user's hardware. C and C++ programs work this way.
Languages using Option 2 often add just-in-time compilation to machine code, for extra performance. Option 3 is typically fastest, but is sometimes skipped in introductory compiler classes and tutorials. For example, Robert Nystrom's excellent Crafting Interpreters book devotes the first section to implementing a tree-walking interpreter in Java and the second half to a compiler and bytecode VM written in C, with minimal coverage of how to target physical hardware. And the (also excellent) class on compiler writing that I took from David Beazley, in its first incarnation, stopped at the point of generating so-called intermediate representation (IR) output (though students in the current iteration of the class do compile to native code, using LLVM).
Compiling to machine code is tricky because CPUs are inherently complex. Real hardware is intricate, cumbersome, and unintuitive if you're primarily accustomed to high-level languages. Additionally, there are numerous significant variants to consider (e.g., CPU/GPU, ARM/Intel, 32-bit/64-bit architectures).
But targeting machine code rather than interpreters or bytecode VMs is appealing, not just because it is an interesting challenge, but also because the resulting artifacts are small, stand-alone, and typically very fast. While running Python, Ruby, and Java programs requires the appropriate infrastructure to be in place on the target machine at all times, Go, Rust, and C programs (among others) benefit from targeting the physical hardware: their programs tend to be smaller, and can be deployed to identical computers simply by copying the executable file, without needing to deploy the interpreter, extra libraries, etc. to the target machine(s).
Small Is Beautiful
As a programmer who came up during the dawn of personal computers, I have some nostalgia for an era when programs or even entire operating systems fit on a few-hundred-kB floppy disk. Much existing software feels bloated to me, though some widespread tools are still lean and fast. For illustration purposes, here are the physical sizes of some of the venerable command-line Unix programs I use on a daily basis (this is on MacOS):
| Program | Size (kB) |
|---|---|
| `wc` | 100 |
| `cat` | 116 |
| `df` | 116 |
| `more` | 360 |
These were chosen more or less at random from my bash
history and
are representative of old-school Unix utilities. For comparison,
iMovie on my Mac is 2.8 GB, several thousand times larger than the
largest of these. Of course, the comparison is somewhat ludicrous -
iMovie does many amazing things... but I use all the above programs
hundreds or thousands of times more often than I do iMovie, so it's
good that they are compact and run quickly. In a time of
increasingly bloated software stacks,
I find myself especially drawn to simple tools with small footprints.
An Approach
If targeting physical hardware is hard, what tools can we use to make the job easier?
I recently started learning about LLVM, a modular set of compiler tools which "can be used to develop a frontend for any programming language and a backend for any instruction set architecture" (Wikipedia). LLVM has been used heavily in the Rust toolchain and in Apple's developer tools.
The "modular" adjective is critical here: LLVM is separated into front-end, back-end and optimizing parts thanks to a shared "intermediate representation" (IR) - a sort of portable assembly language which represents simple computation steps in a machine-independent but low-level manner.
The LLVM IR takes a little getting used to but, with a little practice, is reasonably easy to read, and, more importantly, to generate.
As an example, consider the following simple C program, three.c
,
which stores the number 3 in a variable and uses it as its exit code.
We will use clang
, the LLVM C/C++/Obj-C/... compiler for the LLVM
ecosystem:
$ cat three.c
int x = 3;
int main() {
return x;
}
$ clang three.c -o three
$ ./three; echo $?
3
One can easily view, and possibly even understand, the assembler output for such a simple program:
$ clang -O3 -S three.c -o three.s
$ cat -n three.s
1 .section __TEXT,__text,regular,pure_instructions
2 .build_version macos, 14, 0 sdk_version 14, 4
3 .globl _main ; -- Begin function main
4 .p2align 2
5 _main: ; @main
6 .cfi_startproc
7 ; %bb.0:
8 Lloh0:
9 adrp x8, _x@PAGE
10 Lloh1:
11 ldr w0, [x8, _x@PAGEOFF]
12 ret
13 .loh AdrpLdr Lloh0, Lloh1
14 .cfi_endproc
15 ; -- End function
16 .section __DATA,__data
17 .globl _x ; @x
18 .p2align 2, 0x0
19 _x:
20 .long 3 ; 0x3
21
22 .subsections_via_symbols
In comparison, here is the LLVM IR for the same program:
$ clang -S -emit-llvm three.c -o three.ll
$ cat -n three.ll
1 ; ModuleID = 'three.c'
2 source_filename = "three.c"
3 target datalayout = "e-m:o-i64:64-i128:128-n32:64-S128"
4 target triple = "arm64-apple-macosx14.0.0"
5
6 @x = global i32 3, align 4
7
8 ; Function Attrs: noinline nounwind optnone ssp uwtable(sync)
9 define i32 @main() #0 {
10 %1 = alloca i32, align 4
11 store i32 0, ptr %1, align 4
12 %2 = load i32, ptr @x, align 4
13 ret i32 %2
14 }
15
16 attributes #0 = { noinline nounwind optnone ssp ;; .... very long list...
17
18 !llvm.module.flags = !{!0, !1, !2, !3, !4}
19 !llvm.ident = !{!5}
20
21 !0 = !{i32 2, !"SDK Version", [2 x i32] [i32 14, i32 4]}
22 !1 = !{i32 1, !"wchar_size", i32 4}
23 !2 = !{i32 8, !"PIC Level", i32 2}
24 !3 = !{i32 7, !"uwtable", i32 1}
25 !4 = !{i32 7, !"frame-pointer", i32 1}
26 !5 = !{!"Apple clang version 15.0.0 (clang-1500.3.9.4)"}
There is a fair amount of stuff here, but a lot of it looks
suspiciously like metadata we don't really care about for our
experiments going forward. The, uh, main
region of interest is from
lines 9-14 - notice that the function definition itself looks a
little more readable than the assembly language version, but slightly
lower-level than the original C program.
You can turn the IR into a runnable program:
$ clang -O3 three.ll -o three
$ ./three; echo $?
3
The approach I explore here is to generate LLVM IR "by fair means or foul."
Here, let's just edit our IR down to something more minimal and see how it goes.
I suspect the `store` of `0` in "register" `%1` is gratuitous, so let's try to remove it, along with all the metadata:
$ cat 3.ll
target triple = "x86_64-apple-macosx14.0.0"
@x = global i32 3, align 4
define i32 @main() {
%1 = load i32, ptr @x, align 4
ret i32 %1
}
$ clang -O3 3.ll -o 3
$ ./3; echo $?
3
This is frankly not much more complicated than the C code, and it shows a helpful strategy at work:
Step 1: To understand how to accomplish something in LLVM IR,
write the corresponding C program and use clang
to generate
the IR, being alert for possible "extra stuff" like we saw
in the example.
Step 2. Try to generate, and test, working programs from the IR you write or adapt, making adjustments as desired.
There is another, optional step as well:
Step 3. Use, or write, language "bindings" to drive LLVM generation from the language of your choice. This is the step we will consider next.
Enter Babashka
While one can write LLVM IR directly, as we have seen, we are interested in compiling other languages (possibly higher-level ones of our own invention), so we will want to generate IR somehow. For this project I chose Babashka, an implementation of the Clojure programming language I have found ideal for small projects where both start-up speed and expressiveness are important.
(I assume some familiarity with Lisp and Clojure in this post; for those just getting started, Clojure for the Brave and True is a good introduction.)
The repo https://github.com/eigenhombre/llbb contains the source files
discussed here. The bulk of the code in this repo is in llir.bb
, a
source file which provides alternative definitions in Clojure for
common LLVM idioms. Some of these are trivial translations:
(def m1-target "arm64-apple-macosx14.0.0")
(defn target [t] (format "target triple = \"%s\"" t))
... whereas other expressions leverage the power of Clojure to a greater degree. For example, this section defines translations used to represent arithmetic operations:
(defn arithm [op typ a b]
(format "%s %s %s, %s"
(name? op)
(name? typ)
(sigil a)
(sigil b)))
(defn add [typ a b] (arithm :add typ a b))
(defn sub [typ a b] (arithm :sub typ a b))
(defn mul [typ a b] (arithm :mul typ a b))
(defn div [typ a b] (arithm :sdiv typ a b))
(comment
(div :i32 :a :b)
;;=>
"sdiv i32 %a, %b"
(add :i8 :x 1)
;;=>
"add i8 %x, 1")
You can see this approach at work by representing the C program discussed earlier:
(module
(assign-global :i :i32 3)
(def-fn :i32 :main []
(assign :retval (load :i32 :i))
(ret :i32 :retval)))
which evaluates to
target triple = "arm64-apple-macosx14.0.0"
@i = global i32 3
define i32 @main() nounwind {
%retval = load i32, i32* @i, align 4
ret i32 %retval
}
I use here a slightly different but equivalent pointer syntax for the `load` expression than output by `clang` in the example above.
Two very small helper functions allow me to test out small programs quickly:
(require '[babashka.process :as sh])
(defn sh
"
Use `bash` to run command(s) `s`, capturing both stdout/stderr
as a concatenated string. Throw an exception if the exit code
is nonzero.
"
[s]
(let [{:keys [out err]}
(sh/shell {:out :string, :err :string}
(format "bash -c '%s'" s))]
(str/join "\n" (remove empty? [out err]))))
and
(require '[babashka.fs :as fs])
(defn compile-to
"
Save IR `body` to a temporary file and compile it, writing the
resulting binary to `progname` in the current working directory.
"
[progname body]
(let [ll-file
(str (fs/create-temp-file {:prefix "llbb-", :suffix ".ll"}))]
(spit ll-file body)
(sh (format "clang -O3 %s -o %s"
ll-file
progname))))
These two together allow me to test small programs out quickly at the REPL. Some examples follow, obtained by running the C compiler for equivalent programs, generating and inspecting the LLVM IR, and translating them into new Clojure bindings as cleanly as possible.
Minimum viable program: just return zero:
(compile-to "smallest-prog"
(module
(def-fn :i32 :main []
(ret :i32 0))))
(sh "./smallest-prog; echo -n $?")
;;=>
"0"
Argument count: return, as the exit code, the number of arguments, including the program name:
;; Argument counting: return number of arguments as an exit code:
(compile-to "argcount"
(module
(def-fn :i32 :main [[:i32 :arg0]
[:ptr :arg1_unused]]
(assign :retptr (alloca :i32))
(store :i32 :arg0 :ptr :retptr)
(assign :retval (load :i32 :retptr))
(ret :i32 :retval))))
(sh "./argcount; echo -n $?") ;;=> "1"
(sh "./argcount 1 2 3; echo -n $?") ;;=> "4"
Hello, world:
(let [msg "Hello, World."
n (inc (count msg))] ;; Includes string terminator
(compile-to "hello"
(module
(external-fn :i32 :puts :i8*)
(def-global-const-str :message msg)
(def-fn :i32 :main []
(assign :as_ptr
(gep (fixedarray n :i8)
(star (fixedarray n :i8))
(sigil :message)
[:i64 0]
[:i64 0]))
(call :i32 :puts [:i8* :as_ptr])
(ret :i32 0)))))
(sh "./hello") ;;=> "Hello, World.\n"
This is the first program one typically writes in a new programming
language. Note that we use here idioms (external-fn
, call
) to
define and invoke an external function from the C standard library.
Let's see how big the resulting program is:
(sh "ls -l hello")
;;=>
"-rwxr-xr-x 1 jacobsen staff 33432 Aug 14 21:09 hello\n"
At this point I want to pause to reconsider one of the points of this exercise, which is to produce small programs. Here are the rough executable sizes for a "Hello, World" example program in various languages that I use frequently:
| Language | Size | Relative Size |
|---|---|---|
| Common Lisp | 38 MB | 1151 |
| Clojure | 3.4 MB | 103 |
| Go | 1.9 MB | 58 |
| C | 33 kB | 1 |
| LLVM IR | 33 kB | 1 |
I threw Clojure in there for comparison even though, unlike the other examples, the resulting überjar also requires a Java bytecode VM in order to run. The programs generated from C and from LLVM IR are equivalent; this is not surprising, given that I used the C program to guide my writing and translation of the LLVM IR.
Building A Compiling Calculator
We are now ready to implement a "compiler" for something approaching a useful language, namely, a greatly reduced subset of Forth. Forth is a stack-based language created in the 1970s and still in use today, especially for small embedded systems.
LLVM will handle the parts commonly known as "compiler backend" tasks, and Babashka will provide our "frontend," namely breaking the text into tokens and parsing them. This task is made easy for us, because Forth is syntactically quite simple, and Babashka relatively powerful. Here are the language rules we will adopt:
- Program tokens are separated by whitespace.
- Non-numeric tokens are math operators.
- Only integer operands are allowed.
- Comments begin with `\`.
Forth expressions typically place the arguments first, and the operator last (so-called "reverse-Polish notation"). Here is an example program which does some math and prints the result:
(def example "
2 2 + \\ 4
5 * \\ multiply by five to get 20
2 / \\ divide by 2 -> 10
-1 + \\ add -1 -> 9
8 - \\ subtract 8 -> 1
. \\ prints '1'
")
The code in forth.bb handles the parser, whose goal is to consume the raw program text and generate an abstract syntax tree (in our case, just a list) of operations to translate into IR:
(defn strip-comments
"
Remove parts of lines beginning with backslash
"
[s]
(str/replace s #"(?sm)^(.*?)\\.*?$" "$1"))
(defn tokenize
"
Split `s` on any kind of whitespace
"
[s]
(remove empty? (str/split s #"\s+")))
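For readers who don't speak Clojure regexes, the same two preprocessing steps can be sketched in Python (an illustrative translation, not the author's code):

```python
import re

def strip_comments(s):
    # Remove everything from a backslash to the end of each line
    return re.sub(r"\\.*", "", s)

def tokenize(s):
    # str.split() with no argument splits on runs of whitespace
    # and already discards empty strings
    return s.split()

src = "2 2 + \\ four\n5 *"
print(tokenize(strip_comments(src)))  # → ['2', '2', '+', '5', '*']
```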
(defrecord node
[typ val] ;; A node has a type and a value
Object
(toString [this]
(format "[%s %s]" (:typ this) (:val this))))
;; Allowed operations
(def opmap {"+" :add
"-" :sub
"/" :div
"*" :mul
"." :dot
"drop" :drop})
(defn ast
"
Convert a list of tokens into an \"abstract syntax tree\",
which in our Forth is just a list of type/value pairs.
"
[tokens]
(for [t tokens
:let [op (get opmap t)]]
(cond
;; Integers (possibly negative)
(re-matches #"^\-?\d+$" t)
(node. :num (Integer. t))
;; Operations
op (node. :op op)
:else (node. :invalid :invalid))))
Running this on our example,
(->> example
strip-comments
tokenize
ast
(map str))
;;=>
("[:num 2]"
"[:num 2]"
"[:op :add]"
"[:num 5]"
"[:op :mul]"
"[:num 2]"
"[:op :div]"
"[:num -1]"
"[:op :add]"
"[:num 8]"
"[:op :sub]"
"[:op :dot]")
The remainder of forth.bb essentially just implements the needed operators, as well as the required stack and the reference to the printf C library function. It is perhaps a bit lengthy to go through here in its entirety, so I will share one example where Babashka helps:
(defn def-arithmetic-op [nam op-fn]
(def-fn :void nam []
(assign :sp (call :i32 :get_stack_cnt))
(if-lt :i32 :sp 2
(els) ;; NOP - not enough on stack
(els
(assign :value2
(call :i32 :pop))
(assign :value1
(call :i32 :pop))
(assign :result (op-fn :i32 :value1 :value2))
(call :void :push [:i32 :result])))
(ret :void)))
This makes LLVM IR which does the following, in pseudo-code:
- get current stack position; ensure at least two entries (else return)
- pop the operands off the stack
- apply the arithmetic operator to the operands
- put the result on the stack
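Those four steps can be sketched in Python with a plain list standing in for the stack (hypothetical helper names, shown only to make the control flow concrete):

```python
stack = []

def push(v):
    stack.append(v)

def pop():
    return stack.pop()

def arithmetic_op(op_fn):
    # Mirror the generated IR: do nothing unless at least two operands exist
    if len(stack) < 2:
        return
    value2 = pop()
    value1 = pop()
    push(op_fn(value1, value2))

push(7)
push(3)
arithmetic_op(lambda a, b: a - b)
print(stack)  # → [4]
```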
Implementing the four arithmetic operators is then as simple as invoking
;; ...
(def-arithmetic-op :mul mul)
(def-arithmetic-op :add add)
(def-arithmetic-op :sub sub)
(def-arithmetic-op :div div)
;; ...
when generating the IR.
Aside from the general-purpose LLVM IR code (llir.bb), the Forth implementation is under two hundred lines. It includes the invocation of clang to compile the temporary IR file to make the runnable program. Here's an example compilation session:
$ cat example.fs
\\ initial state stack: []
3 \\ put 3 on stack. stack: [3]
99 \\ put 99 on stack. stack: [3, 99]
drop \\ discard top item. stack: [3]
drop \\ discard top item. stack: []
2 2 \\ put 2 on stack, twice: stack: [2, 2]
+ \\ 2 + 2 = 4. stack: [4]
5 * \\ multiply 4 * 5. stack: [20]
2 / \\ divide by 2. stack: [10]
-1 + \\ add -1 stack: [9]
8 - \\ subtract 8 -> 1 stack: [1]
. \\ prints '1' stack: [1]
drop \\ removes 1. stack: []
$ ./forth.bb example.fs
$ ./example
1
The resulting program is fast, as expected...
$ time ./example
1
real 0m0.007s
user 0m0.002s
sys 0m0.003s
... and small:
$ ls -l ./example
-rwxr-xr-x 1 jacobsen staff 8952 Aug 16 09:52 ./example
Let's review what we've done: we have implemented a small subset of Forth, writing a compiler front-end in Babashka/Clojure to translate source programs into LLVM IR, and using clang to turn the IR into compact binaries. The resulting programs are small and fast.
Sensible next steps would be to implement more of Forth's stack operators, and maybe start to implement the : (colon) operator, Forth's mechanism for defining new symbols and functions.
Lisp
Instead, let's implement a different variant of our arithmetic calculating language, using Lisp syntax (S-expressions). Consider our first Forth example:
2 2 +
5 *
2 /
-1 +
8 -
.
In Lisp, this looks like:
$ cat example.lisp
(print (- (+ -1
(/ (* 5
(+ 2 2))
2))
8))
Rather than coming last, as they did before ("postfix" notation), our operators come first ("prefix" notation). The order of operations is determined by parentheses, as opposed to using a stack as we did for our Forth implementation.
Here Babashka helps us tremendously because such parenthesized prefix expressions are valid EDN data, which the Clojure function clojure.edn/read-string can parse for us. But we need to convert the resulting nested list into the "SSA" (static single assignment) expressions LLVM understands. This is relatively straightforward with a recursive function which expands leaves of the tree and stores the results as intermediate values:
(defn to-ssa [expr bindings]
(if (not (coll? expr))
expr
(let [[op & args] expr
result (gensym "r")
args (doall
(for [arg args]
(if-not (coll? arg)
arg
(to-ssa arg bindings))))]
(swap! bindings conj (concat [result op] args))
result)))
(defn convert-to-ssa [expr]
(let [bindings (atom [])]
(to-ssa expr bindings)
@bindings))
We use gensym here to get a unique variable name for each assignment, and doall to force the evaluation of the lazy for expansion of the argument terms. The result:
(->> "example.lisp"
slurp
(edn/read-string)
convert-to-ssa)
;;=>
[(r623 + 2 2)
(r622 * 5 r623)
(r621 / r622 2)
(r620 + -1 r621)
(r619 - r620 8)
(r618 print r619)]
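The flattening technique itself is independent of Clojure; a minimal Python sketch of the same idea (hypothetical code, using tuples for expressions and a counter instead of gensym) produces the same shape of output:

```python
import itertools

counter = itertools.count()

def to_ssa(expr, bindings):
    """Recursively replace nested calls with fresh temporaries, one binding per node."""
    if not isinstance(expr, tuple):
        return expr                      # a leaf (number) stays as-is
    op, *args = expr
    args = [to_ssa(a, bindings) for a in args]
    result = f"r{next(counter)}"
    bindings.append((result, op, *args))
    return result

bindings = []
to_ssa(("print", ("-", ("+", -1, ("/", ("*", 5, ("+", 2, 2)), 2)), 8)), bindings)
for b in bindings:
    print(b)
```

Innermost expressions are emitted first, so the `("+", 2, 2)` leaf gets the first temporary and `print` the last.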
The next step will be to actually write out the corresponding LLVM IR. The rest of lisp.bb is satisfyingly compact. Operators (we have five, but more can easily be added) are just a map of symbols to tiny bits of LLVM code:
(def ops
{'* #(mul :i32 %1 %2)
'+ #(add :i32 %1 %2)
'/ #(div :i32 %1 %2)
'- #(sub :i32 %1 %2)
'print
#(call "i32 (i8*, ...)"
:printf
[:i8* :as_ptr]
[:i32 (sigil %1)])})
Similar to our Forth implementation, but even more compact, the main Babashka function, after a brief setup for printf, generates a series of SSA instructions.
(defn main [[path]]
(when path
(let [assignments (->> path
slurp
edn/read-string
convert-to-ssa)
outfile (->> path
fs/file-name
fs/split-ext
first)
ir (module
(external-fn :i32 :printf :i8*, :...)
(def-global-const-str :fmt_str "%d\n")
(def-fn :i32 :main []
(assign :as_ptr
(gep (fixedarray 4 :i8)
(star (fixedarray 4 :i8))
(sigil :fmt_str)
[:i64 0]
[:i64 0]))
;; Interpolate SSA instructions / operator invocations:
(apply els
(for [[reg op & args] assignments
:let [op-fn (ops op)]]
(if-not op-fn
(throw (ex-info "bad operator" {:op op}))
(assign reg (apply op-fn args)))))
(ret :i32 0)))]
(compile-to outfile ir))))
(main *command-line-args*)
Putting these parts together (see lisp.bb on GitHub), we have:
$ ./lisp.bb example.lisp
$ ./example
1
It, too, is small and fast:
$ time ./example
1
real 0m0.006s
user 0m0.001s
sys 0m0.003s
$ ls -al ./example
-rwxr-xr-x 1 jacobsen staff 33432 Aug 16 20:52 ./example
To say this is a "working Lisp compiler" at this point would be grandiose (we still need functions, lists and other collection types, eval, macros, ...) but we have developed an excellent foundation to build upon.
To summarize, the strategy we have taken is as follows:
- Use a high level language (in our case, Babashka/Clojure) to parse input and translate into LLVM IR;
- When needed, write and generate small C programs to understand the equivalent IR to generate.
- Compile the IR to small, fast binaries using clang.
Alternatives and Future Directions
I should note that C itself has long been used as an intermediate language, and we could have used it here instead of LLVM IR; I don't have a strong sense of the tradeoffs involved yet, but wanted to take the opportunity to learn more about LLVM for this project.
LLVM is interesting to me because of the modularity of its toolchain; it also provides a JIT compiler which allows one to build and execute code at run-time. We didn't investigate tooling for that here (it needs deeper LLVM language bindings than the homegrown Babashka code I used), but it could provide a way to do run-time compilation similar to what SBCL (a Common Lisp implementation which can compile functions at run-time) does.
Here are some directions I'm considering going forward:
- Try interfacing with external libraries, e.g. a bignum library;
- Implement more Forth functionality;
- Implement more Lisp, possibly including a significant subset of l1;
- Try a JIT-based approach, possibly using Rust as the host language.
Conclusion
Whenever possible, I want to make small, fast programs, and I like playing with and creating small programming languages. LLVM provides a fascinating set of tools and techniques for doing so, and using Babashka to make small front-ends for IR generation turns out to be surprisingly effective, at least for simple languages.
Marco Antoniotti — Helping HEΛP Again! ... and Again!
@2024-08-04 11:01 · 88 days ago
In the heat of the summer (the coolest summer of the next ones), it is never a good thing to get an email from Xach telling you that "something does not compile on SBCL". In this case the issue was the usual fascist STYLE-WARNING that prevented a clean build on Quicklisp.
The fix was relatively easy, but it led to a number of extra changes to properly test the fix itself.
Bottom line, a new version of HEΛP is available at helambdap.sf.net. Soon in Quicklisp as well.
Stay cool, hydrated and enjoy.
(cheers)
Tim Bradshaw — The abominable shadow
@2024-08-01 10:55 · 91 days ago
Most uses of shadow and shadowing-import in Common Lisp packages point to design problems.
Let’s assume you are designing a language which is going to be a variant of CL: most of it will be just CL, but perhaps some things will be different. For example, let’s imagine that you want if to have a mandatory else clause. You might start by designing your package like this:
(defpackage :my-language
(:use :cl)
(:shadow #:if)
(:export #:if))
(in-package :my-language)
...
(defmacro if (test then else)
`(cl:if ,test ,then ,else))
...
That all seems fine, right? Well, not so much. Consider for a minute people who want to use your language. They need to write something like this:
(defpackage :my-language-user-package
(:use :cl :my-language)
(:shadowing-import-from :my-language #:if))
(in-package :my-language-user-package)
...
‘Oh well’, you say, ‘that’s not so bad’. Well, now let’s say you want to add a version of cond to your language which understands else and otherwise. So:
(defpackage :my-language
(:use :cl)
(:shadow #:if #:cond)
(:export #:if #:cond #:else))
(in-package :my-language)
...
(defmacro if (test then else)
`(cl:if ,test ,then ,else))
(defmacro cond (&body clauses)
`(cl:cond
,@(mapcar (lambda (clause)
(if (and (consp clause)
(member (first clause) '(else otherwise)))
`(t ,@(rest clause))
clause))
clauses)))
...
And now every user of your language has to modify their package definitions:
(defpackage :my-language-user-package
(:use :cl :my-language)
(:shadowing-import-from :my-language #:if #:cond))
(in-package :my-language-user-package)
...
I’ll say that again: every user of your language has to modify their package definitions every time you enhance it in a way which is not compatible with CL.
That … sucks. It’s an absolutely terrible design. Wouldn’t it be nice if it could be avoided?
It can. Rather than shadowing symbols, you can instead construct the packages you actually would like to exist. In the example above what you probably want people to be able to do is to say
(defpackage :my-language-user-package
(:use :my-language))
and have that work, even when your language changes. So, you need the MY-LANGUAGE package to export most of the symbols from CL as well as a few of its own. You can do this by hand:
(defpackage :my-language
(:use)
(:export #:if #:cond #:else)
(:import-from :cl
cl:&allow-other-keys ...)
(:export
cl:&allow-other-keys ...))
Where the :import-from and the second :export clause specify all the symbols from CL except those which are replaced by ones defined by your language.
Note the empty :use clause: this avoids symbol clashes and therefore the need to shadow things.
You can then either define your language in this package or in an implementation package which uses it: the package has imported all of the external symbols from CL other than the ones it overrides, so it doesn’t need to use the CL package at all.
The benefit of doing things this way is that it means that every user of this system doesn’t have to care about the details of it and isn’t forced to change their code because of implementation changes. That’s worth it, even though writing the defpackage forms is laborious: you should do the work, not every user of your system.
Of course, in real life you would not have to remember the names of all the symbols you are reexporting: you’d write a program to do it for you. You’d write, in fact, a macro.
Well, other people have already done that for you; in particular, I did this in 1998 when I decided that this idea was interesting. Other people have since done similar things, I think, and may have done so before me, but I will describe my version: conduit packages. In particular I’ll mostly describe the functionality exported from the ORG.TFEB.CONDUIT-PACKAGES/DEFINE-PACKAGE package, which doesn’t replace macros like defpackage and functions like export, but rather provides functionality under different names.
The basic notion is that packages can be conduits for one or more other packages: they serve to gather together and reexport subsets of the exported names from the packages for which they are conduits. define-package lets you define conduit packages easily, and define-conduit-package is even more specialised to the task.
Here is how you would define the package above:
(define-package :my-language
(:use)
(:export #:if #:cond #:else)
(:extends/excluding :cl
#:if #:cond))
or with define-conduit-package:
(define-conduit-package :my-language
(:export #:if #:cond #:else)
(:extends/excluding :cl
#:if #:cond))
Now you can quite happily define your language as before.
This works, of course, even if your package wants to extend other packages whose exports might change in a way that CL’s are unlikely to do any time soon: the symbols to import & reexport are computed based on the state of the package system at the time the form is evaluated. In some cases — if the package you are extending is itself known to the system — the packages will be dynamically recomputed:
> (define-package :foo
(:export #:one))
#<The FOO package, 0/16 internal, 1/16 external>
> (define-conduit-package :bar
(:extends :foo))
#<The BAR package, 0/16 internal, 1/16 external>
> (do-external-symbols (s :bar (values)) (print (symbol-name s)))
"ONE"
> (define-package :foo
(:export #:one #:foo))
Warning: Using DEFPACKAGE to modify #<The FOO package, 0/16 internal, 1/16 external>.
#<The FOO package, 0/16 internal, 2/16 external>
> (do-external-symbols (s :bar (values)) (print (symbol-name s)))
"FOO"
"ONE"
And thus was the abominable shadow cast into the outer darkness.
A remaining question is: are there good uses for shadowing? Well, conduit packages itself uses them in its implementation package, mostly because I was too lazy to write the code which would explicitly map over CL. And there must, I suppose, be other good uses, but it’s very hard to think of them. The other common case, where you want to use two packages which export the same names, is dealt with by simply using a conduit of course.
I think it’s worth remembering that when the CL package system was initially defined, people didn’t really understand how such a thing should work. MACLISP didn’t have a package system, Lisp Machine Lisp probably did (certainly Zetalisp did), but there was no great experience with what a package system should be like. Indeed the first CL version didn’t have defpackage: instead you had to construct packages by hand, and there were all sorts of weirdnesses in the way the compiler handled make-package and other package functions (or you had to use eval-when all over the place).
Finally, when I wrote conduit packages I was still thinking that packages were big expensive objects, because in the late 1980s they were, and I hadn’t yet realised that this was no longer true. In the late 1980s a big workstation on which you ran CL might have had 16MB of memory. Today laptops have perhaps a thousand times as much memory: data structures which ate a lot of precious memory in 1990 don’t any more. So I think, today, it’s appropriate to use packages in a fairly fine-grained way: having a few extra packages really is not hurting you very much.
So here is another way to define the little language above.
First, define a conduit for CL which exports just the symbols you want:
(define-conduit-package :my-language/cl
(:extends/excluding :cl
#:if #:cond))
Now define the implementation package for the language: this exports the new symbols:
(define-package :my-language/impl
(:use :my-language/cl)
(:export
#:if #:cond #:else))
Now, finally, define the public package, which is a conduit for both MY-LANGUAGE/CL and MY-LANGUAGE/IMPL:
(define-conduit-package :my-language
(:extends :my-language/cl :my-language/impl))
This is absurd overkill in this tiny example, but for real examples, where there might be several implementation packages, it lets you split things up in a nice way, while not burdening your users with lots of tiny packages.
Joe Marshall — Continuation passing style resource management
@2024-07-31 18:05 · 92 days ago
One good use of continuation passing style is to manage dynamic resources. The resource allocation function is written in continuation passing style and it takes a callback that it invokes once it has allocated and initialized the resource. When the callback exits, the resource is uninitialized and deallocated.
(defun call-with-resource (receiver)
  (let ((resource nil))
    (unwind-protect
        (progn
          (setq resource (allocate-resource))
          (funcall receiver resource))
      (when resource
        (deallocate-resource resource)))))

;; example usage:
(call-with-resource
 (lambda (res)
   (do-something-with res)))

;;; In Lisp, we would provide a convenient WITH- macro
(defmacro with-resource ((resource) &body body)
  `(CALL-WITH-RESOURCE (LAMBDA (,resource) ,@body)))

;; example usage:
(with-resource (res)
  (do-something-with res))
This pattern of usage separates and abstracts the resource usage from the resource management. Notice how the unwind-protect is hidden inside call-with-resource so that the user of the resource doesn’t have to remember to deallocate the resource.
The with-resource macro is idiomatic to Lisp. You obviously can’t provide such a macro in a language without macros, but you can still provide the call-with-resource function.
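To illustrate, here is the pattern in Python (a sketch with stand-in allocate/deallocate functions, not taken from the post): the callback shape survives even though the with-resource macro cannot:

```python
events = []

def allocate_resource():
    events.append("alloc")
    return "resource"

def deallocate_resource(resource):
    events.append("dealloc")

def call_with_resource(receiver):
    # Allocate, hand the resource to the callback, and always
    # deallocate on the way out, even if the callback raises.
    resource = allocate_resource()
    try:
        return receiver(resource)
    finally:
        deallocate_resource(resource)

print(call_with_resource(lambda res: f"used {res}"))  # → used resource
```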
Continuation passing style for resource management can be used in other languages, but it often requires some hairier syntax. Because call-with-resource takes a callback argument, it is actually a higher-order function. The syntax for passing higher-order functions in many languages is quite cumbersome. The return value of the callback becomes the return value of call-with-resource, so the return type of the callback must be compatible with the return type of the function. (Hence the type of call-with-resource is actually parameterized on the return value of the callback.) Languages without sophisticated type inference may balk at this.
Another advantage of the functional call-with-resource pattern is that you can dynamically select the resource allocator. Here is an example. I want to resolve git hashes against a git repository. The git repository is large, so I don’t want to clone it unless I have to. So I write two resource allocators: CallCloningGitRepository, which takes the URL of the repository to clone, and CallOpeningGitRepository, which takes the pathname of an already cloned repository. The "cloning" allocator will clone the repository to a temporary directory and delete the repository when it is done. The "opening" allocator will open the repository and close it when it is done. The callback that will be invoked won’t care which allocator was used.
Here is what this looks like in golang:
// Invoke receiver with a temporary directory, removing the directory when receiver returns.
func CallWithTemporaryDirectory(dir string, pattern string, receiver func(dir string) any) any {
    dir, err := os.MkdirTemp(dir, pattern)
    CheckErr(err)
    defer os.RemoveAll(dir)
    return receiver(dir)
}

// Invoke receiver with open git repository.
func CallOpeningGitRepository(repodir string, receiver func(string, *git.Repository) any) any {
    repo, err := git.PlainOpen(repodir)
    if err != nil {
        log.Fatal(err)
    }
    return receiver(repodir, repo)
}

// Invoke receiver with a cloned git repository, removing the repository when receiver returns.
func CallCloningGitRepository(dir string, pattern string, url string, receiver func(tmpdir string, repo *git.Repository) any) any {
    if url == "" {
        return nil
    }
    return CallWithTemporaryDirectory(
        dir, pattern,
        func(tempdir string) any {
            log.Print("Cloning " + url + " into " + tempdir)
            repo, err := git.PlainClone(tempdir, true, &git.CloneOptions{
                Auth: &gitHttp.BasicAuth{
                    Username: username,
                    Password: password,
                },
                URL:      url,
                Progress: os.Stdout,
            })
            CheckErr(err)
            log.Print("Cloned.")
            return receiver(tempdir, repo)
        })
}
You specify a repository either with a URL or a pathname. We select the appropriate resource allocator based on whether the specifier begins with "https".
func RepositoryGetter(specifier string) func(receiver func(_ string, repo *git.Repository) any) any {
    if strings.EqualFold(specifier[0:5], "https") {
        return GetRemoteGitRepository(specifier)
    } else {
        return GetLocalGitRepository(specifier)
    }
}

func GetRemoteGitRepository(url string) func(receiver func(_ string, repo *git.Repository) any) any {
    return func(receiver func(_ string, repo *git.Repository) any) any {
        return CallCloningGitRepository("", "git", url, receiver)
    }
}

func GetLocalGitRepository(repodir string) func(receiver func(_ string, repo *git.Repository) any) any {
    return func(receiver func(_ string, repo *git.Repository) any) any {
        return CallOpeningGitRepository(repodir, receiver)
    }
}
To open a repository, we call RepositoryGetter(specifier) to get a getRepository function. Then we invoke the computed getRepository function on a receiver callback that accepts the local pathname and the repository:
getRepository := RepositoryGetter(specifier)
return getRepository(
    func(_ string, repo *git.Repository) any {
        // resolve git hashes against the repository
        ....
        return nil
    })
If given a URL, this code will clone the repo into a temporary directory and open the cloned repo. If given a pathname, it will just open the repo at the pathname. It runs the callback and does the necessary cleanup when the callback returns.
The biggest point of confusion in this code (at least to me) are the type specifiers of the functions that manipulate the resource allocators. Static types don’t seem to mix well with continuation passing style.
Scott L. Burson — FSet 1.4.0 released (repost)
@2024-07-16 05:11 · 107 days ago
[Reposting so it will show up at the top of Planet Lisp]
Greetings FSet users,
For several years I was too busy to do much with Common Lisp, but having left my last job a few months ago, I am now working on a project in CL. I'm using FSet, of course, and so I've been reminded that it needed some TLC; there were some bugs to fix, and the documentation was very old and possibly hard to find. So I've put some time into it and prepared a new release.
The first thing I did was to review all the commits Paul Dietz made back in 2020. These were more extensive than I had realized; he greatly expanded the test suite and fixed a number of bugs. I have tried to thank him for his work, but he seems to have retired from GrammaTech; I have not been able to reach him. If anyone is in touch with him, please convey my thanks.
One bug Paul noticed but didn't fix, probably because he thought someone might be depending on the current behavior, was that compare on maps and seqs was not comparing the default; if two maps or seqs had the same contents but different defaults, they would nonetheless be reported as equal. There is indeed a chance of breaking existing code by fixing this, but I think it's small; in any case, I've decided to risk it — the behavior was clearly a bug.
The only other possibly breaking change I've made is to revamp the APIs of list-relation and query-registry. I wrote these classes some time ago, specifically for the project I was working on (and have now resumed); they're not well documented, and I'll be surprised if anyone is using them, especially in the case of query-registry. If I'm wrong and you are using them, post a comment and I'll explain how to convert your code, if it's not obvious. (I did remove some methods from query-registry that I was no longer using; I can restore them if necessary.)
I've also collected the FSet documentation into one place, and freshened it a little.
As part of this work I have also updated Misc-Extensions, which contains some macros that I like to use (and are used in FSet). In particular, I made some improvements to GMap, my iteration macro (we all have our own iteration macros, right?), and wrote a README for the system, that should make it a lot easier for people to see what's in it.
Joe Marshall — Monkeys vs. Shakespeare
@2024-07-10 16:21 · 113 days ago
Google's golang was designed for mediocre programmers that aren't “capable of understanding a brilliant language”. It omits features that are hard to understand or hard to use, like conditional expressions, macros, exceptions, and object systems.
Lisp, on the other hand, was originally designed for smart programmers in order to research artificial intelligence. It contains language constructs like conditional expressions, a powerful macro system, a highly customizable error handling system, and a sophisticated object system.
A million monkeys typing on a million keyboards will eventually produce the works of Shakespeare. Lisp is a language for Shakespeares, while golang is a language for monkeys.
What is the fundamental problem we are solving? If the problem is simply an insufficient amount of software, then we need to write as much software as possible as rapidly as possible. Hire more monkeys. If the problem is gainfully employing all the monkeys quickly, give them crude tools that are easy to use. But if the problem is lack of quality software, then we need the tools to write the best software possible. Hire more Shakespeares and give them powerful tools.
Hiring monkeys is easier and cheaper than hiring Shakespeares. So why not just keep hiring monkeys until you have a Shakespeare equivalent? Experience shows how well this works. Just think of all the projects that were brought in on time and under budget when they doubled the number of monkeys. Just think of any project that was brought in on time and under budget when they doubled the number of monkeys. Surely there must be one?
Joe Marshall — A YouTuber's First Look at Lisp
@2024-07-04 14:25 · 119 days ago
There is a guy who is making a series of videos where he takes a “first look” at various programming languages. I watched his “first look” at Lisp. Now he wasn’t going into it completely cold — he had looked at Scheme previously and had seen some Lisp — but it was still an early experience for him.
He gave it a good go. He didn’t have a pathological hatred for parenthesis and seemed willing to give it a fair shake. He downloaded SBCL and started playing with it.
Since he only had allocated an hour to it, he didn't cover much. But he encountered a few pitfalls that Lisp newbies often fall into. I found myself wanting to talk back at the screen and point out where he was going wrong when simple typos were causing errors.
For example, he was trying to format some output, but he forgot the closing double quote on the format string. The reader kept waiting for him to finish the string and simply gobbled up the format args and closing parens as part of the string. He was confused as to why nothing was happening. When he finally typed Control-C, he was faced with a spew of stack trace and a debugger prompt that was of no help to him.
A second problem he encountered was when he typed (print (first (*mylist*))). Any experienced Lisp hacker will obviously see the problem right away: *mylist* is not a function. But the error message was buried in a 30-level-deep stack trace that wound back through the reader and REPL and completely obscured the problem.
The Lisp debugger is amazing, but it can be a bit overwhelming to a newbie. When I was TAing Lisp classes, I would often find that students were intimidated by the debugger. They felt that while they were in the debugger there was some sort of impending doom hanging over them and that they had to exit and get back to the normal REPL as quickly as possible. I think if I were teaching a Lisp course now, I would start off by deliberately causing errors and then showing students how to use the debugger to find the problem and recover from it.
PLT Scheme has an interesting “beginner” mode that disables some of the more advanced features of Lisp. Many typos in Lisp cause errors not because they are invalid, but because they mean something advanced, but unintended. For example, the beginner mode disables first-class functions, so accidentally putting in extra parenthesis becomes a syntax error instead of an attempt to funcall something inappropriate. Perhaps a “training wheels” approach would work with Lisp beginners.
But SBCL generates pretty good error messages. It was a case of TLDR. The reviewer just didn't take the time to read and understand what the error message said. He was too busy trying to figure out how to get back to the REPL.
All in all, the reviewer gave it a good go and came away with a positive impression of Lisp. He also took the time to research the language and provided some examples of where Lisp is used today. The video is at https://www.youtube.com/watch?v=nVhzHu2zSEQ if you are interested.
Marco Antoniotti — Helping HEΛP Again!
@2024-06-27 08:47 · 126 days ago
In a flurry of ... free time, I also went back to HEΛP and fixed a few bugs that were exposed by some of the things I did with CLAST. Recording the documentation strings from the pesky
(setf (documentation 'foo 'function) "Foo Fun!")
is now working as expected, at least at top level and within PROGN-like constructs, e.g., EVAL-WHEN.
Meanwhile, I also updated the documentation and the web page, adding a few caveats about how to run the DOCUMENT function, and how to work around issues I have seen in my (not so) extensive tests.
Of course, I put my money where my mouth is: the HEΛP documentation web pages are built with HEΛP.
(cheers)
Marco Antoniotti — CLAST reworked
@2024-06-26 21:16 · 126 days ago
Prompted by a post on one of the various Common Lisp fora, I finally got my act together and went back to CLAST, i.e., the Common Lisp Abstract Syntax Tree library that I had in the works for ... some time.
The library has an interesting origin, which I will recount in a different post. Suffice to say that eventually I needed a code walker which did a few complicated things. NIH syndrome immediately kicked in.
The main thing I needed was functions for inspecting code, as in the example below.
cl-prompt> (clast:find-free-variables '(let ((x 42)) (+ x y)))
(Y)
To achieve this (apparently) simple goal, a (mostly) portable environment library had to be developed and a full set of AST node structures had to be provided.
The result is now finally ready for prime time. In Lispworks you can also see how things actually get parsed. As an example, the picture below shows the result of the following command.
cl-prompt> (clast:parse '(loop for i in '(1 2 3)
count (oddp (+ qd i)) into odds))
#<LOOP-FORM 24ECE77F>
NIL
Please try it, report bugs, blast my design choices and suggest improvements.
Thank you
(cheers)
Joe Marshall — Goto not that harmful
@2024-06-26 17:26 · 127 days ago
I use goto all the time. When I have a loop, I goto the top of the loop after each iteration. When a function delegates to another function, I just goto the other function.
Gotos that just transfer control have the problem that the context is implicit. It isn’t obvious from the code what parts of the context are expected to persist across the control transfer. But if you explicitly pass arguments along with your control transfer, then you can see exactly what is carried across the control transfer.
A tail call isn’t “optimized” to a goto, it is a goto. It is a goto that passes arguments. Gotos that pass arguments aren’t harmful.
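The point can be sketched in Common Lisp (a hypothetical example with a made-up name, MY-GCD; note that the standard does not require tail-call elimination, though most implementations perform it at sufficient optimization settings):

```lisp
;; Euclid's algorithm written as a tail call.  The self-call in tail
;; position is a goto back to the top of the "loop" that explicitly
;; passes along the only context that survives: A and B.
(defun my-gcd (a b)
  (if (zerop b)
      a
      (my-gcd b (mod a b))))  ; a goto that passes arguments

(my-gcd 48 18) ; => 6
```

Nothing is implicit across the transfer: everything the next iteration needs is right there in the argument list.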
Paolo Amoroso — Adding an Exec command and File Browser support to Insphex
@2024-06-25 08:54 · 128 days agoI implemented the last features originally planned for Insphex, my hex dump tool in Common Lisp for Medley Interlisp.
The first new feature is an Exec command for invoking the program. The command HD works the same way as the function INSPHEX:HEXDUMP and accepts the same arguments, a file name and an optional boolean flag to indicate whether the output should go to a separate window:
← HD FILENAME [NEWIN-P]
The other feature is the addition to the File Browser menu of the Hexdump command, which shows the hex dump of the selected files in as many separate windows:
For other commands that produce output in windows the File Browser lets the user view one window at a time, with menu options for skipping through the windows. Insphex doesn't do anything so elaborate though.
Implementing the features was easy as the relevant Interlisp APIs are well documented and I have experience with adding an Exec command to Stringscope.
The Medley Lisp library modules manual covers the File Browser API from page 115 of the PDF, with the explanation of how to add commands on page 118. It's as simple as registering a callback function that the command invokes, INSPHEX::FB-HEXDUMP for Insphex.
An issue I bumped into is that instead of 4 arguments as the manual says, the callback actually requires 5. The last, undocumented argument was likely introduced since the publication of the manual.
#insphex #CommonLisp #Interlisp #Lisp
Joe Marshall — Embrace the Suck
@2024-06-24 14:20 · 129 days agoThe key point here is our programmers are Googlers, they're not researchers. They're typically fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They're not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt.
— Rob Pike
How sad. Google used to hire top-notch programmers. Now it appears that they have hired a bunch of mediocre programmers who can't even learn how to properly use exceptions or ternary conditionals. Programmers that need “training wheels” on their C.
And how insulting to Googlers to be told that they are not capable of understanding a brilliant language. To be seen as merely a fungible resource to be “used” to build software. I'm sure some of them are capable of understanding a brilliant language. I'm sure that some are capable of learning how to use Haskell or OCaml or Clojure (or F# or Rust or Erlang or Scala), whatever language is best for the task.
Does success come to those that strive for excellence or those that embrace mediocrity?
Scott L. Burson — On the time complexity of functional collections
@2024-06-20 02:27 · 133 days agoClojure made functional collections popular. Rich Hickey, its inventor, deserves a lot of credit for that. However, he also propagated an inaccurate way of describing their time complexity on several common operations such as looking up a key in a map. I don't know exactly what phrase he used at first, but I've seen people describe the time complexity of these operations as "near-constant" or "effectively constant", or sometimes shouting: "effectively constant". He also seems to have originated the practice I see in the Clojure community of speaking as if the base of the logarithm mattered: "O(log32 n)". (The "32" should be a subscript, but I don't see an affordance for subscripts in this Blogger UI.)
All of these locutions are wrong. The only correct way to describe the time complexity of the operations in question is as "O(log n)" or "logarithmic time" ("log time" for short). Time complexity describes how the time to perform the operation grows as the size of the input (in this case, the collection) grows without bound. Because the Hash Array-Mapped Trie (HAMT) — the very clever data structure invented by Phil Bagwell — is a tree, the worst-case time to access an element in the tree must be proportional to the depth of the tree, which is proportional to the logarithm of the number of elements (provided that the tree is balanced, which it will be if the hash function is well distributed). The base of the logarithm is the radix (branching factor) of the tree, which in Clojure's case is 32, but this has no bearing on its time complexity; as everyone knows, logarithms of different bases differ only by a constant factor, and big-O notation ignores constant factors.
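To make the constant-factor point concrete, here is a minimal sketch (TREE-DEPTH is a hypothetical helper for illustration, not Clojure code): the depth of a balanced radix-32 tree over n elements is ⌈log32 n⌉, which is just the base-2 logarithm scaled by a constant of 1/5 — the same logarithmic growth:

```lisp
;; Depth of a balanced radix-R tree over N elements: ceiling(log_R N).
;; Changing the radix only rescales the logarithm by a constant factor,
;; which big-O notation ignores.
(defun tree-depth (n &optional (radix 32))
  (ceiling (log n radix)))

(tree-depth 1000)       ; => 2
(tree-depth 1000000)    ; => 4
(tree-depth 1000000000) ; => 6
```

The depth keeps growing as n grows without bound; a wide radix merely makes it grow slowly.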
I think part of what is going on here is a bit of confusion between the time complexity of an algorithm and its real-world performance. Consider this sentence from Hickey's HOPL 2020 paper, A History of Clojure:
Performance was excellent, more akin to O(1) than the theoretical bounds of O(logN).
You don't find the time complexity of an algorithm by measurement, but by analyzing the algorithm. While it's not 100% clear, this sentence certainly gives the impression that he didn't quite understand that.
Let me speculate a little. The performance of a lookup on a map, implemented as an HAMT, using string keys, has two components: the time to hash the key, and the time to walk the HAMT, using the hash value, to find the map entry containing that key. I'm going to guess that for the string keys that Rich tried in his testing, the tree-walking time was less than or comparable to the string-hashing time up to a depth of maybe 3 or 4, or maybe larger. 32^4 is 1,048,576, which might be larger than any map he tested; so it's entirely plausible that he just didn't test any collections large enough to see the logarithmic behavior emerge.
If that's right, it certainly speaks well for the performance of the HAMT design. Let me acknowledge at this point that Rich also had a marketing problem to deal with: he had to convince potential Clojure users that its functional collections would not make their programs unusably slow. O(1) or "near-constant" certainly sounds better than O(log n). I can understand the temptation he faced.
But again: time complexity is about how the time grows as the size of the input grows without bound. And clearly, in this case, there will be some point at which the tree-walking time will begin to be larger than the hashing time. This will happen sooner for short keys than long ones, and soonest if the keys are integers hashed by the identity function (or maybe by folding a 64-bit integer into a 32-bit hash; probably one or two instructions). But it will happen.
— That is, it will happen as long as the algorithm doesn't run out of hash bits. Clojure uses a 32-bit hash; since each tree level consumes 5 bits, that gives it 6.4 levels. As the tree starts to fill up, the number of collisions will begin to become significant. I'm not sure what Clojure does with collisions. Bagwell suggested rehashing to obtain more bits, but I don't know that Clojure does that; it might just do linear search over collision buckets. In the latter case, the time complexity would actually be linear (O(n)) rather than logarithmic; the linear behavior won't begin to emerge until the map has billions of entries, but again, time complexity isn't about that.
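The five-bits-per-level arithmetic is easy to see in a sketch (HAMT-INDICES is a hypothetical illustration, not Clojure's implementation): slicing a 32-bit hash into per-level indices leaves only two usable bits for a seventh level.

```lisp
;; Split a 32-bit hash into 5-bit chunks, one index per tree level.
;; Six full levels consume 30 bits; the seventh gets only the 2
;; remaining bits -- hence "6.4 levels" before the hash is exhausted.
(defun hamt-indices (hash)
  (loop for level from 0 below 7
        collect (ldb (byte 5 (* level 5)) hash)))

(hamt-indices #xFFFFFFFF) ; => (31 31 31 31 31 31 3)
```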
The other point worth making here is that while time complexity is an important fact about the performance of an algorithm, it is not the only important fact. The amount of time it takes on small instances can also matter; depending on the use case, it can be more important than the time complexity. There are algorithms in the CS literature (called "galactic algorithms"; TIL!) which have state-of-the-art time complexity, but are not used in practice because their constant factors are too large (I guess in practice this means they have complicated initializations to perform before getting to the meat of the computation).
None of this is intended as a criticism of Hickey's choice of HAMTs for Clojure. The only reason FSet doesn't use HAMTs is that I wasn't aware of their existence when I was writing it. Probably I will rectify this at some point, though that's not a trivial thing to do because the change can't be perfectly compatible; FSet's trees are comparison-based, while HAMTs are hash-based, requiring a change to how user-defined classes are interfaced to the library. Still, I expect HAMTs would be substantially faster in many applications.
Joe Marshall — Decode a Float (Solution)
@2024-06-16 19:19 · 137 days agoWe can multiply or divide a floating point number by 2 without changing the bits in the mantissa. So we can rescale the number into the range [1, 2) by repeatedly multiplying or dividing by 2. The leftmost bit of the mantissa is always 1, but the next bit determines whether the number is in the top half of the range or the bottom half. So if the number is greater than or equal to 1.5, the next bit is 1; otherwise it is 0.
If the number is in the range [1, 2), we can subtract 1 from it. The remaining bits will be shifted left until the leftmost bit is 1. If the number is greater than or equal to 1.5, then subtracting 1 will shift the bits left by one. But if the number is in [1, 1.5), we won’t know how many zero bits will be shifted out when we subtract 1. What we’ll do is add .5 to the number to turn on the next bit, then subtract 1 to shift the bits left by one.
Here is the code in Common Lisp:
(defun float->bits (float)
  (cons 1 (read-bits float 0)))

(defun read-bits (float count)
  (cond ((>= count 52) nil)
        ;; Rescale into [1, 2); the test must be >= so that exact
        ;; powers of two (e.g. 2.0d0 itself) are also divided down.
        ((>= float 2.0d0) (read-bits (/ float 2.0d0) count))
        ((< float 1.0d0) (read-bits (* float 2.0d0) count))
        ;; Top half of [1, 2): the next bit is 1.
        ((>= float 1.5d0) (cons 1 (read-bits (- float 1.0d0) (1+ count))))
        ;; Bottom half: add .5 to set the next bit, then subtract 1.
        (t (cons 0 (read-bits (- (+ float .5d0) 1.0d0) (1+ count))))))
Note that all these operations are exact and do not cause round off.
Joe Marshall — Decode a Float
@2024-06-16 00:50 · 137 days agoThe leftmost bit of any positive binary number is always 1. So if you were to left-justify a positive binary number, the top bit would always be 1. If the top bit is always 1, there is no need to implement it. Floating point numbers use this trick.
You can determine the bits of an integer using only arithmetic operations by repeatedly dividing by two and collecting the remainders. Today’s puzzle is to determine the bits of a floating point number using only arithmetic operations (no decode-float or integer-decode-float).
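The integer warm-up mentioned above can be sketched directly (INTEGER->BITS is a hypothetical helper for illustration): divide by two and collect the remainders, least significant bit first.

```lisp
;; Bits of a non-negative integer, least significant first,
;; using only division and remainder.
(defun integer->bits (n)
  (if (zerop n)
      '()
      (cons (mod n 2) (integer->bits (floor n 2)))))

(integer->bits 6) ; => (0 1 1)
```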
For older items, see the Planet Lisp Archives.
Last updated: 2024-10-28 00:00